  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

The design of vague spatial data warehouses

Siqueira, Thiago Luís Lopes 07 December 2015 (has links)
Made available in DSpace on 2016-06-02T19:04:00Z (GMT). No. of bitstreams: 1 6824.pdf: 22060515 bytes, checksum: bde19feb7a6e296214aebe081f2d09de (MD5) Previous issue date: 2015-12-07 / Universidade Federal de Minas Gerais / Spatial data warehouses (SDWs) and spatial online analytical processing (SOLAP) enhance decision making by enabling spatial analysis combined with multidimensional analytical queries. An SDW is an integrated and voluminous multidimensional database containing both conventional and spatial data. SOLAP allows querying SDWs with multidimensional queries that select spatial data satisfying a given topological relationship and that aggregate spatial data. Existing SDW and SOLAP applications mostly consider phenomena represented by spatial data with exact locations and sharp boundaries. They neglect the fact that spatial data may be affected by imperfections, such as spatial vagueness, which prevents distinguishing an object from its neighborhood. A vague spatial object does not have a precisely defined boundary and/or interior. Thus, it may have a broad boundary and a blurred interior, and is composed of parts that certainly belong to it and parts that possibly belong to it. Although several real-world phenomena are characterized by spatial vagueness, no approach in the literature addresses spatial vagueness in the design of SDWs or provides multidimensional analysis over vague spatial data. These shortcomings motivated this doctoral thesis, which addresses both vague spatial data warehouses (vague SDWs) and vague spatial online analytical processing (vague SOLAP). A vague SDW is an SDW that comprises vague spatial data, while vague SOLAP allows querying vague SDWs. The major contributions of this doctoral thesis are: (i) the Vague Spatial Cube (VSCube) conceptual model, which enables the creation of conceptual schemata for vague SDWs using data cubes; (ii) the Vague Spatial MultiDim (VSMultiDim) conceptual model, which enables the creation of conceptual schemata for vague SDWs using diagrams; (iii) guidelines for designing relational schemata and integrity constraints for vague SDWs, and for extending the SQL language to enable vague SOLAP; and (iv) the Vague Spatial Bitmap Index (VSB-index), which improves the performance of query processing against vague SDWs. The applicability of these contributions is demonstrated in two case studies in the agricultural domain, by creating conceptual schemata for vague SDWs, transforming these conceptual schemata into logical schemata, and efficiently processing queries over vague SDWs.
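To make the notion concrete, the sketch below illustrates the certain/possible split that vague spatial objects introduce and the kind of topological filter-and-aggregate query that vague SOLAP targets. It is a toy illustration only, not the thesis's VSCube model or VSB-index; it assumes the shapely library, and all region names and coordinates are invented.

```python
# A toy illustration (not the thesis's VSCube model or VSB-index): a vague
# spatial object split into a part that certainly belongs to it and a part
# that only possibly belongs to it, plus a SOLAP-style query that filters by
# a topological predicate and aggregates area.
# Assumes the shapely library: pip install shapely
from shapely.geometry import box

# Hypothetical crop plots affected by a pest; coordinates are invented.
vague_regions = {
    "plot_A": {"certain": box(0, 0, 4, 4),     "possible": box(0, 0, 6, 6)},
    "plot_B": {"certain": box(12, 12, 14, 14), "possible": box(9, 9, 15, 15)},
}

query_window = box(3, 3, 11, 11)  # topological predicate: intersects this window

certain_area = possible_area = 0.0
for name, region in vague_regions.items():
    if region["certain"].intersects(query_window):
        certain_area += region["certain"].area    # certainly satisfies the query
    elif region["possible"].intersects(query_window):
        possible_area += region["possible"].area  # only possibly satisfies it

print(f"area certainly affected: {certain_area}")   # 16.0
print(f"area possibly affected:  {possible_area}")  # 36.0
```

Reporting a certain answer and a possible answer separately, as here, is the usual way query results over vague spatial data are presented.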
412

Análise de desempenho de consultas OLAP espaço-temporais em função da ordem de processamento dos predicados convencional, espacial e temporal / Performance analysis of spatio-temporal OLAP queries as a function of the processing order of the conventional, spatial, and temporal predicates

Joaquim Neto, Cesar 08 March 2016 (has links)
Made available in DSpace on 2016-10-20T19:31:09Z (GMT). No. of bitstreams: 1 DissCJN.pdf: 5948964 bytes, checksum: e7e719e26b50a85697e7934bde411070 (MD5) Previous issue date: 2016-03-08 / No funding received / By providing ever-growing processing capabilities, many database technologies have become important support tools for enterprises and institutions. The need to include (and control) new data types in existing database technologies has brought new challenges and research areas, giving rise to spatial, temporal, and spatio-temporal databases. In addition, new analytical capabilities were required, leading to the birth of data warehouse technology and, once more, to the need to include spatial or temporal data (or both), thus originating spatial, temporal, and spatio-temporal data warehouses. The queries used with each database type also evolved, culminating in STOLAP (Spatio-Temporal OLAP) queries, which are composed of predicates over conventional, spatial, and temporal data and whose execution can be aided by specialized index structures. This work investigates how the execution of each predicate affects the performance of STOLAP queries by varying the indexes used, the predicates' execution order, and the queries' selectivity. Bitmap join indexes help execute the conventional predicates and some portions of the temporal processing, which also relies on plain SQL queries for some of the alternatives evaluated in this research. The SB-index and HSB-index aid the spatial processing, while the STB-index is used to process temporal and spatial predicates together. The expected result is an analysis of the best predicate execution orders, also taking query selectivity into account. Another contribution of this work is the evolution of the HSB-index into a hierarchized version called the HSTB-index, which complements the execution options.
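The effect the dissertation measures can be previewed with a toy experiment: evaluating the predicates in different orders changes how many rows each predicate must touch. The sketch below is not the author's benchmark; the predicates, data, selectivities, and cost model (one unit per row tested) are invented for illustration.

```python
# A toy experiment (not the dissertation's benchmark): why predicate order
# matters in STOLAP queries. Cost is counted as one unit per row tested;
# predicates, data, and selectivities are invented.
import random

random.seed(42)
rows = [
    {
        "store_type": random.choice(["outlet", "mall", "street"]),  # conventional
        "x": random.uniform(0, 100), "y": random.uniform(0, 100),   # spatial
        "year": random.randint(2000, 2015),                         # temporal
    }
    for _ in range(10_000)
]

def conventional(r): return r["store_type"] == "outlet"                   # ~1/3 pass
def spatial(r):      return (r["x"] - 50) ** 2 + (r["y"] - 50) ** 2 < 25  # few pass
def temporal(r):     return r["year"] >= 2010                             # ~1/3 pass

def run(predicates):
    tests, surviving = 0, rows
    for pred in predicates:
        tests += len(surviving)                  # every surviving row is tested
        surviving = [r for r in surviving if pred(r)]
    return tests, len(surviving)

print("most selective first: ", run([spatial, conventional, temporal]))
print("least selective first:", run([temporal, conventional, spatial]))
# Same result set either way, but far fewer predicate evaluations when the
# highly selective (spatial) predicate runs first. Real systems must also
# weigh that spatial tests are individually costlier, as the work discusses.
```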
413

Desenvolvimento de indicadores da manufatura enxuta utilizando ferramentas de business intelligence : uma aplicação na manufatura de calçados / Development of lean manufacturing indicators using business intelligence tools: an application in footwear manufacturing

Escodeiro, José Roberto 27 February 2009 (has links)
Made available in DSpace on 2016-06-02T19:51:39Z (GMT). No. of bitstreams: 1 2385.pdf: 3560559 bytes, checksum: 71e6c5c7aca71d06463f1bc0bc063005 (MD5) Previous issue date: 2009-02-27 / In recent decades, companies around the world have implemented the concepts of Lean Manufacturing (LM) as a strategic objective for their business. As lean manufacturing is implemented, the need for continuous improvement follows, and with it come several sources and a growing volume of historical data originating from its processes. Whenever these historical data are discarded for any reason without generating indicators, the opportunity to transform data into strategic information is missed. This situation exposes the need for an adequate performance measurement system for lean manufacturing. Understanding this need for constant monitoring of LM performance as a strategic factor for companies to remain competitive in the market, this work develops, through Information Technology (IT) and Business Intelligence (BI) tools, a proposal for managing the LM performance indicators present in shoe production, as an alternative to improve decision making. To support the work, the literature on LM, Information Systems (IS), BI, performance indicators, and shoe manufacturing is reviewed, starting from the core LM concept of cost reduction through the total elimination of waste. This leads to the seven waste groups of LM, which, together with other published works, serve as a guide and focus for identifying the indicators. With the indicators defined and the data collection strategy set, the next step is modeling and developing the data load in a dimensional format, prepared for the use of BI tools such as On-line Analytical Processing (OLAP). Finally, the application is tested in a shoe factory by loading the overproduction waste indicator. The final result is a series of analyses of overproduction waste information, as well as an assessment of the proposed method's ease of use, flexibility, and practical viability in companies.
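The overproduction indicator mentioned above can be pictured with a minimal dimensional aggregation. The sketch below assumes the pandas library; the fact rows and the indicator definition (produced minus demanded, floored at zero) are invented stand-ins for the thesis's actual data model.

```python
# A minimal sketch with made-up numbers (not the thesis's data model): the
# overproduction-waste indicator computed and rolled up OLAP-style over a
# product-line dimension. Assumes pandas: pip install pandas
import pandas as pd

fact = pd.DataFrame({
    "date":     pd.to_datetime(["2008-11-03", "2008-11-03",
                                "2008-11-10", "2008-11-10"]),
    "line":     ["sneakers", "boots", "sneakers", "boots"],
    "produced": [1200, 800, 1500, 700],
    "demanded": [1000, 820, 1100, 650],
})

# Overproduction waste: units produced beyond demand, floored at zero.
fact["overproduction"] = (fact["produced"] - fact["demanded"]).clip(lower=0)

# One roll-up among many an OLAP tool would offer (by line, by date, ...).
print(fact.groupby("line")["overproduction"].sum())
```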
414

Uso do data mining no estabelecimento de relacionamentos entre medidas de desempenho / Using data mining to establish relationships between performance measures

Custodio, Flavio Augusto 30 September 2004 (has links)
Made available in DSpace on 2016-06-02T19:52:06Z (GMT). No. of bitstreams: 1 DissFAC.pdf: 1641656 bytes, checksum: 3e48b5a2633d9ec682a617bdd738dac7 (MD5) Previous issue date: 2004-09-30 / Universidade Federal de Sao Carlos / This work proposes a method for analyzing the relationships between performance measures in a performance measurement system, using historical performance data stored in a data warehouse or operational data store. A known problem in the performance measurement field is the lack of methods for building models of the relationships between performance measures. The methods found in the academic literature do not address how to build such relationships from historical performance data. Moreover, there is a tendency to model the expected relationships so that performance measurement reflects the desired future; yet it is equally valuable to learn from what has already been done, that is, from past actions. With the increasing complexity of organizations, it is very difficult to handle historical performance data and identify relationship patterns without the concepts, techniques, and tools of the Information Technology (IT) field. Since the number of variables in performance measurement models keeps growing, it is important to understand the complex net of relationships between performance measures in an organization. People in organizations tend to see these relationships as trivial, but the relationships they articulate are partial and personal, built around the variables for whose performance they are, in most cases, themselves accountable. Ideally, most decision makers would share the same model of relationships between performance measures, and that model would be as comprehensive as possible. This work is therefore relevant because it applies the data mining approach to help build a method for establishing relationships between performance measures from historical performance data. It thus becomes possible to formalize and communicate the relationships between performance measures to a wider range of people in the organization, improving the use of performance measurement models. The proposed method covers the whole process of building and finding relationships between performance measure data using data mining techniques, not just the application of one specific technique. The IDEF0 notation is used to present the approach.
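As a rough illustration of the data mining step only (the proposed method is a full IDEF0-described process), the sketch below mines candidate relationships from synthetic historical measures using a plain correlation matrix; the measure names and dependencies are invented.

```python
# A rough sketch of the data mining step (not the proposed IDEF0-based method
# itself): mining candidate relationships between performance measures from
# historical data, here via a plain correlation matrix. Measure names and
# dependencies are invented. Assumes numpy and pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 200  # hypothetical historical periods

setup_time = rng.normal(30, 5, n)                          # minutes
defect_rate = 0.002 * setup_time + rng.normal(0, 0.01, n)  # driven by setup
on_time_delivery = 0.99 - 1.5 * defect_rate + rng.normal(0, 0.01, n)

measures = pd.DataFrame({
    "setup_time": setup_time,
    "defect_rate": defect_rate,
    "on_time_delivery": on_time_delivery,
})

# Strong off-diagonal values flag candidate relationships to validate with
# domain experts before adding them to the shared measurement model.
print(measures.corr().round(2))
```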
415

Sistema de análise das origens e destinos de produtos do Polo Industrial de Manaus / A system for analyzing the origins and destinations of products of the Manaus Industrial Pole

Bezerra, Alessandro de Souza 09 September 2011 (has links)
Made available in DSpace on 2016-02-18T19:38:49Z (GMT). No. of bitstreams: 1 Dissertação - Alessandro de Souza Bezerra.pdf: 3433195 bytes, checksum: 8b7faa5ebd0666364fa0996a4c75d757 (MD5) Previous issue date: 2011-09-09 / No funding information provided / According to SUFRAMA (2011), the Industrial Pole of Manaus (PIM) comprises more than 300 industries and generates more than 100,000 direct and indirect jobs, standing as the primary model of economic and social development in northern Brazil. Since the creation of the Manaus Free Trade Zone in 1967, one of its main objectives has been to increase its competitiveness. The scarcity of strategic information on logistics operations is an aggravating factor in the slow process of planning and developing solutions that effectively increase the model's competitiveness. This work presents the development of a system for analyzing the origins and destinations of PIM products, by mapping and identifying the locations of origin and destination of the products manufactured in the PIM. The project identifies the data sources related to the origins and destinations of PIM products, develops a data warehouse (DW) environment capable of maintaining this information, and provides access to it through a DW query system available on a website. The result of this research is a system that can assist in transport-infrastructure planning studies and in a range of other research that requires data on freight flows in the PIM.
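The central DW question such a system answers can be sketched as an origin-destination roll-up. The example below assumes pandas, and the flow records are invented; the real system works over a DW populated from the identified data sources.

```python
# A minimal sketch with invented records (not the system's real data sources):
# an origin-destination roll-up of freight flows for PIM products.
# Assumes pandas: pip install pandas
import pandas as pd

flows = pd.DataFrame({
    "origin":      ["Manaus", "Manaus", "Manaus", "Manaus"],
    "destination": ["São Paulo", "São Paulo", "Recife", "Buenos Aires"],
    "product":     ["TV", "motorcycle", "TV", "TV"],
    "tons":        [120.0, 340.5, 80.2, 55.0],
})

# Tonnage shipped per origin-destination pair, the kind of figure a
# transport-infrastructure planning study would start from.
print(flows.pivot_table(index="origin", columns="destination",
                        values="tons", aggfunc="sum", fill_value=0))
```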
416

Key Success Factors in Business Intelligence

Adamala, Szymon, Cidrin, Linus January 2011 (has links)
Business Intelligence can bring critical capabilities to an organization, but the implementation of such capabilities is often plagued with problems and issues. Why is it that certain projects fail while others succeed? The theoretical problem and the aim of this thesis is to identify the factors that are present in successful Business Intelligence projects and to organize them into a framework of critical success factors. A survey was conducted during the spring of 2011 to collect primary data on Business Intelligence projects. It was directed at a number of professionals operating in the Business Intelligence field in large enterprises, primarily located in Poland and primarily vendors; but given the similarity of Business Intelligence initiatives across countries and the increasing globalization of large enterprises, the conclusions of this thesis may well be relevant and applicable to projects conducted in other countries. The findings confirm that Business Intelligence projects wrestle with both technological and non-technological problems, but the non-technological problems are found to be harder to solve and more time-consuming than their technological counterparts. The thesis also shows that critical success factors for Business Intelligence projects differ from success factors for IS projects in general, and that Business Intelligence projects have critical success factors unique to the subject matter. Major differences are predominantly found in the non-technological factors, such as the presence of a specific business need to be addressed by the project and a clear vision to guide it. Results show that successful projects exhibit specific factors more frequently than unsuccessful ones. Factors with the greatest differences are the type of project funding, the business value provided by each iteration of the project, and the alignment of the project with a strategic vision for Business Intelligence. Furthermore, the thesis provides a framework of critical success factors that, according to the results of the study, explains 61% of the variability in project success. Given these findings, managers responsible for introducing Business Intelligence capabilities should focus on a number of non-technological factors to increase the likelihood of project success. Areas that should be given special attention are: making sure that the Business Intelligence solution is built with end users in mind, that it is closely tied to the company's strategic vision, and that the project is properly scoped and prioritized to concentrate on the best opportunities first. Keywords: Critical Success Factors, Business Intelligence, Enterprise Data Warehouse Projects, Success Factors Framework, Risk Management
417

Faktorer som styr val av data warehouse arkitektur : Inmon vs. Kimball / Factors that control the choice of data warehouse architecture : Inmon vs. Kimball

Karim, Mona January 2017 (has links)
Data warehouse (DW) solutions are becoming increasingly popular to implement in organizations. There are a variety of motivating factors for organizations to acquire a DW; among other things, it supports analysis and decision making in the business, helping it gain, for example, competitive advantage. DW projects are expensive and require substantial resources from the organization. Yet more and more DW projects fail or turn out not to be optimal for the business purposes. Before an organization implements a DW, it is important to choose an architecture to build on, that is, a data model defining how data is stored and structured in the DW. The two dominant architecture models are the top-down approach that Inmon presented in the 1990s and the bottom-up approach that Kimball introduced after Inmon announced his model. Both Inmon and Kimball have their own philosophies and models for how a DW solution should be designed, and organizations face the dilemma of choosing one approach or the other. The choice depends on many factors and considerations, and there are also significant philosophical debates, obstacles, and pros and cons surrounding the choice of data warehouse architecture (Lawyer & Chowdhury, 2004). This study therefore identifies the factors that govern the choice of architecture through a literature study comparing these models. The results show that Inmon's top-down approach handles factors such as data quality, integration of data from different source systems, flexibility, metadata management, and the handling of data sources and ETL better than Kimball's bottom-up approach. Kimball's architecture model focuses more on factors such as performance and end-user interaction. The results also show that Kimball's model is simpler and faster to implement.
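The architectural contrast at the heart of the comparison can be made concrete with two tiny schemas: a normalized core (Inmon-style) and a denormalized star (Kimball-style). The sketch below uses Python's built-in sqlite3 module; all table and column names are invented.

```python
# A compact sketch of the structural contrast compared in the study; all
# table and column names are invented. Uses Python's built-in sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")

# Inmon, top-down: a normalized enterprise model comes first; analysis-ready
# data marts are derived from it later.
con.executescript("""
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE address  (address_id  INTEGER PRIMARY KEY,
                       customer_id INTEGER REFERENCES customer(customer_id),
                       city TEXT);
CREATE TABLE sale     (sale_id     INTEGER PRIMARY KEY,
                       customer_id INTEGER REFERENCES customer(customer_id),
                       amount REAL, sold_at TEXT);
""")

# Kimball, bottom-up: a star schema built directly for analysis; the
# dimension is denormalized (city sits on the customer dimension row).
con.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE fact_sales   (customer_key INTEGER REFERENCES dim_customer(customer_key),
                           date_key INTEGER, amount REAL);
""")

print([r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])
```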
418

Design of Data Warehouse and Business Intelligence System : A case study of Retail Industry

Oketunji, Temitope, Omodara, Olalekan January 2011 (has links)
The Business Intelligence (BI) concept continues to play a vital role in enabling managers to make sound business decisions that address the needs of the organization. BI applications allow managers to query, comprehend, and evaluate existing data within their organizations in order to obtain functional knowledge that helps them make better-informed decisions. The data warehouse (DW) is pivotal and central to BI applications in that it integrates several diverse data sources, mainly structured transactional databases. However, current research in BI suggests that data is no longer found only in structured databases; it can also be pulled from unstructured sources to make managers' analyses more powerful. Consequently, the ability to manage this information is critical to the success of the decision-making process. The operational data needs of an organization are addressed by online transaction processing (OLTP) systems, which are important to the day-to-day running of its business. Nevertheless, they are not well suited to sustaining decision-support queries or the business questions that managers typically need to address. Such questions involve analytics, including aggregation, drill-down, and slicing/dicing of data, which are best supported by online analytical processing (OLAP) systems. Data warehouses support OLAP applications by storing and maintaining data in multidimensional format. Data in an OLAP warehouse is extracted and loaded from multiple OLTP data sources (including DB2, Oracle, SQL Server, and flat files) using Extract, Transform, and Load (ETL) tools. This thesis seeks to develop a DW and BI system to support the decision makers and business strategists at Crystal Entertainment in making better decisions using historical structured and unstructured data.
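The OLTP-to-OLAP pipeline described above can be summarized in a toy ETL run: extract from an operational table, transform, and load a small star schema that answers an aggregate query. The sketch uses Python's built-in sqlite3 module; the schema, names, and data are invented and do not reflect the Crystal Entertainment case.

```python
# A toy end-to-end ETL run (names, schema, and data are invented): extract
# rows from an OLTP-style table, transform them, and load a small star schema
# that answers an OLAP-style roll-up. Uses Python's built-in sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_id INTEGER, customer TEXT, product TEXT,
                     qty INTEGER, unit_price REAL, order_date TEXT);
INSERT INTO orders VALUES
  (1, 'Ann', 'lamp',  2, 19.9, '2011-03-01'),
  (2, 'Bob', 'chair', 1, 89.0, '2011-03-01'),
  (3, 'Ann', 'lamp',  5, 19.9, '2011-03-02');

CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product TEXT UNIQUE);
CREATE TABLE fact_sales  (product_key INTEGER, order_date TEXT, revenue REAL);
""")

# Extract, then transform (derive revenue) and load dimension + fact rows.
for _id, _cust, product, qty, price, date in con.execute(
        "SELECT * FROM orders").fetchall():
    con.execute("INSERT OR IGNORE INTO dim_product(product) VALUES (?)", (product,))
    (pkey,) = con.execute("SELECT product_key FROM dim_product WHERE product = ?",
                          (product,)).fetchone()
    con.execute("INSERT INTO fact_sales VALUES (?, ?, ?)", (pkey, date, qty * price))

# A decision-support query the OLTP schema serves poorly but the star serves well.
for row in con.execute("""SELECT p.product, SUM(f.revenue)
                          FROM fact_sales f JOIN dim_product p USING (product_key)
                          GROUP BY p.product"""):
    print(row)  # ('chair', 89.0), ('lamp', 139.3)
```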
419

A BPMN-based conceptual language for designing ETL processes

El Akkaoui, Zineb 27 June 2014 (has links)
Business Intelligence (BI) is the set of techniques and technologies that support the decision-making process by providing an aggregated insight on data in the organization. Given the wealth of potentially useful data held by the events and applications running in the organization, the BI market calls for new technologies able to exploit this data for analysis wherever it is available. In particular, Extract, Transform, and Load (ETL) processes, the fundamental BI technology responsible for integrating and cleansing organization data, must respond to these requirements.

However, the development of ETL processes is still considered very complex and time-consuming, to such a point that roughly 80% of the BI project effort is dedicated to ETL development. Among the phases of the ETL development life cycle, ETL modeling is a critical and laborious task. This phase produces the first formal representation of the ETL process, i.e., the ETL model, which is reused and refined in the subsequent phases of development.

Typically, ETL processes are modeled using vendor-specific ETL tools from the very beginning of development. However, these tools are unsuitable for business users since they induce overwhelming, fine-grained models.

As an attempt to provide more appropriate tools for business users, vendor-independent ETL modeling languages have been proposed in the literature. Nevertheless, they remain immature. To get a precise view of these languages, we conduct a survey which: i) defines a set of criteria associated with major ETL requirements identified in the literature; ii) compares the surveyed conceptual languages, issued from research work, with the physical languages, issued from prominent ETL tools; and iii) studies the whole ETL development methodologies associated with these modeling languages.

The analysis of our survey reveals several drawbacks in responding to the ETL requirements. In particular, the conceptual languages have incomplete elements for ETL modeling, with little or no formalization. Several languages are only descriptive, with no ability to be automatically implemented as executable code, nor can they be automatically maintained as sources change over time.

To address these shortcomings, we present, in this thesis, a novel approach that tackles the whole development life cycle of ETL processes.

First, we propose a new vendor-independent language for modeling ETL processes in the same way as typical business processes, i.e., the processes responsible for managing the operations of an organization. The rationale behind this proposal is to give ETL processes better access to data in the events and applications of the organization, including fresh data, and better design capabilities, such as analysis available to any user. By using the standard representation mechanism known as BPMN (Business Process Modeling and Notation) and a classification of ETL elements resulting from a study of the most widely used commercial and open-source ETL tools, the language enables building agile and full-fledged ETL processes. We name our language BPMN4ETL, for BPMN for ETL processes.

Second, we build a model-driven framework that provides automatic code-generation capability and improves the maintenance support of our ETL language. We use Model-Driven Development (MDD) technology, as it helps in developing software, particularly in automating the transformation from one phase of software development to another. We present a set of model-to-text transformations able to produce code for different business process engines and ETL engines. We also describe the model-to-model transformations that automatically update the ETL models, with the aim of supporting maintenance of the generated code as the data sources evolve. A demonstration using a case study is conducted as an initial validation to show that the framework, covering modeling, implementation, and maintenance, can be used in practice.

To illustrate the new concepts introduced in the thesis, mainly the BPMN4ETL language and the implementation and maintenance framework, we use a case study based on the fictitious Northwind Traders company, a retailer that imports and exports foods from around the world. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
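The model-to-text idea can be illustrated with a deliberately tiny transformation: a declarative ETL model rendered into executable SQL text. This is not the BPMN4ETL metamodel or its actual transformations; the model structure and task vocabulary below are invented.

```python
# A deliberately tiny model-to-text transformation, illustrating the MDD idea
# only; this is not the BPMN4ETL metamodel or its actual transformations.
# The model structure and task vocabulary are invented.
etl_model = [
    {"task": "extract", "source": "staging.orders"},
    {"task": "filter",  "condition": "amount > 0"},
    {"task": "load",    "target": "dw.fact_orders"},
]

def to_sql(model):
    """Render a declarative ETL model as executable SQL text (one target)."""
    source = next(t["source"] for t in model if t["task"] == "extract")
    target = next(t["target"] for t in model if t["task"] == "load")
    where = " AND ".join(t["condition"] for t in model
                         if t["task"] == "filter") or "1=1"
    return f"INSERT INTO {target} SELECT * FROM {source} WHERE {where};"

print(to_sql(etl_model))
# When the source model changes, regenerating keeps the produced code in
# sync: the maintenance property that model-to-model updates aim to preserve.
```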
420

Ochrana osobních údajů v kontextu datového skladu / Protection of personal data in the context of a data warehouse

Tůma, David January 2017 (has links)
This diploma thesis deals with the protection of personal data in the Czech Republic, with links to European Union legislation. The main subject of the thesis is a detailed analysis of the current legal standards and the identification of the requirements resulting from the adopted changes. The identified changes are then assessed in practical terms from the data warehouse perspective, and a practical solution is presented for each of them.
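The abstract names no specific techniques; as one assumed example of such a practical solution, a common measure when personal data reaches a data warehouse is pseudonymization via a keyed hash, sketched below under that assumption. The key handling and identifiers are hypothetical.

```python
# An assumed example, not taken from the thesis: pseudonymization replaces a
# direct identifier with a keyed hash so rows still join on a stable key
# without exposing the identity. Uses only the Python standard library.
import hashlib
import hmac

SECRET_KEY = b"example-key"  # hypothetical; in practice, keep in a key vault

def pseudonymize(identifier: str) -> str:
    """HMAC-SHA256 keyed hash of a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

row = {"national_id": "800101/1234", "purchases": 7}
row["national_id"] = pseudonymize(row["national_id"])
print(row)  # the warehouse stores only the pseudonym
```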
