About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
451

入口網站會員特性模式之分析與行銷策略之制訂—以國內某入口網站為例 / The Analysis of Characteristics of a Portal Site's Members and the Making of Selling Tactics: A Case Study of Taiwan's Portal Site

林佩璇, Lin, Pei-Shiung Unknown Date (has links)
With the arrival of the information era, the number of Internet users has grown rapidly, and the resulting flood of customer data has made it difficult for companies to respond to their customers. Moreover, the growing popularity of portal sites and the emergence of new market niches have raised users' demand for specialized websites. With the coming of the new marketing era, technology marketing has shifted from mass marketing toward one-to-one, personalized marketing. Against this background, data warehousing technology has emerged in recent years as a way to create genuine competitive advantage in an age that emphasizes competition and speed, and data warehousing and Customer Relationship Management (CRM) are becoming indispensable tools for corporate development and competition. This research uses data warehousing and related technologies to segment the Internet market and to provide marketing recommendations. It defines an analysis model for the characteristics of a portal site's members, derives analysis results from that model, and then combines those results with the services in a classification table of portal-site functions and stages, and with the marketing model defined in the study, to formulate final recommendations on marketing strategy. Domestic research on data mining to date has concentrated on finance, banking, and insurance; no related research has yet appeared for portal sites. Furthermore, with the Internet flourishing and portal sites competing fiercely, how to raise the member registration rate and how to retain registered members are two important issues for website operators. This research therefore hopes to offer Internet businesses a new direction and inspiration, and to bring data mining techniques into the Internet domain, starting with a simple demonstration based on a preliminary analysis of website member characteristics.
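To make the segmentation step concrete, here is a minimal sketch of data-mining-based member segmentation, assuming hypothetical member features (visit frequency, session length, services used) and a standard clustering algorithm; the thesis's actual analysis model is not reproduced here.

```python
# A minimal sketch of member segmentation as described in the abstract above.
# The feature names and cluster count are illustrative assumptions, not the
# thesis's actual model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical member profile: visits/week, minutes/session, services used
members = np.array([
    [21.0, 35.0, 6],
    [2.0,  5.0,  1],
    [14.0, 20.0, 4],
    [1.0,  8.0,  2],
    [18.0, 40.0, 7],
    [3.0,  6.0,  1],
])

# Standardize features so no single scale dominates the distance metric
X = StandardScaler().fit_transform(members)

# Partition members into segments; each segment gets its own marketing tactic
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for member, segment in zip(members, kmeans.labels_):
    print(f"profile={member} -> segment {segment}")
```

Each resulting segment could then be matched against the portal's function/stage classification to choose a marketing tactic, which is the pairing the abstract describes.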
452

Att få en syn på datalagret : Visualisering som stöd för analytikers datalagerarbete / Getting a View of the Data Warehouse : supporting analysts through data warehouse visualization

Pettersson, Karin January 2005 (has links)
Data warehouses are used to give companies a unified picture of their operations, a picture built up from analysts' statistical calculations and models. Analysts work in data warehouses through various analysis tools, and are limited by those tools' ability to convey an understanding of the warehouse's structure and function, and by the possibilities for finding the right analysis data. Finding and analyzing data is an iterative problem-solving process aimed at producing the desired result. Visualizations can serve as a tool in this work and support users' decision making. This qualitative case study investigates how visualization can support market and credit analysts' data warehouse work. The study used user-centered methods to examine analysts' work in a data warehouse. Fifteen knowledge tasks were identified as goals for visualization support in analysts' data warehouse work. An analysis-oriented and a system-oriented structural proposal for visualizations were evaluated against these knowledge tasks as weighted goals. The most important of the knowledge tasks is to connect analysis tasks to system structure. This requires the visualization support to offer an analysis-oriented structure initially and to become increasingly system-oriented as the set of relevant information is defined. User-centered methods were used to identify the knowledge tasks, and the study shows that these knowledge tasks can serve as design goals for evaluating visualization support.
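The evaluation described above, weighing structural proposals against weighted knowledge tasks, can be illustrated with a small sketch. The task names, weights, and ratings below are invented for illustration; the study's fifteen actual knowledge tasks are not reproduced.

```python
# A hedged sketch of scoring two candidate visualization structures against
# weighted knowledge tasks. All numbers here are made up.
tasks = {
    "connect analysis tasks to system structure": 5,
    "locate the right analysis data":             4,
    "understand warehouse structure":             3,
}

# Ratings (0-10) of how well each proposal supports each task, in task order
proposals = {
    "analysis-oriented": [9, 7, 5],
    "system-oriented":   [4, 6, 9],
}

for name, ratings in proposals.items():
    score = sum(w * r for w, r in zip(tasks.values(), ratings))
    print(f"{name}: weighted score {score}")
```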
453

Towards a business process model warehouse framework

Jacobs, Dina Elizabeth 31 March 2008 (has links)
This dissertation focuses on the re-use of business process reference models, made available in a business process model warehouse, to enable the definition of more comprehensive business requirements. It proposes a business process model warehouse framework that promotes the re-use of multiple business process reference models and the flexible visualisation of business process models. The critical success factor for such a framework is that it helps, at least to some extent, to minimise the causes of inadequate business requirements. The proposed framework is based on an analogy with a data warehouse framework and consists of the following components: the use of multiple business process reference models as source models; the conceptual design of a process to extract, load and transform multiple business process reference models into a repository; a description of the repository functionality needed to manage enterprise architecture artefacts; and a motivation of flexible visualisation of business process models to ensure more comprehensive business requirements. / Computer Science (School of Computing) / M.Sc. (Information Systems)
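As a rough illustration of the data-warehouse analogy, the sketch below models the extract-load-transform path for reference models into a repository. All class and function names are assumptions made for this example, not the dissertation's design.

```python
# A minimal sketch of the analogy described above: reference models are
# extracted from several sources, transformed into a common form, and loaded
# into a repository for later re-use and visualisation.
from dataclasses import dataclass

@dataclass
class ReferenceModel:
    source: str   # e.g. the industry framework the model came from
    name: str
    steps: list   # simplified stand-in for a full process graph

def extract(sources):
    """Pull reference models from each source."""
    return [model for src in sources for model in src]

def transform(model):
    """Normalize models into the repository's common representation."""
    return ReferenceModel(model.source, model.name.lower(),
                          [s.strip() for s in model.steps])

repository = {}

def load(models):
    """Store transformed models, keyed for later re-use."""
    for m in map(transform, models):
        repository[(m.source, m.name)] = m

load(extract([[ReferenceModel("ITIL", "Incident Management",
                              [" detect ", " triage ", " resolve "])]]))
print(repository)
```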
454

Newsminer: um sistema de data warehouse baseado em texto de notícias / Newsminer: a data warehouse system based on news websites

Nogueira, Rodrigo Ramos 12 May 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Data and text mining applications that manage Web data have been the subject of recent research. In every case, data mining tasks need to work on clean, consistent, and integrated data to obtain the best results; Data Warehouse environments are therefore a valuable source of clean, integrated data for data mining applications. Data Warehouse technology has evolved to retrieve and process data from the Web. In particular, news websites are rich sources of text that can compose a linguistic corpus. By inserting such a corpus into a Data Warehousing environment, applications can take advantage of the flexibility that a multidimensional model and OLAP operations provide: navigation through the data, selection of the part of the data considered relevant, analysis at different levels of abstraction, and aggregation, disaggregation, rotation and filtering over any set of data. This work presents Newsminer, a data warehouse environment that provides a consistent and clean set of texts, in the form of a multidimensional corpus, for consumption by external applications and users. The proposal includes an architecture that integrates the gathering of news in near real time, a semantic enrichment module as part of the ETL stage, which adds semantic properties to the data such as the news category and POS-tagging annotation, and access to data cubes for consumption by applications and users. Two experiments were performed. The first selected the best news classifier for the semantic enrichment module; statistical analysis of the results indicated that the Perceptron classifier achieved the best F-measure with good computational time. The second collected data to evaluate real-time news preprocessing; for the data set collected, the results indicated that online processing time is achievable.
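A minimal sketch of the semantic-enrichment step described above, assuming a scikit-learn Perceptron for category classification and NLTK for POS tagging; the tiny training corpus is invented and stands in for the real news corpus.

```python
# A hedged sketch of ETL semantic enrichment: a Perceptron assigns a category
# to each incoming news text and a POS tagger annotates its tokens.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Perceptron

nltk.download("punkt", quiet=True)
# Tagger resource name varies across NLTK versions
nltk.download("averaged_perceptron_tagger", quiet=True)

train_texts = ["stocks fell sharply today", "the team won the final match",
               "central bank raised rates", "striker scored twice tonight"]
train_labels = ["economy", "sports", "economy", "sports"]

vectorizer = CountVectorizer()
clf = Perceptron().fit(vectorizer.fit_transform(train_texts), train_labels)

def enrich(text):
    """Add semantic properties (category, POS tags) to a news text."""
    category = clf.predict(vectorizer.transform([text]))[0]
    pos_tags = nltk.pos_tag(nltk.word_tokenize(text))
    return {"text": text, "category": category, "pos": pos_tags}

print(enrich("the goalkeeper saved a penalty"))
```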
455

Um framework de testes unitários para procedimentos de carga em ambientes de business intelligence / A unit testing framework for loading procedures in business intelligence environments

Santos, Igor Peterson Oliveira 30 August 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Business Intelligence (BI) relies on the Data Warehouse (DW), a historical data repository designed to support the decision-making process. Despite the potential benefits of a DW, data quality issues prevent users from realizing the benefits of a BI and Data Analytics environment. Problems related to data quality can arise at any stage of the ETL (Extract, Transform and Load) process, especially in the loading phase. This thesis presents an approach to automate the selection and execution of previously identified test cases for loading procedures in DW-based BI and Data Analytics environments. To verify and validate the approach, a unit test framework was developed; driven by metadata about the loading routines, it selects the test cases to apply and executes them automatically by generating initial states and analyzing the final states. The overall goal is to improve data quality; the specific aim is to reduce test effort and, consequently, to promote testing activities in the DW process. The experimental evaluation was performed through two controlled experiments in industry: the first investigated the adequacy of the proposed method for the development of DW procedures, and the second compared the method against a generic framework for the development of DW procedures. Both results showed that the approach clearly reduces test effort and coding errors during the testing phase in decision support environments.
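The test cycle the abstract describes (generate an initial state, run the loading procedure, analyze the final state) can be sketched with a plain unit test. The schema, data, and quality rule below are invented; the thesis's framework drives this cycle automatically from metadata.

```python
# A hedged sketch of a unit test for a loading procedure: build an initial
# database state, run the load, assert on the final state.
import sqlite3
import unittest

def load_customers(conn):
    """Toy ETL load step: copy staged rows into the dimension table,
    skipping rows with missing names (a simple quality rule)."""
    conn.execute("""INSERT INTO dim_customer(id, name)
                    SELECT id, name FROM stg_customer
                    WHERE name IS NOT NULL""")

class LoadProcedureTest(unittest.TestCase):
    def setUp(self):
        # Generate the initial state for this test case
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE stg_customer(id INT, name TEXT)")
        self.conn.execute("CREATE TABLE dim_customer(id INT, name TEXT)")
        self.conn.executemany("INSERT INTO stg_customer VALUES (?, ?)",
                              [(1, "Ana"), (2, None), (3, "Igor")])

    def test_load_filters_null_names(self):
        load_customers(self.conn)
        # Analyze the final state: only the two valid rows were loaded
        rows = self.conn.execute(
            "SELECT id, name FROM dim_customer ORDER BY id").fetchall()
        self.assertEqual(rows, [(1, "Ana"), (3, "Igor")])

if __name__ == "__main__":
    unittest.main()
```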
458

Uma abordagem para automatizar a manutenção do código de procedimentos de carga para ambientes de business intelligence / An approach to automating the maintenance of loading-procedure code for business intelligence environments

Costa, Juli Kelle Góis 27 August 2015 (has links)
Business Intelligence (BI) relies on the Data Warehouse (DW), a historical data repository designed to support the decision-making process. Without an effective Data Warehouse, organizations cannot extract, in acceptable time, the data required for the information analysis that enables more effective strategic, tactical, and operational actions. This thesis presents an approach and a Rapid Application Development (RAD) tool to increase the efficiency and effectiveness of creating and maintaining ETL (Extract, Transform and Load) programs, specifically the SQL loading procedures used in ETL processes, and examines the relation between the tool's use and the quality of the data moved, generated, and updated while a Data Warehouse is populated. The approach was evaluated through two controlled experiments, carried out in an industrial setting, that carefully assessed the efficiency and effectiveness of a tool encapsulating and automating part of the approach. The results indicate that the approach can indeed be used as a method for accelerating and improving the creation and maintenance of ETL processes.
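A minimal sketch of the metadata-driven idea behind such a RAD tool: regenerating a SQL loading statement from a metadata description, so maintenance happens in one place. The table names and metadata layout are assumptions for illustration, not the tool's actual design.

```python
# A hedged sketch of metadata-driven maintenance of SQL load code: the
# loading statement is rendered from metadata, so a schema change only
# requires editing the metadata, not every procedure by hand.
LOAD_METADATA = {
    "target": "dw.fact_sales",
    "source": "stg.sales",
    "columns": ["sale_id", "customer_id", "amount", "sale_date"],
    "filter": "amount IS NOT NULL",
}

def generate_load_sql(meta):
    """Render the INSERT ... SELECT loading procedure from metadata."""
    cols = ", ".join(meta["columns"])
    return (f"INSERT INTO {meta['target']} ({cols})\n"
            f"SELECT {cols}\n"
            f"FROM {meta['source']}\n"
            f"WHERE {meta['filter']};")

print(generate_load_sql(LOAD_METADATA))
```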
459

OLAP query optimization and result visualization / Optimisation de requêtes OLAP et visualisation des résultats

Simonenko, Ekaterina 16 September 2011 (has links)
In this thesis, we explore different aspects of Data Warehousing and OLAP, the common point of our proposals being the functional model for data analysis. Our main objective is to use that model in studying three different, but related, aspects: query optimization through rewriting and cache management; query result visualization; and the mapping of a relational BCNF schema to a functional schema. Query optimization and cache management are crucial issues in query processing in general, and in data warehousing in particular, and query rewriting is one of the basic techniques for query optimization. We establish derivability conditions for analytic functional queries, using a partial pre-order over the set of queries. We then provide a sound and complete rewriting algorithm, as well as an optimized cache management strategy, both based on the underlying functional model. A second important aspect that we explore is query result visualization. We show the importance of the visualization reflecting essential features of the dataset, such as functional dependencies, and that the connection between data and visualization is precisely the connection between their functional representations. We then define a framework whose objective is to establish such a connection for a given dataset and a set of visualizations. In addition to analyzing the visualization process, we use the functional data model as a guide for interactive visualization, and define what we call a parametric visualization. A third important aspect of our work is experimentation with the results obtained in the thesis. These results can be used to analyze the data contained in a Boyce-Codd Normal Form (BCNF) table, provided that the schema of the table can be mapped to a functional schema; we present such a mapping in this thesis. Once the relational schema has been transformed into a functional schema, the query optimization and result visualization results can be applied. We have used this transformation in the implementation of two prototypes in the context of two different projects.
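A hedged sketch of the rewriting idea: when a functional dependency (here, city -> region) holds, a query grouped on the coarser attribute is derivable from a cached finer-grained aggregate, so it can be answered from the cache instead of the warehouse. The data and names are invented.

```python
# Answering a coarser GROUP BY from a cached finer one. SUM is distributive,
# so the roll-up from city totals to region totals is exact.
from collections import defaultdict

# Cached result of: SELECT city, SUM(sales) ... GROUP BY city
cached_by_city = {"Paris": 120, "Lyon": 80, "Berlin": 150, "Munich": 60}

# Functional dependency city -> region (part of the functional schema)
region_of = {"Paris": "FR", "Lyon": "FR", "Berlin": "DE", "Munich": "DE"}

def rewrite_rollup(cached, mapping):
    """Derive SELECT region, SUM(sales) ... GROUP BY region from the cache
    instead of re-scanning the warehouse."""
    result = defaultdict(int)
    for city, total in cached.items():
        result[mapping[city]] += total
    return dict(result)

print(rewrite_rollup(cached_by_city, region_of))  # {'FR': 200, 'DE': 210}
```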
460

Datový sklad pro analýzu územněsprávních celků / Applying Business Intelligence tools for the Financial and Property Analysis of Municipalities

Horký, Martin January 2008 (has links)
This master's thesis deals with software support for the Financial and Property Analysis of Municipalities (FAMA) method, co-authored by Doctor Petr Toth from the Institute of Public Administration and Regional Development at the University of Economics in Prague. At present, the method is supported by a Microsoft Access database application and two C++ applications, ArisDestiller and Prognoza. The aim of the thesis is to evaluate the present software support for the Financial and Property Analysis and to implement an alternative solution based on Business Intelligence tools. A chosen part of the analysis is implemented in a pilot project in the Microsoft SQL Server 2005 environment. The thesis focuses mainly on using integration tools for XML data. Its contribution is the implementation of a data warehouse that uses the municipalities' annual statements in XML format from the database of the Ministry of Finance of the Czech Republic. The first half of the thesis discusses the Financial and Property Analysis method and the current software support; the second half is dedicated to the Business Intelligence pilot project. The conclusion reviews and compares both means of software support from various perspectives.
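As a rough illustration of the XML integration step, the sketch below parses a made-up municipal annual statement and loads it into a fact table. Element names and schema are assumptions; the pilot project itself used Microsoft SQL Server 2005 integration tools rather than this code.

```python
# A hedged sketch of loading XML municipal data into a warehouse table.
import sqlite3
import xml.etree.ElementTree as ET

XML_DOC = """
<statement municipality="Brno" year="2007">
  <item code="revenue" amount="1500000"/>
  <item code="expenditure" amount="1320000"/>
</statement>
"""

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fact_finance(
                    municipality TEXT, year INT, code TEXT, amount REAL)""")

# Extract rows from the XML document and load them into the fact table
root = ET.fromstring(XML_DOC)
rows = [(root.get("municipality"), int(root.get("year")),
         item.get("code"), float(item.get("amount")))
        for item in root.iter("item")]
conn.executemany("INSERT INTO fact_finance VALUES (?, ?, ?, ?)", rows)

print(conn.execute("SELECT * FROM fact_finance").fetchall())
```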
