31
Návrh metodiky testování BI řešení / Design of methodology for BI solutions testing. Jakubičková, Nela. January 2011.
This thesis deals with Business Intelligence and its testing. It seeks to highlight the differences from classical software testing and, finally, to design a methodology for BI solutions testing that could be used in practice on real projects of BI companies. The aim of the thesis is to design a methodology for BI solutions testing based on theoretical knowledge of Business Intelligence and software testing, with an emphasis on the specific characteristics and requirements of BI and in accordance with Clever Decision's requirements, and to test it in practice on a real project in this company. The thesis is written on the basis of a study of the literature on Business Intelligence and software testing from Czech and foreign sources, as well as on the recommendations and experience of Clever Decision's employees. It is one of the few, if not the first, sources dealing with a methodology for BI solutions testing in the Czech language, and it could also serve as a basis for more comprehensive BI testing methodologies. The thesis can be divided into a theoretical and a practical part. The theoretical part explains the purpose of Business Intelligence use in enterprises, elucidates the particular components of a BI solution, and then covers software testing itself and the various types of tests, with emphasis on the differences and specificities of Business Intelligence. The theoretical part is followed by the designed methodology for BI solutions testing, which uses a generic model of a BI/DW solution. The highlight of the practical part is the description of testing a real BI project in Clever Decision according to the designed methodology.
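A typical building block of such a methodology is an automated reconciliation test that compares aggregates in the source system against the loaded data warehouse. The sketch below illustrates the idea in Python, assuming a hypothetical SQLite source and warehouse; the table and column names (orders, fact_sales, amount) are invented, not taken from the thesis.

```python
import sqlite3

# Minimal sketch of an ETL reconciliation test: compare row counts and a
# summed measure between a source table and a warehouse fact table.
# All table/column names are invented; a real BI test suite would
# parameterize and extend these checks.

def fetch_one(conn, query):
    return conn.execute(query).fetchone()[0]

def reconcile(source_conn, dw_conn):
    checks = [
        ("row count",
         "SELECT COUNT(*) FROM orders",
         "SELECT COUNT(*) FROM fact_sales"),
        ("total amount",
         "SELECT COALESCE(SUM(amount), 0) FROM orders",
         "SELECT COALESCE(SUM(amount), 0) FROM fact_sales"),
    ]
    failures = []
    for name, src_sql, dw_sql in checks:
        src_val = fetch_one(source_conn, src_sql)
        dw_val = fetch_one(dw_conn, dw_sql)
        if src_val != dw_val:
            failures.append(f"{name}: source={src_val}, warehouse={dw_val}")
    return failures

if __name__ == "__main__":
    src = sqlite3.connect(":memory:")
    dw = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    dw.execute("CREATE TABLE fact_sales (id INTEGER, amount REAL)")
    src.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 5.5)])
    dw.executemany("INSERT INTO fact_sales VALUES (?, ?)", [(1, 10.0), (2, 5.5)])
    print(reconcile(src, dw) or "all checks passed")
```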
32
Měření výkonnosti podniku / Corporate Performance Measurement. Pavlová, Petra. January 2012.
This thesis deals with the application of Business Intelligence (BI) to support corporate performance management in ISS Europe, spol. s r. o. This company provides licences for and implements both original and third-party software products. First, an analysis is conducted in the given company, which then serves as the basis for the implementation of a BI solution interconnected with the company's strategies. The main goal is the implementation of a pilot BI solution to aid the monitoring and optimisation of corporate performance. Secondary goals include the analysis of related concepts, business strategy analysis, the identification of strategic goals and systems, and the proposal and implementation of a pilot BI solution. In its theoretical part, the thesis focuses on the analysis of concepts related to corporate performance and BI implementations, and briefly describes the company together with its business strategy. The following practical part builds on the theoretical findings. An analysis of the company is carried out using the Balanced Scorecard (BSC) methodology, the result of which is depicted in a strategy map. This methodology is then supplemented by the Activity Based Costing (ABC) analytical method, which assigns expenses to activities. The result is information about which expenses are linked to handling the individual development, implementation, and operational demands of particular contracts. This is followed by the proposal and implementation of a BI solution, which includes the creation of a data warehouse (DWH), the design of Extract, Transform and Load (ETL) and Online Analytical Processing (OLAP) systems, and the generation of sample reports. The main contribution of this thesis is in providing the company management with an analysis of company data from a multidimensional perspective, which can be used as a basis for prompt and correct decision-making, realistic planning, and performance and product optimisation.
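To make the ABC step concrete, the following sketch shows the basic allocation arithmetic: each activity's expense pool is converted into a cost-per-driver-unit rate, and contracts are charged by the driver units they consume. All activity names, pools, and figures are invented for illustration and do not come from the thesis.

```python
# Minimal Activity Based Costing sketch: each activity has its own cost
# pool; a per-driver-unit rate is derived, and contracts are charged by
# the driver units they consume. All names and figures are invented.

activity_pools = {         # total expense traced to each activity
    "development":    48_000.0,
    "implementation": 30_000.0,
    "operations":      9_000.0,
}
activity_driver_units = {  # total driver volume (e.g. hours) per activity
    "development": 600, "implementation": 250, "operations": 150,
}
contract_usage = {         # driver units each contract consumed
    "contract_A": {"development": 400, "implementation": 100, "operations": 50},
    "contract_B": {"development": 200, "implementation": 150, "operations": 100},
}

# Step 1: cost-per-driver-unit rate for every activity.
rates = {a: activity_pools[a] / activity_driver_units[a] for a in activity_pools}

# Step 2: trace activity costs to contracts according to their consumption.
for contract, usage in contract_usage.items():
    cost = sum(rates[a] * units for a, units in usage.items())
    print(f"{contract}: {cost:,.2f}")  # contract_A: 47,000.00, contract_B: 40,000.00
```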
33
Data marts as management information delivery mechanisms: utilisation in manufacturing organisations with third party distribution. Ponelis, S.R. (Shana Rachel). 06 August 2003.
Customer knowledge plays a vital part in organisations today, particularly in sales and marketing processes, where customers can be either channel partners or final consumers. Managing customer data and/or information across business units, departments, and functions is vital. Frequently, channel partners gather and capture data about downstream customers and consumers that organisations further upstream in the channel require to be incorporated into their information systems in order to allow for management information delivery to their users. In this study, the focus is placed on manufacturing organisations using third party distribution, since the flow of information between channel partner organisations in a supply chain (in contrast to the flow of products) provides an important link between organisations and increasingly represents a source of competitive advantage in the marketplace. The purpose of this study is to determine whether there is a significant difference in the use of sales and marketing data marts as management information delivery mechanisms in manufacturing organisations in different industries, particularly pharmaceuticals and branded consumer products. The case studies presented in this dissertation indicate that there are significant differences in the use of sales and marketing data marts between manufacturing industries, which can be ascribed to the industry, both directly and indirectly. / Thesis (MIS (Information Science))--University of Pretoria, 2002.
34
AL: Unified Analytics in Domain Specific Terms. Luong, Johannes; Habich, Dirk; Lehner, Wolfgang. 13 June 2022.
Data-driven organizations gather information on various aspects of their endeavours and analyze that information to gain valuable insights or to increase automation. Today, these organizations can choose from a wealth of specialized analytical libraries and platforms to meet their functional and non-functional requirements. Indeed, many common application scenarios involve the combination of multiple such libraries and platforms in order to provide a holistic perspective. Due to the scattered landscape of specialized analytical tools, this integration can result in complex and hard-to-evolve applications. In addition, the necessary movement of data between tools and formats can introduce a serious performance penalty. In this article we present a unified programming environment for analytical applications. The environment includes AL, a programming language that combines concepts of various common analytical domains. The environment also includes a flexible compilation system that uses a language-, domain-, and platform-independent intermediate representation to separate high-level application logic from physical organisation. We provide a detailed introduction to AL, establish our intermediate representation as a generally useful abstraction, and give a detailed explanation of the translation of AL programs into workloads for our experimental shared-memory processing engine.
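As a rough illustration of how an intermediate representation can separate high-level application logic from physical organisation (the paper's actual IR design is not reproduced here), the sketch below models an analytical program as a small operator tree that a backend interprets; the same tree could instead be lowered to code for another platform. All node and function names are invented.

```python
from dataclasses import dataclass
from typing import Callable, List

# Toy program intermediate representation: logical operators form a tree,
# and a backend decides how to execute them. Invented for illustration;
# this is not the IR described in the article.

@dataclass
class Scan:
    rows: List[dict]

@dataclass
class Select:
    child: object
    predicate: Callable[[dict], bool]

@dataclass
class Project:
    child: object
    columns: List[str]

def interpret(node):
    # A naive pull-based interpreter; a compiler backend could instead
    # translate the same tree into vectorized or platform-specific code.
    if isinstance(node, Scan):
        yield from node.rows
    elif isinstance(node, Select):
        yield from (r for r in interpret(node.child) if node.predicate(r))
    elif isinstance(node, Project):
        yield from ({c: r[c] for c in node.columns} for r in interpret(node.child))

plan = Project(Select(Scan([{"id": 1, "v": 10}, {"id": 2, "v": 3}]),
                      lambda r: r["v"] > 5), ["id"])
print(list(interpret(plan)))  # [{'id': 1}]
```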
35
Grafický podsystém v prostředí internetového prohlížeče / Graphs Subsystem in Internet Browser Environment. Vlach, Petr. January 2008.
This master's thesis is divided into two parts. Part one compares existing commercial and non-commercial systems for OLAP presentation using a contingency table or a graph, with the main focus on graphs. The findings from part one are then used to implement a graphics subsystem within the internet browser environment. A user-friendly interface and a well-arranged display of the data are the most important goals.
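For reference, the contingency table such front-ends present is a pivot of a measure across two dimensions; a minimal sketch with pandas and invented sample data:

```python
import pandas as pd

# Toy OLAP-style contingency table (pivot): aggregate a measure across
# two dimensions. Data and column names are invented for illustration.
sales = pd.DataFrame({
    "region":  ["EU", "EU", "US", "US", "EU"],
    "product": ["A",  "B",  "A",  "B",  "A"],
    "amount":  [100,  80,   120,  60,   40],
})

table = sales.pivot_table(index="region", columns="product",
                          values="amount", aggfunc="sum", margins=True)
print(table)
```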
36
Geovisualização analítica: desenvolvimento de um protótipo de um sistema analítico de informações para a gestão da coleta seletiva de resíduos urbanos recicláveis. / Analytical geovisualization: development of a prototype of an analytical information system for the management of the selective collection of recycled urban solid waste. Hernández Simões, Carlos Enrique. 16 April 2010.
Urban waste disposed of irregularly constitutes a serious problem, especially in large cities: it clogs storm drains and drainage systems with consequent flooding, creates dirt, transmits diseases such as leptospirosis and dengue, hinders traffic, and entails costs for the city government. On the other hand, recycling this material is a source of revenue and a generator of jobs. Analytical geovisualization can be of great help in analysing this complex problem and in decision-making. With this motivation, this dissertation seeks to provide an overview of the state of the art in concepts and research in geovisualization (GVis) and analytical processing (OLAP and SOLAP). It also presents the processes, methodologies, and technologies used to develop a prototype of an analytical information system applied to the management of the selective collection of recyclable solid waste in an urban environment. This prototype, effectively deployed, combines navigation and querying to retrieve information through spatial and alphanumeric selections. Examples show that this type of solution helps managers in exploration, analysis, and decision-making.
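A combined spatial-plus-alphanumeric selection of the kind the prototype supports can be sketched as follows; the example uses the shapely library with invented collection-point records and is not the prototype's actual code.

```python
from shapely.geometry import Point, Polygon

# Toy SOLAP-style selection: filter collection points by a spatial window
# (a polygon) AND alphanumeric predicates (material and monthly volume).
# All records and field names are invented for illustration.

district = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])

collection_points = [
    {"id": 1, "loc": Point(2, 3),  "material": "paper", "tons_month": 4.2},
    {"id": 2, "loc": Point(12, 5), "material": "glass", "tons_month": 1.1},
    {"id": 3, "loc": Point(7, 8),  "material": "paper", "tons_month": 0.8},
]

selected = [
    p for p in collection_points
    if district.contains(p["loc"])    # spatial selection
    and p["material"] == "paper"      # alphanumeric selection
    and p["tons_month"] >= 1.0
]
print([p["id"] for p in selected])  # [1]
```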
37
Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates. Idris, Muhammad. 05 March 2019.
Responsive analytics are rapidly taking over the traditional data analytics dominated by post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system, to react to updates occurring at high speed and to detect patterns, trends, and anomalies. Such solutions find applications in financial systems, industrial control systems, business intelligence, and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language, where the main task is then to maintain query results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation of queries, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g. timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just as in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, while systems that materialize (sub)results incur a high memory footprint. We are interested in investigating the possibility of building a model that addresses this trade-off. In particular, we overcome it by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, together with a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates.

We call this representation the Dynamic Constant Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with these properties for the class of acyclic conjunctive queries featuring equi-joins with projections, and present a dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. We then generalize DYN to the class of acyclic queries featuring multi-way theta-joins with projections and call the result Generalized DYN (GDYN). The DCLRs for acyclic conjunctive queries, and the workings of DYN and GDYN over them, are based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the properties of DCLRs described above. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. We extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections for acyclicity, and further extend it to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN for the particular case where all theta-joins involve only equalities and inequalities and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language, providing all the necessary data structures together with their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks, and show that they outperform the existing systems by over an order of magnitude in both memory footprint and update processing time. / Doctorat en Sciences de l'ingénieur et technologie.
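The space-time trade-off targeted here can be illustrated with a toy incremental equi-join: per-relation hash indexes give space linear in the database, updates are cheap, and join results are enumerated on demand rather than materialized. The sketch below is an invented simplification of that idea, not the DYN or GDYN algorithm itself.

```python
from collections import defaultdict

# Toy dynamic evaluation of R(a, b) JOIN S(b, c): maintain hash indexes
# on the join key (space linear in the database) and enumerate results
# on demand instead of materializing the join. Invented simplification;
# this is not the DYN/GDYN algorithm from the thesis.

R = defaultdict(list)  # b -> list of a values
S = defaultdict(list)  # b -> list of c values

def insert_r(a, b):
    R[b].append(a)  # O(1) update; no join results are recomputed

def insert_s(b, c):
    S[b].append(c)

def enumerate_join():
    # Results are produced one by one from the indexes; nothing is stored.
    for b in R.keys() & S.keys():
        for a in R[b]:
            for c in S[b]:
                yield (a, b, c)

insert_r(1, "x"); insert_r(2, "x"); insert_s("x", 9); insert_s("y", 7)
print(list(enumerate_join()))  # [(1, 'x', 9), (2, 'x', 9)]
```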
38
應用商業智慧之研究:安裝企業管理儀表板之個案分析 / The critical success factors of instalment of a Business Intelligence system--A case study of the Optronics Device Industry. 陳仁嘉 (Chen, akagrant). Unknown Date.
Business Intelligence has become an important tool on the road to business success in recent years; in fiercely competitive markets, adopting Business Intelligence for operations yields excellent management results. Besides gaining competitive advantages such as increased production capacity, on-time delivery rate, service quality, and customer satisfaction, enterprises can also reduce defect rates, days of accounts receivable outstanding, inventory turnover days, rework rates, scrap rates, return rates, customer complaints, costs, and so on.

This thesis presents a successful case of applying a Business Intelligence Enterprise Management Dashboard (EMD), with features such as all-round management in a single view, multi-dimensional interactivity, rapid correlation analysis, cross analysis (slice and dice), drill-through, drill-down and drill-up, and an On-Line Analytical Processing (OLAP) engine. It provides managers at all levels with real-time information as a management tool, automatically converted into visualized form (Visualizer Web) in real time, helping the enterprise quickly see through its overall operating situation and then take the best course of action to rapidly outpace competitors, creating good operating performance that brings the enterprise considerable revenue and profit.
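The drill-down and roll-up operations such a dashboard offers amount to regrouping the same facts at finer or coarser dimension levels; a minimal pandas sketch with invented data:

```python
import pandas as pd

# Toy drill-down / roll-up: the same fact data aggregated at a coarser
# (year) and a finer (year, quarter) dimension level. Data is invented.
facts = pd.DataFrame({
    "year":    [2010, 2010, 2010, 2011],
    "quarter": ["Q1", "Q1", "Q2", "Q1"],
    "revenue": [100,  40,   70,   90],
})

roll_up    = facts.groupby("year")["revenue"].sum()               # coarser level
drill_down = facts.groupby(["year", "quarter"])["revenue"].sum()  # finer level
print(roll_up, drill_down, sep="\n\n")
```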
39
A data management and analytic model for business intelligence applications. Banda, Misheck. 05 1900.
Most organisations use several data management and business intelligence solutions, on-premise and/or cloud-based, to manage and analyse their constantly growing business data. Challenges faced by organisations nowadays include, but are not limited to, growth limitations, big data, and inadequate analytics, computing, and data storage capabilities. Although these organisations are able to generate reports and dashboards for decision-making in most cases, effective use of their business data and an appropriate business intelligence solution could achieve and sustain informed decision-making and allow competitive reaction to the dynamic external environment. A data management and analytic model is proposed on which organisations could rely for decisive guidance when planning to procure and implement a unified business intelligence solution. To arrive at a sound model, the literature was reviewed by studying business intelligence in general and by exploring and developing various deployment models and architectures (naïve, on-premise, and cloud-based), which revealed their benefits and challenges. The outcome of the literature review was the development of a hybrid business intelligence model and the accompanying architecture, the main contribution of the study. In order to assess the state of business intelligence utilisation, and to validate and improve the proposed architecture, two case studies targeting users and experts were conducted using quantitative and qualitative approaches. The case studies established that a decision to procure and implement a successful business intelligence solution is based on a number of crucial elements, such as applications, devices, tools, business intelligence services, data management, and infrastructure. The findings further recognised the proposed hybrid architecture as the solution for managing complex organisations with serious data challenges. / M. Sc. (Computing).
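One way to picture the hybrid model's core idea, a single access layer over on-premise and cloud-based stores, is a small query-routing facade. The sketch below is entirely invented for illustration and is not the architecture proposed in the dissertation.

```python
# Toy hybrid BI access layer: route analytical requests to an on-premise
# or a cloud-backed store behind one interface. Entirely invented as an
# illustration of the hybrid idea; not the dissertation's architecture.

class OnPremiseStore:
    def query(self, subject):
        return f"on-premise result for {subject}"

class CloudStore:
    def query(self, subject):
        return f"cloud result for {subject}"

class HybridBIFacade:
    def __init__(self):
        self.backends = {"finance": OnPremiseStore(),  # sensitive data stays local
                         "weblogs": CloudStore()}      # elastic workloads go to cloud

    def query(self, subject):
        backend = self.backends.get(subject)
        if backend is None:
            raise KeyError(f"no backend registered for {subject!r}")
        return backend.query(subject)

bi = HybridBIFacade()
print(bi.query("finance"))
print(bi.query("weblogs"))
```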
40
Istar : um esquema estrela otimizado para Image Data Warehouses baseado em similaridade [iStar: an optimized star schema for image data warehouses based on similarity]. Anibal, Luana Peixoto. 26 August 2011.
A data warehousing environment supports the decision-making process through the investigation and analysis of data in an organized and agile way. However, current data warehousing technologies do not allow the decision-making process to be carried out based on images' pictorial (intrinsic) features. Such analysis cannot be performed by a conventional data warehousing environment because it requires the management of data related to the intrinsic features of the images in order to perform similarity comparisons. In this work, we propose a new data warehousing environment called iCube to enable the processing of OLAP perceptual similarity queries over images, based on their pictorial (intrinsic) features. Our approach extends the three main phases of the traditional data warehousing process to allow the use of images as data. For the data integration phase, or ETL phase, we propose a process to represent each image by its intrinsic content (such as numerical descriptors of color or texture) and to integrate this data with conventional data in the data warehouse (DW). For the dimensional modeling phase, we propose a star schema, called iStar, that stores both the intrinsic and the conventional image data; moreover, at this stage, our approach models the schema to represent and support the use of different user-defined perceptual layers. For the data analysis phase, we propose an environment in which the OLAP engine uses image similarity as a query predicate, employing a filter mechanism to speed up query execution. iStar was validated through performance tests evaluating both the building cost and the cost of processing IOLAP (Image OLAP) queries. The results showed that our approach provides an impressive performance improvement in IOLAP query processing: the performance gain of iCube over the best related work (SingleOnion) was up to 98.21%.
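The filter-and-refine strategy behind such similarity predicates can be sketched briefly: a cheap lower bound on the distance prunes most candidates before exact distances are computed for the survivors. The feature vectors, dimensionality, and threshold below are invented; this is an illustration of the general technique, not iCube's mechanism.

```python
import numpy as np

# Toy filter-and-refine similarity selection over image feature vectors
# (e.g. color histograms). A cheap lower bound on the Euclidean distance
# prunes candidates before the exact distance is computed. Vectors and
# the threshold are invented; this is not iCube's code.

rng = np.random.default_rng(42)
features = rng.random((1000, 32))   # 1000 images, 32-dim descriptors
query = rng.random(32)
epsilon = 1.0                       # similarity range threshold
d = features.shape[1]

# Filter step: sqrt(d) * |mean(x) - mean(q)| <= ||x - q||  (Cauchy-Schwarz),
# so anything whose lower bound exceeds epsilon can be discarded cheaply.
lower_bounds = np.sqrt(d) * np.abs(features.mean(axis=1) - query.mean())
candidates = np.flatnonzero(lower_bounds <= epsilon)

# Refinement step: exact distances only for the surviving candidates.
exact = np.linalg.norm(features[candidates] - query, axis=1)
result = candidates[exact <= epsilon]
print(f"{len(candidates)} candidates after filtering, {len(result)} matches")
```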