11

Advanced Analytics in Retail Banking in the Czech Republic / Prediktívna analytika v retailovom bankovníctve v Českej republike

Búza, Ján January 2014 (has links)
Advanced analytics and big data allow a more complete picture of customers' preferences and demands. Through this deeper understanding, organizations of all types are finding new ways to engage with existing and potential customers. Research shows that companies using big data and advanced analytics in their operations have productivity and profitability rates 5 to 6 percent higher than their peers. At the same time, it is almost impossible to find a banking institution in the Czech Republic that exploits the potential of data analytics to its full extent. This thesis therefore focuses on exploring opportunities for banks that are applicable in the local context, taking into account technological and financial limitations as well as the market situation. The author will conduct interviews with bank managers and management consultants familiar with the topic in order to evaluate theoretical concepts and best practices from around the world from the perspective of the Czech market environment, to assess the capability of local banks to exploit them, and to identify the main obstacles that stand in the way. Based on these findings, a general framework will be proposed for bank managers who would like to use advanced analytics.
12

Det binära guldet : en uppsats om big data och analytics

Hellström, Elin, Hemlin, My January 2013 (has links)
The purpose of this study is to investigate the concepts of big data and analytics. The concepts are explored on the basis of scientific theories and interviews with consulting firms. A healthcare organization was also interviewed to gain a richer understanding of how big data and analytics can be used to generate insights and how an organization can benefit from them. A number of important difficulties and success factors connected to the concepts are presented, and each difficulty is then linked to a success factor considered able to address it. The most relevant success factors identified are the availability of high-quality data together with the knowledge and expertise to handle that data. Finally, the concepts are clarified: big data is usually described along the dimensions of volume, variety and velocity, while analytics in most cases refers to descriptive and preventive analysis.
13

Fostering the effectiveness of reportable arrangements provisions by enhancing digitalisation at the South African Revenue Service

Heydenrych, Christine January 2020 (has links)
Maladministration at the South African Revenue Service (SARS) resulted in a loss of public trust, had negative implications for voluntary tax compliance, and may encourage taxpayers to participate in aggressive tax planning schemes. It also led to the degeneration of SARS systems while technology advanced internationally. Digitalisation at SARS is crucial for addressing aggressive tax planning, which has become more sophisticated as a result of the mobility of the digital economy. This study used a qualitative, exploratory research methodology involving literature reviews of textbooks and articles in order to recommend how SARS can adopt digitalisation, with a specific focus on ensuring the effectiveness of the South African reportable arrangements legislation. The operation of this legislation was explained in order to benchmark it against the design features and best practices recommended by the OECD in Action 12 of the BEPS project, and to highlight how digitalisation can enhance these provisions. The recommendations consider the current state of digitalisation at SARS, how other countries' tax administrations have become more digitalised, and practical concerns to be borne in mind when choosing appropriate technology. The study found that a handful of recommendations remain for how South Africa could improve the reportable arrangements legislation without unnecessarily increasing the compliance burden. Digitalisation techniques that could be considered are advanced analytics, artificial intelligence, blockchain technology and Application Programming Interfaces (APIs). The study proposed, amongst others, that SARS could adopt these to gather information from various sources in real time in order to identify further characteristics of aggressive tax planning, perform completeness checks on reported transactions, and redeploy resources to investigate pre-identified potentially reportable transactions. / Mini Dissertation (MPhil (International Taxation))--University of Pretoria, 2020.
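The completeness check proposed in this abstract can be sketched in a few lines. The study prescribes no implementation, so the following Python example is purely illustrative: it cross-references arrangements reported under the legislation against transaction data obtained from a third-party feed (e.g., via an API) and flags unreported candidates. The field names, threshold, and matching rule are all assumptions, not SARS specifications.

```python
# Hypothetical sketch of a completeness check on reported arrangements.
# Field names, the threshold and the matching rule are illustrative
# assumptions, not SARS rules or actual reportable-arrangement hallmarks.

REPORTABLE_THRESHOLD = 5_000_000  # assumed rand threshold, for illustration

def completeness_check(reported_ids, third_party_transactions):
    """Flag transactions that look reportable but were never reported."""
    flagged = []
    for tx in third_party_transactions:
        looks_reportable = (
            tx["amount"] >= REPORTABLE_THRESHOLD
            and tx["cross_border"]  # assumed indicator of one hallmark
        )
        if looks_reportable and tx["id"] not in reported_ids:
            flagged.append(tx["id"])
    return flagged

if __name__ == "__main__":
    reported = {"TX-001"}  # IDs already disclosed to the revenue authority
    feed = [
        {"id": "TX-001", "amount": 8_000_000, "cross_border": True},
        {"id": "TX-002", "amount": 9_500_000, "cross_border": True},
        {"id": "TX-003", "amount": 100_000, "cross_border": False},
    ]
    print(completeness_check(reported, feed))  # -> ['TX-002']
```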
14

KPI Intelligence

Burdensky, Daniel 02 January 2023 (has links)
To steer large production companies, management must evaluate alternatives, reach well-founded decisions, and take measures for process control. However, the complexity of the cause-and-effect relationships between the key performance indicators (KPIs) used to control interdependent business processes makes it nearly impossible for humans to estimate the impact of such measures without assistance. This dissertation therefore provides approaches for advancing classical KPI-based corporate management by incorporating advanced analytics algorithms for the analysis of process interdependencies. In doing so, it addresses the identified gap of insufficient integration of advanced analytics into practical process control. The method developed comprises a procedure and process models for the quantitative analysis of these interdependencies. It also offers proposals for the complementary and comparative use of advanced analysis algorithms. Guiding characteristics that represent the needs of users serve to assess the suitability of each solution. Finally, the analysis results are examined for their contribution to the requirements for KPI-based corporate management developed in the dissertation. The method is developed and evaluated in a case study comprising several heterogeneous use cases at an automotive OEM. The research results help to bring together the multitude of digital data-visualization solutions with the growing range of advanced analysis capabilities that support KPI-based management processes. Following the basic idea of cybernetics, the method enables domain experts to independently and quantitatively analyze cause-and-effect relationships between KPIs within individual processes as well as across processes and hierarchy levels. Interpreting the results then complements their implicit knowledge and consequently leads to more effective and efficient process control.
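The abstract does not reproduce the dissertation's algorithms. As a minimal sketch of what a quantitative analysis of KPI interdependencies can look like, the following Python example estimates lagged correlations between two KPI series, one simple way to surface cause-and-effect candidates. The KPI names, the data, and the lag window are assumptions for illustration.

```python
# Minimal sketch: lagged correlation between two KPI time series, one
# simple way to quantify cause-effect candidates between KPIs.
# KPI names, data and the lag window are illustrative assumptions.
import statistics

def lagged_correlation(driver, outcome, lag):
    """Pearson correlation between driver[t] and outcome[t + lag]."""
    x = driver[: len(driver) - lag] if lag else driver
    y = outcome[lag:]
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

if __name__ == "__main__":
    machine_downtime = [5, 9, 4, 12, 3, 8, 10, 2]  # hypothetical KPI
    delivery_delay   = [1, 2, 4, 2, 6, 1, 3, 5]    # hypothetical KPI
    for lag in range(3):
        r = lagged_correlation(machine_downtime, delivery_delay, lag)
        print(f"lag {lag}: r = {r:+.2f}")
```

A high correlation at a positive lag would mark the pair for closer inspection by a domain expert; correlation alone does not establish causality, which is why the dissertation couples such analyses with users' implicit process knowledge.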
15

Vysoce výkonné analýzy / High Performance Analytics

Kalický, Andrej January 2013 (has links)
This thesis explains the Big Data phenomenon, which is characterised by the rapid growth of the volume, variety and velocity of data and information assets, and which drives a paradigm shift in analytical data processing. The thesis aims to provide a complete and consistent overview of the area of High Performance Analytics (HPA), including the problems and challenges at the pioneering state of the art of advanced analytics. The overview of HPA introduces a classification of the specific HPA methods, together with their characteristics and advantages, each utilising a different combination of system resources. In the practical part of the thesis, an experimental assignment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of In-Memory Analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
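The SAS platform evaluated in the thesis is proprietary, but the core idea of In-Memory Analytics can be sketched in plain Python: load the dataset into memory once and run every analytical operation against the in-memory copy, instead of re-scanning disk per operation. The file name, CSV layout, and toy aggregations below are assumptions for illustration.

```python
# Minimal sketch of the In-Memory Analytics idea: load once, analyse many
# times in RAM, instead of re-reading from disk for every operation.
# The CSV layout and the toy aggregations are illustrative assumptions.
import csv
import time

def read_values(path):
    with open(path, newline="") as f:
        return [float(row["value"]) for row in csv.DictReader(f)]

def run_disk_based(path, operations):
    # Naive approach: one full scan of the file per analytical operation.
    return [op(read_values(path)) for op in operations]

def run_in_memory(path, operations):
    data = read_values(path)  # single load into memory
    return [op(data) for op in operations]

if __name__ == "__main__":
    path = "measurements.csv"  # hypothetical input file
    with open(path, "w", newline="") as f:
        f.write("value\n")
        f.writelines(f"{i % 97}\n" for i in range(200_000))

    ops = [sum, min, max, lambda d: sum(d) / len(d)]
    for runner in (run_disk_based, run_in_memory):
        t0 = time.perf_counter()
        runner(path, ops)
        print(f"{runner.__name__}: {time.perf_counter() - t0:.3f}s")
```

The gap widens with the number of operations and the size of the dataset, which is the effect the thesis measures across its analytical scenarios.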
16

Revealing the Non-technical Side of Big Data Analytics : Evidence from Born analyticals and Big intelligent firms

Denadija, Feda, Löfgren, David January 2016 (has links)
This study aspired to gain a more nuanced understanding of emerging analytics technologies and the vital capabilities that ultimately drive evidence-based decision making. Big data technology is widely discussed by varying groups in society and believed to revolutionize corporate decision making. In spite of big data's promising possibilities, only a trivial fraction of firms deploying big data analytics (BDA) have gained significant benefits from their initiatives. To explain this inability, we drew on prior IT literature suggesting that IT resources can only be successfully deployed when combined with organizational capabilities. We identified key theoretical components at the organizational, relational, and human levels. The data collection included 20 interviews with decision makers and data scientists from four analytical leaders. Early on, we divided the companies into two categories based on their empirical characteristics, coining the terms "Born analyticals" and "Big intelligent firms". The analysis concluded that social, non-technical elements play a crucial role in building BDA abilities. These capabilities differ among companies but can still enable BDA in different ways, indicating that an organization's history and context influence how firms deploy capabilities. Some capabilities proved more important than others: the individual mindset towards data is seemingly the most decisive capability in building BDA ability. Varying mindsets foster different BDA environments in which other capabilities behave accordingly. Born analyticals seemed to display an environment benefiting evidence-based decisions.
17

Forecasting in Database Systems

Fischer, Ulrike 07 February 2014 (has links) (PDF)
Time series forecasting is a fundamental prerequisite for decision-making processes and crucial in a number of domains such as production planning and energy load balancing. In the past, forecasting was often performed by statistical experts in dedicated software environments outside of current database systems. However, forecasts are increasingly required by non-expert users or have to be computed fully automatically without any human intervention. Furthermore, we can observe an ever increasing data volume and the need for accurate and timely forecasts over large multi-dimensional data sets. As most data subject to analysis is stored in database management systems, a rising trend addresses the integration of forecasting inside a DBMS. Yet, many existing approaches follow a black-box style and try to keep changes to the database system as minimal as possible. While such approaches are more general and easier to realize, they miss significant opportunities for improved performance and usability. In this thesis, we introduce a novel approach that seamlessly integrates time series forecasting into a traditional database management system. In contrast to flash-back queries that allow a view on the data in the past, we have developed a Flash-Forward Database System (F2DB) that provides a view on the data in the future. It supports a new query type - a forecast query - that enables forecasting of time series data and is automatically and transparently processed by the core engine of an existing DBMS. We discuss necessary extensions to the parser, optimizer, and executor of a traditional DBMS. We furthermore introduce various optimization techniques for three different types of forecast queries: ad-hoc queries, recurring queries, and continuous queries. First, we ease the expensive model creation step of ad-hoc forecast queries by reducing the amount of processed data with traditional sampling techniques. Second, we decrease the runtime of recurring forecast queries by materializing models in a specialized index structure. However, a large number of time series as well as high model creation and maintenance costs require a careful selection of such models. Therefore, we propose a model configuration advisor that determines a set of forecast models for a given query workload and multi-dimensional data set. Finally, we extend forecast queries with continuous aspects allowing an application to register a query once at our system. As new time series values arrive, we send notifications to the application based on predefined time and accuracy constraints. All of our optimization approaches intend to increase the efficiency of forecast queries while ensuring high forecast accuracy.
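The abstract does not reproduce F2DB's SQL dialect, so the sketch below only illustrates, in Python, the processing behind an ad-hoc forecast query: a model is fit over a stored time series and then queried for values "in the future". The model choice (simple exponential smoothing, chosen for brevity), the smoothing constant, and the function names are assumptions; F2DB itself supports richer models plus the optimizations described above.

```python
# Minimal sketch of what an ad-hoc forecast query computes: fit a model
# over a stored time series, then return future values.
# Model choice (simple exponential smoothing) and alpha are assumptions;
# F2DB supports richer models, model reuse and query optimization.

def fit_ses(series, alpha=0.5):
    """Fit simple exponential smoothing; return the final level."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def forecast_query(series, horizon, alpha=0.5):
    """Conceptual 'forecast the next `horizon` values' query."""
    level = fit_ses(series, alpha)
    # Simple exponential smoothing yields a flat forecast: the last
    # smoothed level repeated for every future step.
    return [level] * horizon

if __name__ == "__main__":
    energy_load = [10.0, 12.0, 11.5, 13.0, 12.5, 14.0]  # hypothetical data
    print(forecast_query(energy_load, horizon=3))
```

In the recurring-query case described above, the expensive `fit_ses` step would be skipped by materializing the fitted model in an index structure and only maintaining it as new values arrive.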
