331 |
A Cloud Based Platform for Big Data Science. Islam, Md. Zahidul. January 2014.
With the advent of cloud computing, resizable, scalable infrastructures for data processing are now available to everyone. Software platforms and frameworks that support data-intensive distributed applications, such as Amazon Web Services and Apache Hadoop, give users the necessary tools and infrastructure to work with thousands of scalable computers and process terabytes of data. However, writing scalable applications that run on top of these distributed frameworks is still a demanding and challenging task. This thesis aimed to advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large data sets, collectively known as "big data". The term "big data" in this thesis refers to large, diverse, complex, longitudinal and/or distributed data sets generated from instruments, sensors, internet transactions, email, social networks, Twitter streams, and/or all digital sources available today and in the future. We introduced architectures and concepts for implementing a cloud-based infrastructure for analyzing large volumes of semi-structured and unstructured data. We built and evaluated an application prototype for collecting, organizing, processing, visualizing and analyzing data from the retail industry, gathered from indoor navigation systems and social networks (Twitter, Facebook, etc.). Our finding was that developing a large-scale data analysis platform is often quite complex when the processed data is expected to grow continuously. The architecture varies depending on the requirements. For building a data warehouse and analyzing the data afterwards (batch processing), the best choices are Hadoop clusters with Pig or Hive; this architecture has been proven at Facebook and Yahoo for years. If, on the other hand, the application involves real-time data analytics, the recommendation is Hadoop clusters with Storm, which has been used successfully at Twitter. After evaluating the developed prototype, we introduced a new architecture able to handle large-scale batch and real-time data, and proposed an upgrade of the existing prototype to handle real-time indoor navigation data.
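The batch-versus-stream distinction at the heart of this architecture choice can be illustrated in a few lines of plain Python. The sketch below is only a conceptual illustration under assumed record shapes (a hypothetical list of retail sale amounts); it stands in for what Pig/Hive (batch) and Storm (real-time) would do in a distributed, fault-tolerant way at cluster scale.

    # Conceptual sketch: batch vs. real-time analytics on retail sale amounts.
    # Hypothetical data; Pig/Hive and Storm do this distributed at scale.

    def batch_average(records):
        """Batch style (Pig/Hive): all data is collected first,
        then processed in one pass over the warehouse."""
        return sum(records) / len(records)

    class StreamingAverage:
        """Real-time style (Storm): each event updates the result
        incrementally; the data is treated as unbounded."""
        def __init__(self):
            self.count = 0
            self.total = 0.0

        def update(self, amount):
            self.count += 1
            self.total += amount
            return self.total / self.count  # current answer at any moment

    sales = [12.5, 3.0, 7.25, 40.0]       # assumed sample records
    print(batch_average(sales))           # one answer, after all data is in

    live = StreamingAverage()
    for amount in sales:                  # events arriving one by one
        print(live.update(amount))        # an updated answer after every event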
|
332 |
Digital Transformation - The Role of Big Data, Internet of Things and Cloud Computing. Eide, Linn; Paredes Degollar, Daniel. January 2016.
Changes in the business world and on the market mean that companies today are constantly faced with new challenges. We live in a digitized society that is constantly changing, and companies must adapt to it. They face the challenge of digitizing their operations to respond to a changing market and increasingly digitized customers. The expansion of communication networks and infrastructure has enabled customers to operate in a more connected environment than before. The need to stay in the game among competitive organizations is one of the reasons companies come into contact with IT trends in digitalization such as Big Data, Internet of Things and Cloud Computing. The business model is a key aspect to take into account and adjust during digitalization. The study therefore aims to examine the role of these three IT trends in a business and how the business model is actually affected. The study's findings show that the use of the three IT trends is low today, and that the impact on the business model has so far been small, even though there is awareness of how it should be affected. The IT trends are, however, constantly on the agenda and will play a large role in businesses going forward, and their impact on the business model will grow.
|
333 |
Big Data: A Study of Its Business Value, Its Challenges and Opportunities, with a Focus on the Retail Industry. Bergdahl, Jacob; Sinabian, Armine. January 2015.
Enormous amounts of data are created and stored today, while only a small share of that data is analyzed and used. Big data is a term that has circulated for years, but in recent years its significance has grown. More and more enterprises are opening their eyes to big data, while few understand how to actually use it. Some even ask themselves: is there a business benefit at all? With a focus on the retail industry, we examine whether there is a business benefit in using big data, and above all which challenges and opportunities are connected to it. Terms such as business intelligence and analytics are discussed in relation to big data. In this qualitative study, three experts from different enterprises (IBM, Knowit and Exor) were interviewed. The results from the interviews have been connected to theory drawn from the literature on the subject and compared in the analysis. All identified challenges and opportunities have been listed. Among the conclusions, the ethical and human factors are seen to be of major importance, and the business benefit can depend on an enterprise's size and market. The essay is written in Swedish.
|
334 |
The Connected Society: A Contribution to the Sociological Analysis of the Links between Technology and Society through the Example of Digital Media Tools. Huguet, Thibault. 20 February 2017.
Under way for several decades, the development of digital technology has left a deep imprint on the minds and bodies of our contemporary societies. More than a simple social phenomenon, it seems generally agreed that we are witnessing today a true "anthropological mutation". Nevertheless, while analyses of the links between technology and society have long been characterized by deterministic perspectives, this thesis explores the close dynamic relations that make a technology eminently social, and a society intrinsically technical. Adopting a resolutely comprehensive approach, this research seeks to highlight the significations and systems of meaning that surround the use of digital media tools, at a macro-social and a micro-social scale, in order to explain causally the importance we ascribe to this specific category of objects. The dynamics at work, at both the individual and the collective level, are examined in a socio-logical way, alternately from a historical, philosophical, economic, political, social, and cultural point of view. As artifact-symbols of our present-day societies (total social objects), digital media are the technical tools with which we organize the contemporaneity of our relationship with the world: we therefore regard them as a sociological prism through which it is possible to grasp the connected society.
|
335 |
Computer Vision for Continuous Plankton Monitoring. Matuszewski, Damian Janusz. 04 April 2014.
Plankton microorganisms constitute the base of the marine food web and play a major role in the drawdown of atmospheric carbon dioxide. Moreover, being very sensitive to environmental changes, they make it possible to notice (and potentially counteract) such changes faster than by any other means. As such, they not only influence the fishery industry but are also frequently used to analyze changes in exploited coastal areas and the influence of these interferences on the local environment and climate. Consequently, there is a strong need for highly efficient systems allowing long-term, large-volume observation of plankton communities. This would provide a better understanding of plankton's role in the global climate and help maintain the fragile environmental equilibrium. The adopted sensors typically produce huge amounts of data that must be processed efficiently, without intensive manual work by specialists. A new system for general-purpose particle analysis in large volumes is presented. It has been designed and optimized for the continuous plankton monitoring problem; however, it can easily be applied as a versatile moving-fluid analysis tool, or in any other application in which the targets to be detected and identified move in a unidirectional flux. The proposed system is composed of three stages: data acquisition, target detection, and target identification. Dedicated optical hardware is used to record images of small particles immersed in the water flux. Target detection is performed using a Visual Rhythm-based method, which greatly accelerates processing and allows higher volume throughput. The proposed method detects, counts and measures organisms present in the water flux passing in front of the camera. Moreover, the developed software saves cropped plankton images, which not only greatly reduces the required storage space but also constitutes the input for automatic identification. To ensure maximal performance (up to 720 MB/s), the algorithm was implemented using CUDA for GPGPU. The method was tested on a large dataset and compared with an alternative frame-by-frame approach. The obtained plankton images were used to build a classifier that automatically identifies organisms in plankton analysis experiments. For this purpose, dedicated feature-extraction software was developed. Various subsets of the 55 shape characteristics were tested with different off-the-shelf learning models. The best accuracy, approximately 92%, was obtained with Support Vector Machines, a result comparable to the average performance of experts identifying organisms manually. This work was developed under the joint supervision of Professor Rubens Lopes (IO-USP).
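The identification stage described above (55 shape features, an SVM, roughly 92% accuracy) can be sketched with scikit-learn. This is a minimal sketch under stated assumptions: the thesis's own feature-extraction software and data are not available here, so random placeholder arrays stand in for the 55 shape characteristics and species labels, and the SVM settings shown are illustrative rather than the ones actually used in the thesis.

    # Sketch of the plankton identification stage: an SVM trained on
    # per-organism shape features. Placeholder data; not the thesis pipeline.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_organisms, n_features, n_classes = 500, 55, 5   # 55 shape characteristics
    X = rng.normal(size=(n_organisms, n_features))    # stand-in feature vectors
    y = rng.integers(0, n_classes, size=n_organisms)  # stand-in species labels

    # Scaling + RBF-kernel SVM is a common off-the-shelf choice for such features.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, cv=5)       # cross-validated accuracy
    print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")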
|
336 |
Targeted Learning in Big Data: Bridging Data-Adaptive Estimation and Statistical Inference. Zheng, Wenjing. 21 July 2016.
This dissertation focuses on developing robust semiparametric methods for complex parameters that emerge at the interface of causal inference and biostatistics, with applications to epidemiological and medical research in the era of Big Data. Specifically, we address two statistical challenges that arise in bridging the disconnect between data-adaptive estimation and statistical inference. The first challenge arises in maximizing the information learned from Randomized Control Trials (RCTs) through the use of adaptive trial designs. We present a framework to construct and analyze group-sequential, covariate-adjusted, response-adaptive (CARA) RCTs that admits the use of data-adaptive approaches in constructing the randomization schemes and in estimating the conditional response model. This framework adds to the existing literature on CARA RCTs by allowing flexible options in both their design and analysis and by providing robust effect estimates even under model mis-specification. The second challenge arises in obtaining a Central Limit Theorem when data-adaptive estimation is used to estimate the nuisance parameters. We consider as the target parameter of interest the marginal risk difference of the outcome under a binary treatment, and propose a Cross-validated Targeted Minimum Loss Estimator (TMLE), which augments the classical TMLE with a sample-splitting procedure. The proposed Cross-Validated TMLE (CV-TMLE) inherits the double robustness and efficiency properties of the classical TMLE, and achieves asymptotic linearity under minimal conditions by avoiding the Donsker class condition.
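To make the target parameter concrete: for an outcome Y, binary treatment A and covariates W, the marginal risk difference and the doubly robust (AIPW-type) estimator that TMLE targets can be written as below. This is the standard textbook form, added only for orientation; the abstract itself does not spell it out, and the thesis's actual estimator includes the TMLE targeting and cross-validation steps.

    % Marginal risk difference under a binary treatment
    \psi_0 = E_W\big[\, E(Y \mid A=1, W) - E(Y \mid A=0, W) \,\big]

    % Doubly robust estimator built from the outcome regression \bar{Q}
    % and the propensity score g; TMLE solves the corresponding
    % estimating equation (here (2A_i - 1) selects the sign +1/-1).
    \hat{\psi} = \frac{1}{n} \sum_{i=1}^{n}
      \Big[ \bar{Q}(1, W_i) - \bar{Q}(0, W_i)
          + \frac{2A_i - 1}{g(A_i \mid W_i)} \big( Y_i - \bar{Q}(A_i, W_i) \big) \Big]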
|
337 |
Fast and Slow Machine Learning. Montiel López, Jacob. 07 March 2019.
The Big Data era has revolutionized the way in which data is created and processed. In this context, multiple challenges arise given the massive amount of data that needs to be efficiently handled and processed in order to extract knowledge. This thesis explores the symbiosis of batch and stream learning, which are traditionally considered in the literature as antagonists. We focus on the problem of classification from evolving data streams. Batch learning is a well-established approach in machine learning based on a finite sequence: first data is collected, then predictive models are created, then the model is applied. On the other hand, stream learning considers data as infinite, rendering learning a continuous (never-ending) task. Furthermore, data streams can evolve over time, meaning that the relationship between features and the corresponding response (the class, in classification) can change. We propose a systematic framework to predict over-indebtedness, a real-world problem with significant implications in modern society. The two versions of the early warning mechanism (batch and stream) outperform the baseline performance of the solution implemented by Groupe BPCE, the second largest banking institution in France. Additionally, we introduce a scalable model-based imputation method for missing data in classification, which casts the imputation problem as a set of classification/regression tasks solved incrementally. We present a unified framework that serves as a common learning platform where batch and stream methods can positively interact, and show that batch methods can be efficiently trained in the stream setting under specific conditions. We also propose an adaptation of the Extreme Gradient Boosting (XGBoost) algorithm for evolving data streams; the proposed adaptive method generates and updates the ensemble incrementally using mini-batches of data. Finally, we introduce scikit-multiflow, an open-source framework that fills the gap in Python for a development/research platform for learning from evolving data streams.
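The stream-learning setting the abstract describes (test-then-train over an unbounded sequence) looks roughly like the sketch below, written with scikit-multiflow, the framework introduced in the thesis. Treat it as a sketch: the class names SEAGenerator and HoeffdingTreeClassifier match recent scikit-multiflow releases to the best of our knowledge, but may differ across versions, and the stream and model choices are illustrative only.

    # Prequential (test-then-train) evaluation sketch with scikit-multiflow.
    # Each incoming sample is first used to test the model, then to update it.
    from skmultiflow.data import SEAGenerator
    from skmultiflow.trees import HoeffdingTreeClassifier

    stream = SEAGenerator(random_state=42)   # synthetic evolving data stream
    model = HoeffdingTreeClassifier()        # incremental decision tree

    correct, seen = 0, 0
    while seen < 5000 and stream.has_more_samples():
        X, y = stream.next_sample()          # one sample at a time
        if seen > 0:                         # test first (skip untrained model)...
            correct += int(model.predict(X)[0] == y[0])
        model.partial_fit(X, y, classes=stream.target_values)  # ...then train
        seen += 1

    print(f"prequential accuracy: {correct / (seen - 1):.3f}")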
|
338 |
The Future of ERP Systems - Implementation of ERP Systems: An Improved Fit through Machine Learning? Lobo Roca, Andres; Stefanovic, Alexander. January 2021.
ERP suppliers work constantly on innovation in order to stay competitive. To remain competitive, companies must work with the development of ERP systems, machine learning, big data and analytics; using these techniques in combination helps companies develop automated functions for their customers. This study was conducted using a qualitative approach in which we analyze two organisations that deliver ERP systems to different industries. The focus of the study is on machine learning, analytics and ERP systems. The study builds an understanding of these key concepts and how they work together, and of how ERP suppliers work with ERP systems, machine learning and analytics, in order to see whether it is possible to improve the fit of a system with the help of machine learning. The study also shows that there are obstacles organisations must deal with when working data-driven. This study is written in Swedish.
|
339 |
Digital Tools in the Audit Process: A Qualitative Comparison of Large and Small Audit Firms. Todorovic, Ljubisa; Hoxha, Timi. January 2021.
Digitalization affects society extensively, and the audit profession is no exception: auditing is changing as a result of digitalization. In auditing, digitalization expresses itself through various digital tools that can be used in the audit process, with the aim of simplifying the process and making it more efficient. The purpose of the study was to compare how large and small audit firms use digital tools in the audit process. A qualitative method was chosen, and the empirical findings were developed by interviewing auditors from large and small audit firms. Previous literature, definitions of key concepts, and theories such as institutional theory, the TOE framework and diffusion of innovation together formed the theoretical framework used to analyze the empirical findings. The findings indicate that large and small firms use digital tools in similar ways, and often in the same parts of the audit process. According to the study, digital tools are most useful in the reviewing phase; they increase the efficiency of the audit and allow the auditor to focus on more complex parts of it. The difference lies in which digital tools are used: small firms tend to buy external digital tools, while large firms develop their own. It is also clear that digital tools are an important part of the auditor's daily work.
|
340 |
The Major Challenges in DDDM Implementation: A Single-Case Study: What Are the Main Challenges for Business-to-Business MNCs to Implement a Data-Driven Decision-Making Strategy? Varvne, Matilda; Cederholm, Simon; Medbo, Anton. January 2020.
Over the past years, the value of data and of DDDM has increased significantly, as technological advancements have made it possible to store and analyze large amounts of data at a reasonable cost. This has resulted in completely new business models that have disrupted whole industries. DDDM allows businesses to base their decisions on data rather than on gut feeling. To date, the literature provides a general view of the major challenges corporations encounter when implementing a DDDM strategy. However, as the field is still rather new, the challenges identified remain very general, and many corporations, especially B2B MNCs selling consumer goods, seem to struggle with the implementation. Hence, a single-case study of such a corporation, named Alpha, was carried out with the purpose of exploring its major challenges in this process. Semi-structured interviews revealed evidence of four major findings: execution and organizational culture, which are supported in the existing literature, and two additional findings associated with organizational structure and consumer behavior data, which were discovered in the case of Alpha. Based on this, the conclusions drawn were that B2B MNCs selling consumer goods face the challenges of identifying local markets as frontrunners for strategies such as becoming more data-driven, and of finding a way to retrieve consumer behavior data. These two main challenges can provide a starting point for managers implementing DDDM strategies in B2B MNCs selling consumer goods in the future.
|