About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Δυναμική ανάθεση υπολογιστικών πόρων και συντονισμός εκτέλεσης πολύπλοκων διαδικασιών ανάλυσης δεδομένων σε υποδομή Cloud / Dynamic allocation of computational resources and workflow orchestration for data analysis in the Cloud

Σφήκα, Νίκη 10 June 2015 (has links)
Το Υπολογιστικό Νέφος (Cloud Computing) χαρακτηρίζεται ως το νέο μοντέλο ανάπτυξης λογισμικού και παροχής υπηρεσιών στον τομέα των Τεχνολογιών Πληροφορικής και Επικοινωνιών. Τα κύρια χαρακτηριστικά του είναι η κατά απαίτηση διάθεση υπολογιστικών πόρων, η απομακρυσμένη πρόσβαση σε αυτούς μέσω διαδικτύου και η ευελιξία των παρεχόμενων υπηρεσιών. Η ευελιξία επιτρέπει την αναβάθμιση ή υποβάθμιση των υπολογιστικών πόρων σύμφωνα με τις απαιτήσεις του τελικού χρήστη. Επιπλέον, η συνεχής αύξηση του μεγέθους της παραγόμενης από διάφορες πηγές πληροφορίας (διαδίκτυο, επιστημονικά πειράματα) έχει δημιουργήσει μία τεράστια ποσότητα πολύπλοκων και διάχυτων ψηφιακών δεδομένων. Η απόσπαση χρήσιμης γνώσης από μεγάλου όγκου ψηφιακά δεδομένα απαιτεί έξυπνες και ευκόλως επεκτάσιμες υπηρεσίες ανάλυσης, εργαλεία προγραμματισμού και εφαρμογές. Επομένως, η δυνατότητα της ελαστικότητας και της επεκτασιμότητας έχει κάνει το Υπολογιστικό Νέφος να είναι μια αναδυόμενη τεχνολογία αναφορικά με τις αναλύσεις μεγάλου όγκου δεδομένων οι οποίες απαιτούν παραλληλισμό, πολύπλοκες ροές ανάλυσης και υψηλό υπολογιστικό φόρτο εργασίας. Για την καλύτερη δυνατή διαχείριση πολύπλοκων αναλύσεων και ενορχήστρωση των απαιτούμενων διαδικασιών, είναι απαραίτητη η ένθεση ροών εργασιών. Μια ροή εργασίας είναι ένα οργανωμένο σύνολο ενεργειών που πρέπει να πραγματοποιηθούν για να επιτευχθεί μια εμπορική ή ερευνητική διεργασία, καθώς και οι μεταξύ τους εξαρτήσεις αφού κάθε ενέργεια αποτελείται από ορισμένα βήματα που πρέπει να εκτελεστούν σε συγκεκριμένη σειρά. Στην παρούσα μεταπτυχιακή διπλωματική εργασία δημιουργήθηκε ένα σύστημα για τη δυναμική διαχείριση των προσφερόμενων πόρων σε μια υποδομή Υπολογιστικού Νέφους και την εκτέλεση κατανεμημένων υλοποιήσεων υπολογιστικής ανάλυσης δεδομένων. Συγκεκριμένα, η εφαρμογή, αφού λάβει από το χρήστη τα δεδομένα εισόδου για την έναρξη μιας νέας διαδικασίας ανάλυσης, εξετάζει τα δεδομένα των επιστημονικών προβλημάτων καθώς και την πολυπλοκότητά τους και παρέχει δυναμικά και αυτόματα τους αντίστοιχους υπολογιστικούς πόρους για την εκτέλεση της αντίστοιχης λειτουργίας ανάλυσής τους. Επίσης, επιτρέπει την καταγραφή της ανάλυσης και αναθέτει τον συντονισμό της διαδικασίας σε αντίστοιχες ροές εργασιών ώστε να διευκολυνθεί η ενορχήστρωση των παρεχόμενων πόρων και η παρακολούθηση της εκτέλεσης της υπολογιστικής διαδικασίας. Η συγκεκριμένη μεταπτυχιακή εργασία, με τη χρήση τόσο των παρεχόμενων υπηρεσιών μιας υποδομής Υπολογιστικού Νέφους όσο και των δυνατοτήτων που παρέχουν οι ροές εργασιών στην διαχείριση των εργασιών, έχει σαν αποτέλεσμα να απλουστεύει την πρόσβαση, τον έλεγχο, την οργάνωση και την εκτέλεση πολύπλοκων και παράλληλων υλοποιήσεων ανάλυσης δεδομένων από την στιγμή εισαγωγής των δεδομένων από το χρήστη έως τον υπολογισμό του τελικού αποτελέσματος. Πιο αναλυτικά η διπλωματική εργασία επικεντρώθηκε στην πρόταση μιας ολοκληρωμένης λύσης για: 1. την παροχή μιας εφαρμογής στην οποία ο χρήστης θα έχει τη δυνατότητα να εισάγεται και να ξεκινά μια σύνθετη ανάλυση δεδομένων, 2. τη δημιουργία της κατάλληλης υποδομής για τη δυναμική διάθεση πόρων από μια cloud υποδομή ανάλογα με τις ανάγκες του εκάστοτε προβλήματος και 3. την αυτοματοποιημένη εκτέλεση και συντονισμό της διαδικασίας της ανάλυσης με χρήση ροών εργασιών. Για την επικύρωση και αξιολόγηση της εφαρμογής, αναπτύχθηκε η πλατφόρμα IRaaS η οποία παρέχει στους χρήστες της τη δυνατότητα επίλυσης προβλημάτων πολλαπλών πεδίων / πολλαπλών φυσικών.
Η πλατφόρμα IRaaS βασίστηκε πάνω στην προαναφερόμενη εφαρμογή για τη δυναμική ανάθεση υπολογιστικών πόρων και τον συντονισμό εκτέλεσης πολύπλοκων διαδικασιών ανάλυσης δεδομένων. Εκτελώντας μια σειρά αναλύσεων παρατηρήθηκε ότι η συγκεκριμένη εφαρμογή παρέχει καλύτερους χρόνους εκτέλεσης, μικρότερη δέσμευση υπολογιστικών πόρων και κατά συνέπεια μικρότερο κόστος για τις αναλύσεις. Η εγκατάσταση της πλατφόρμας IRaaS για την εκτέλεση των πειραμάτων έγινε στην υποδομή Υπολογιστικού Νέφους του εργαστηρίου Αναγνώρισης Προτύπων. Η υποδομή βασίστηκε στα λογισμικά XenServer και Cloudstack, τα οποία εγκαταστάθηκαν και παραμετροποιήθηκαν στα πλαίσια της παρούσας εργασίας. / Cloud Computing is the new software development and service-provisioning model in the area of Information and Communication Technologies. The main aspects of Cloud Computing are the on-demand allocation of computational resources, the remote access to these resources via the Internet and the elasticity of the provided services. Elasticity provides the capability to scale the computational resources depending on the computational needs. The continuous proliferation of data warehouses, webpages, audio and video streams, tweets, and blogs is generating a massive amount of complex and pervasive digital data. Extracting useful knowledge from huge digital datasets requires smart and scalable analytics services, programming tools, and applications. Due to its elasticity and scalability, Cloud Computing has become an emerging technology for big data analysis, which demands parallelization, complex workflow analysis and massive computational workload. In this respect, workflows have an important role in managing complex flows and orchestrating the required processes. A workflow is an orchestrated set of activities that are necessary in order to complete a commercial or scientific task, as well as any dependencies between these tasks, since each one of them can be further decomposed into finer tasks that need to be executed in a predefined order. In this thesis, a system is presented that dynamically allocates the available resources provided by a cloud infrastructure and orchestrates the execution of complex and distributed data analysis on these allocated resources. In particular, the system calculates the required computational resources (memory and CPU) based on the size of the input data and on the available resources of the cloud infrastructure, and thus dynamically allocates the most suitable resources. Moreover, the application offers the ability to coordinate the distributed analysis process utilising workflows for the orchestration and monitoring of the different tasks of the computational flow execution. Taking advantage of the services provided by a cloud infrastructure as well as the functionality of workflows in task management, this thesis has resulted in simplifying access, control, coordination and execution of complex and parallel data analysis implementations from the moment that a user enters a set of input data to the computation of the final result. In this context, this thesis focuses on a comprehensive and integrated solution that: 1. provides an application through which the user is able to log in and start a complex data analysis, 2. offers the necessary infrastructure for dynamically allocating cloud resources based on the needs of the particular problem, and 3. executes and coordinates the analysis process automatically by leveraging workflows.
In order to validate and evaluate the application, the IRaaS platform was developed, offering the ability to solve multi-domain/multi-physics problems. The IRaaS platform is based on the aforementioned system in order to enable the dynamic allocation of computational resources and to coordinate the execution of complex data analysis processes. By executing a series of experiments with different input data, we observed that the presented application resulted in improved execution times, better allocation of computational resources and, thus, lower cost. In order to perform the experiments, the IRaaS platform was set up on the cloud infrastructure of the Pattern Recognition laboratory. In the context of this thesis, a new infrastructure was installed and parameterized, based on XenServer as the virtualization hypervisor and the CloudStack platform for the creation of a private cloud infrastructure.
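To make the resource-sizing step described above concrete, the following is a minimal Python sketch of how a VM service offering (CPU cores, RAM) could be chosen from the input-data size before a machine is provisioned on a CloudStack-style infrastructure. The offering table, the 3x working-set assumption and the function name are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch: pick the smallest VM offering whose RAM covers the
# estimated in-memory working set of the analysis input. The offerings and
# the blow-up factor are assumptions, not values from the thesis.

def size_offering(input_bytes, offerings):
    """Return the smallest offering whose RAM covers the estimated working set."""
    working_set_mb = (input_bytes / 2**20) * 3  # assume ~3x in-memory blow-up
    for off in sorted(offerings, key=lambda o: o["ram_mb"]):
        if off["ram_mb"] >= working_set_mb:
            return off
    return max(offerings, key=lambda o: o["ram_mb"])  # fall back to the largest

OFFERINGS = [
    {"name": "small",  "cpus": 2, "ram_mb": 4096},
    {"name": "medium", "cpus": 4, "ram_mb": 8192},
    {"name": "large",  "cpus": 8, "ram_mb": 16384},
]

if __name__ == "__main__":
    chosen = size_offering(input_bytes=5 * 2**30, offerings=OFFERINGS)
    print(f"allocate {chosen['name']}: {chosen['cpus']} vCPU / {chosen['ram_mb']} MB RAM")
```

In a real deployment, the chosen offering would then be passed to the cloud provisioning API and the analysis workflow would be started on the resulting VM.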
192

Exploration of parallel graph-processing algorithms on distributed architectures / Exploration d’algorithmes de traitement parallèle de graphes sur architectures distribuées

Collet, Julien 06 December 2017 (has links)
Avec l'explosion du volume de données produites chaque année, les applications du domaine du traitement de graphes ont de plus en plus besoin d'être parallélisées et déployées sur des architectures distribuées afin d'adresser le besoin en mémoire et en ressources de calcul. Si de telles architectures à large échelle existent, issues notamment du domaine du calcul haute performance (HPC), la complexité de programmation et de déploiement d'algorithmes de traitement de graphes sur de telles cibles est souvent un frein à leur utilisation. De plus, la difficile compréhension, a priori, du comportement en performances de ce type d'applications complexifie également l'évaluation du niveau d'adéquation des architectures matérielles avec de tels algorithmes. Dans ce contexte, ces travaux de thèse portent sur l'exploration d'algorithmes de traitement de graphes sur architectures distribuées en utilisant GraphLab, un framework de l'état de l'art dédié à la programmation parallèle de tels algorithmes. En particulier, deux cas d'application réels ont été étudiés en détail et déployés sur différentes architectures à mémoire distribuée, l'un venant de l'analyse de traces d'exécution et l'autre du domaine du traitement de données génomiques. Ces études ont permis de mettre en évidence l'existence de régimes de fonctionnement permettant d'identifier des points de fonctionnement pertinents dans lesquels on souhaitera placer un système pour maximiser son efficacité. Dans un deuxième temps, une étude a permis de comparer l'efficacité d'architectures généralistes (type commodity cluster) et d'architectures plus spécialisées (type serveur de calcul hautes performances) pour le traitement de graphes distribué. Cette étude a démontré que les architectures composées de grappes de machines de type workstation, moins onéreuses et plus simples, permettaient d'obtenir des performances plus élevées. Cet écart est davantage accentué quand les performances sont pondérées par les coûts d'achat et opérationnels. L'étude du comportement en performance de ces architectures a également permis de proposer in fine des règles de dimensionnement et de conception des architectures distribuées, dans ce contexte. En particulier, nous montrons comment l'étude des performances fait apparaître les axes d'amélioration du matériel et comment il est possible de dimensionner un cluster pour traiter efficacement une instance donnée. Finalement, des propositions matérielles pour la conception de serveurs de calcul plus performants pour le traitement de graphes sont formulées. Premièrement, un mécanisme est proposé afin de tempérer la baisse significative de performance observée quand le cluster opère dans un point de fonctionnement où la mémoire vive est saturée. Enfin, les deux applications développées ont été évaluées sur une architecture à base de processeurs basse consommation afin d'étudier la pertinence de telles architectures pour le traitement de graphes. Les performances mesurées en utilisant de telles plateformes sont encourageantes et montrent en particulier que la diminution des performances brutes par rapport aux architectures existantes est compensée par une efficacité énergétique bien supérieure. / With the advent of ever-increasing graph datasets in a large number of domains, parallel graph-processing applications deployed on distributed architectures are more and more needed to cope with the growing demand for memory and compute resources.
Though large-scale distributed architectures are available, notably in the High-Performance Computing (HPC) domain, the programming and deployment complexity of such graph-processing algorithms, whose parallelization and complexity are highly data-dependent, hampers usability. Moreover, the difficult evaluation of the performance behavior of these applications complicates the assessment of the relevance of the architecture used. With this in mind, this thesis work deals with the exploration of graph-processing algorithms on distributed architectures, notably using GraphLab, a state-of-the-art graph-processing framework. Two use-cases are considered. For each, a parallel implementation is proposed and deployed on several distributed architectures of varying scales. This study highlights operating ranges, which can eventually be leveraged to appropriately select a relevant operating point with respect to the datasets processed and the cluster nodes used. A further study enables a performance comparison of commodity cluster architectures and higher-end compute servers using the two use-cases previously developed. This study highlights the particular relevance of using clustered commodity workstations, which are considerably cheaper and simpler with respect to node architecture, over higher-end systems in this applicative context. Then, this thesis work explores how performance studies are helpful in cluster design for graph-processing. In particular, studying the throughput performance of a graph-processing system gives fruitful insights for further node architecture improvements. Moreover, this work shows that a more in-depth performance analysis can lead to guidelines for the appropriate sizing of a cluster for a given workload, paving the way toward resource allocation for graph-processing. Finally, hardware improvements for next generations of graph-processing servers are proposed and evaluated. A flash-based victim-swap mechanism is proposed for the mitigation of unwanted overloaded operations. Then, the relevance of ARM-based microservers for graph-processing is investigated with a port of GraphLab on an NVIDIA TX2-based architecture.
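As background for the programming model mentioned above, the following is a framework-free Python sketch of the gather-apply-scatter (GAS) pattern that GraphLab-style engines expose, using PageRank as the vertex program. The toy graph, damping factor and iteration count are illustrative assumptions, not the thesis workloads.

```python
# A framework-free sketch of the gather-apply-scatter (GAS) model used by
# GraphLab-style frameworks, with PageRank as the vertex program.

def pagerank_gas(adjacency, damping=0.85, iterations=20):
    """adjacency: dict mapping each vertex to its list of out-neighbours."""
    ranks = {v: 1.0 for v in adjacency}
    in_edges = {v: [] for v in adjacency}
    for src, outs in adjacency.items():
        for dst in outs:
            in_edges[dst].append(src)

    for _ in range(iterations):
        new_ranks = {}
        for v in adjacency:
            # gather: sum contributions from in-neighbours
            total = sum(ranks[u] / len(adjacency[u]) for u in in_edges[v])
            # apply: update the vertex value
            new_ranks[v] = (1 - damping) + damping * total
            # scatter: a distributed engine would signal out-neighbours here
        ranks = new_ranks
    return ranks

if __name__ == "__main__":
    g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    print(pagerank_gas(g))
```

A distributed engine partitions the vertices across cluster nodes and runs the same gather/apply/scatter steps in parallel, which is what makes the performance data-dependent in the way the thesis studies.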
193

Analysis of Eye-Tracking Data in Visualization and Data Space

Alam, Sayeed Safayet 12 May 2017 (has links)
Eye-tracking devices can tell us where on the screen a person is looking. Researchers frequently analyze eye-tracking data manually, by examining every frame of a visual stimulus used in an eye-tracking experiment so as to match the 2D screen coordinates provided by the eye-tracker to related objects and content within the stimulus. Such a task requires significant manual effort and is not feasible for analyzing data collected from many users, long experimental sessions, and heavily interactive and dynamic visual stimuli. In this dissertation, we present a novel analysis method: we instrument visualizations that have open source code and leverage real-time information about the layout of the rendered visual content to automatically relate gaze samples to the visual objects drawn on the screen. Since the visual objects shown in a visualization stand for data, the method allows us to directly detect the data that users focus on, or Data of Interest (DOI). This dissertation makes two contributions. First, we demonstrate the feasibility of collecting DOI data for real-life visualizations in a reliable way, which is not self-evident. Second, we formalize the process of collecting and interpreting DOI data and test whether automated DOI detection can lead to research workflows and insights not possible with traditional, manual approaches.
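The core mapping step described above can be sketched as follows, assuming the instrumented visualization exposes the bounding boxes of its rendered marks together with the data items they encode. The Mark structure, field names and sample values are hypothetical, not the dissertation's code.

```python
# Hedged sketch: hit-test each gaze sample against the layout reported by an
# instrumented visualization to count fixations per data item (DOI).

from dataclasses import dataclass

@dataclass
class Mark:
    datum_id: str   # the data item this visual object encodes
    x: float
    y: float
    w: float
    h: float

def data_of_interest(gaze_samples, layout):
    """Count gaze samples landing on each rendered mark -> Data of Interest."""
    hits = {}
    for t, gx, gy in gaze_samples:            # (timestamp, screen x, screen y)
        for mark in layout:
            if mark.x <= gx <= mark.x + mark.w and mark.y <= gy <= mark.y + mark.h:
                hits[mark.datum_id] = hits.get(mark.datum_id, 0) + 1
    return hits

if __name__ == "__main__":
    layout = [Mark("row-17", 100, 50, 20, 20), Mark("row-42", 300, 80, 20, 20)]
    samples = [(0.0, 110, 60), (0.1, 305, 85), (0.2, 500, 500)]
    print(data_of_interest(samples, layout))   # {'row-17': 1, 'row-42': 1}
```

Because the layout is queried at runtime, the same mapping keeps working when the visualization is interactive or animated, which is exactly where manual frame-by-frame analysis breaks down.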
194

Analytics on Indoor Moving Objects with Applications in Airport Baggage Tracking

Ahmed, Tanvir 20 June 2016 (has links)
A large part of people's lives is spent in indoor spaces such as office and university buildings, shopping malls, subway stations, airports, museums, and community centers. Such spaces can be very large, and the paths inside them can be constrained and complex. Deployment of indoor tracking technologies like RFID, Bluetooth, and Wi-Fi can track people and object movements from one symbolic location to another within these indoor spaces. The resulting tracking data can be massive in volume. Analyzing these large volumes of tracking data can reveal interesting patterns that can provide opportunities for different types of location-based services, security, indoor navigation, identifying problems in the system, and, finally, service improvements. In addition to the huge volume, the structure of the unprocessed raw tracking data is complex in nature and not directly suitable for further efficient analysis. It is essential to develop efficient data management techniques and perform different kinds of analysis to make the data beneficial to the end user. The Ph.D. study is sponsored by the BagTrack Project (http://daisy.aau.dk/bagtrack). The main technological objective of this project is to build a global IT solution to significantly improve the worldwide aviation baggage handling quality. The Ph.D. study focuses on developing data management techniques for efficient and effective analysis of RFID-based symbolic indoor tracking data, especially for the baggage tracking scenario. First, the thesis describes a carefully designed data warehouse solution, with a relational schema sitting underneath a multidimensional data cube, that can handle the many complexities in the massive non-traditional RFID baggage tracking data. The thesis presents the ETL flow that loads the data warehouse with the appropriate tracking data from the data sources. Second, the thesis presents a methodology for mining risk factors in RFID baggage tracking data. The aim is to find the factors and interesting patterns that are responsible for baggage mishandling. Third, the thesis presents an online risk prediction technique for indoor moving objects. The target is to develop a risk prediction system that can predict the risk of an object in real time during its operation so that the object can be saved from being mishandled. Fourth, the thesis presents two graph-based models for constrained and semi-constrained indoor movements, respectively. These models are used for mapping the tracking records into mapping records that represent the entry and exit times of an object at a symbolic location. The mapping records are then used for finding dense locations. Fifth, the thesis presents an efficient indexing technique, called the DLT-Index, for efficiently processing dense location queries as well as point and interval queries. The outcome of the thesis can contribute to the aviation industry for efficiently processing different analytical queries, finding problems in baggage management systems, and improving baggage handling quality. The developed data management techniques also contribute to the spatio-temporal data management and data mining field. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
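The construction of mapping records from raw readings described above can be illustrated with the following hedged Python sketch: consecutive readings of an object at the same symbolic location are collapsed into one (object, location, entry time, exit time) record. The field layout and the toy baggage data are assumptions for illustration, not the thesis schema.

```python
# Hedged sketch: collapse raw RFID readings (object, location, timestamp)
# into mapping records (object, location, entry_time, exit_time).

from itertools import groupby

def to_mapping_records(raw_readings):
    """raw_readings: iterable of (object_id, location, timestamp) tuples."""
    records = []
    by_object = lambda r: r[0]
    for obj, readings in groupby(sorted(raw_readings, key=by_object), key=by_object):
        prev_loc, entry, last_ts = None, None, None
        for _, loc, ts in sorted(readings, key=lambda r: r[2]):
            if loc != prev_loc:
                if prev_loc is not None:
                    records.append((obj, prev_loc, entry, last_ts))  # close previous stay
                prev_loc, entry = loc, ts
            last_ts = ts
        if prev_loc is not None:
            records.append((obj, prev_loc, entry, last_ts))
    return records

if __name__ == "__main__":
    raw = [("bag1", "check-in", 0), ("bag1", "check-in", 5),
           ("bag1", "sorter", 30), ("bag1", "gate-12", 90)]
    for rec in to_mapping_records(raw):
        print(rec)   # e.g. ('bag1', 'check-in', 0, 5)
```

Records of this form are what downstream steps such as dense-location queries and risk mining would consume.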
195

Performance Characterization and Optimization of In-Memory Data Analytics on a Scale-up Server

Awan, Ahsan Javed January 2017 (has links)
The sheer increase in the volume of data over the last decade has triggered research in cluster computing frameworks that enable web enterprises to extract big insights from big data. While Apache Spark defines the state of the art in big data analytics platforms for (i) exploiting data-flow and in-memory computing and (ii) exhibiting superior scale-out performance on commodity machines, little effort has been devoted to understanding the performance of in-memory data analytics with Spark on modern scale-up servers. This thesis characterizes the performance of in-memory data analytics with Spark on scale-up servers. Through empirical evaluation of representative benchmark workloads on a dual-socket server, we have found that in-memory data analytics with Spark exhibit poor multi-core scalability beyond 12 cores due to thread-level load imbalance and work-time inflation (the additional CPU time spent by threads in a multi-threaded computation beyond the CPU time required to perform the same work in a sequential computation). We have also found that workloads are bound by the latency of frequent data accesses to memory. By enlarging the input data size, application performance degrades significantly due to the substantial increase in wait time during I/O operations and garbage collection, despite a 10% better instruction retirement rate (due to lower L1 cache misses and higher core utilization). For data accesses, we have found that simultaneous multi-threading is effective in hiding the data latencies. We have also observed that (i) data locality on NUMA nodes can improve the performance by 10% on average, and (ii) disabling next-line L1-D prefetchers can reduce the execution time by up to 14%. Regarding the garbage collection impact, we match memory behavior with the garbage collector to improve the performance of applications by between 1.6x and 3x, and recommend using multiple small Spark executors, which can provide up to a 36% reduction in execution time over a single large executor. Based on the characteristics of the workloads, the thesis envisions near-memory and near-storage hardware acceleration to improve the single-node performance of scale-out frameworks like Apache Spark. Using modeling techniques, it estimates a speed-up of 4x for Apache Spark on scale-up servers augmented with near-data accelerators.
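The multiple-small-executors recommendation can be sketched as a PySpark configuration like the one below, assuming a cluster manager such as YARN (where spark.executor.instances takes effect). The specific executor count, core and memory figures are illustrative assumptions, not the configuration evaluated in the thesis.

```python
# Hedged sketch: several small executors on a scale-up node instead of a
# single large one. Numbers are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("scale-up-node-analytics")
    .config("spark.executor.instances", "6")   # 6 small executors ...
    .config("spark.executor.cores", "4")       # ... with 4 cores each ...
    .config("spark.executor.memory", "8g")     # ... and an 8 GB heap each
    .getOrCreate()
)

df = spark.range(0, 10_000_000)                # toy workload to exercise the executors
print(df.selectExpr("sum(id) AS total").collect())
spark.stop()
```

Smaller heaps per executor keep garbage-collection pauses short, which is consistent with the thesis observation that matching memory behavior to the collector improves performance.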
196

Modelo de identificación de ciberamenazas para PYMES de servicios tecnológicos usando herramientas de Data Analytics / Cyberthreat Identification Model for Technology Services SMEs using Data Analytics

Villayzan Chancafe, Renzo Adrian, Gutierrez Perona, Juan Diego 27 October 2020 (has links)
Este proyecto tiene como propósito mejorar la capacidad que tienen las empresas pequeñas y medianas de detectar ciberamenazas que puedan encontrarse en sus ambientes, y que no hayan sido detectadas por las herramientas de seguridad tradicionales, como los antivirus. El objetivo del proyecto fue desarrollar un modelo de análisis de logs que permita identificar ciberamenazas utilizando herramientas de Data Analytics en PYMES de servicios tecnológicos. De acuerdo con un estudio realizado por el Ponemon Institute en el 2018, el 82% de las empresas encuestadas reportaron que los exploits maliciosos evadieron sus soluciones de antivirus. El modelo propuesto fue validado mediante una simulación de ataque de phishing, el cual permitió generar un fileless malware que consiguió establecer persistencia en la computadora de la víctima. Los registros obtenidos a partir de la simulación fueron utilizados para entrenar un modelo de machine learning, el cual proporcionó la información necesaria para clasificar el evento según las tácticas y técnicas del framework Att&ck del MITRE. Finalmente, con la clasificación del ataque, se tiene la capacidad de proponer estrategias de mitigación y mejoras en las políticas de seguridad de información de la empresa. Adicionalmente, al analizar los resultados obtenidos a partir del experimento de machine learning, se evidenció su eficacia, pues presentaba mejores métricas en comparación con investigaciones académicas similares. / The purpose of this project is to improve the ability of small and medium-sized companies to detect cyber threats that may be found in their environments and that have not been detected by traditional security tools, such as antivirus. The main objective of the project was to develop a log analysis model that allows identifying cyber threats using Data Analytics tools in technology services SMEs. According to a study conducted by the Ponemon Institute in 2018, 82% of surveyed companies reported that malicious exploits evaded their antivirus solutions. The proposed model was validated by means of a phishing attack simulation that delivered a fileless malware payload, which managed to establish persistence on the victim's computer. The logs obtained from the attack simulation were used to train a machine learning model that provided the necessary information to classify the event according to the tactics and techniques of the MITRE Att&ck framework. Finally, with the classification of the attack, we had the ability to propose mitigation strategies and improvements in the company's information security policies. Additionally, the analysis of the results obtained from the machine learning experiment demonstrated its effectiveness, as it presented better metrics compared to similar academic research. / Tesis
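The log-classification step described above can be sketched with a scikit-learn text classifier that maps event descriptions to MITRE ATT&CK tactic labels. The toy log lines, the labels and the choice of TF-IDF plus logistic regression are assumptions for illustration; they are not the dataset or the exact model used in the thesis.

```python
# Hedged sketch: supervised classification of log events into ATT&CK tactics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_logs = [
    "powershell -enc ... spawned by winword.exe",
    "registry run key added for persistence on user logon",
    "scheduled task created to launch script at startup",
    "outbound connection to rare domain over port 443",
]
train_tactics = ["Execution", "Persistence", "Persistence", "Command and Control"]

# TF-IDF features over word unigrams/bigrams feeding a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_logs, train_tactics)

new_event = ["run key set under HKCU to relaunch payload after reboot"]
print(model.predict(new_event))   # e.g. ['Persistence']
```

Once an event is labeled with a tactic, the mapping to mitigation strategies can follow the corresponding ATT&CK guidance, as the abstract describes.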
197

Big Data usage in the Maritime industry : A Qualitative Study for the use of Port State Control (PSC) inspection data by shipping professionals

Ampatzidis, Dimitrios January 2021 (has links)
During their calls at ports, vessels may be inspected by the local Port State Control (PSC) authorities regarding their implementation of International Maritime Organization guidelines for safety and security. This qualitative study focuses on how shipping professionals understand and use Big Data in the PSC inspection databases, what characteristics they think these data should have, what value they attach to these big data, and how they use them to support the decision-making process within their organizations. The study conducted interviews with shipping professionals, collected their perspectives, and analyzed their statements with Thematic Analysis to reach its outcome. Many researchers have discussed Big Data characteristics and the value an organization or a researcher could derive from Big Data and Analytics; however, there is no universally accepted theory regarding Big Data characteristics and their value for database users. The research concluded that Big Data from the PSC inspection procedures provides valid and helpful information that broadens professionals' understanding of inspection control and safety needs; through this, they can upscale their internal operations and their decision-making procedures, as long as these data are characterized by volume, velocity, veracity, and complexity.
198

Využití datové analýzy v rámci interního auditu / Use of Data Analysis in Internal Audit

Daňková, Natalie January 2019 (has links)
The master's thesis demonstrates the benefits of using data analysis within internal audit, using the example of a concrete audit of procurement cards at the company Zebra Technologies. The theoretical part describes the basic theoretical starting points concerning internal audit methodology. The practical part includes a description of the selected analysis executed during the examined audit.
199

The Major Challenges in DDDM Implementation: A Single-Case Study : What are the Main Challenges for Business-to-Business MNCs to Implement a Data-Driven Decision-Making Strategy?

Varvne, Matilda, Cederholm, Simon, Medbo, Anton January 2020 (has links)
Over the past years, the value of data and DDDM has increased significantly as technological advancements have made it possible to store and analyze large amounts of data at a reasonable cost. This has resulted in completely new business models that have disrupted whole industries. DDDM allows businesses to base their decisions on data, as opposed to gut feeling. Up to this point, the literature provides a general view of the major challenges corporations encounter when implementing a DDDM strategy. However, as the field is still rather new, the challenges identified are still very general, and many corporations, especially B2B MNCs selling consumer goods, seem to struggle with this implementation. Hence, a single-case study on such a corporation, named Alpha, was carried out with the purpose of exploring its major challenges in this process. Semi-structured interviews revealed evidence of four major findings: execution and organizational culture, which are supported in existing literature, and two additional findings associated with organizational structure and consumer behavior data, which were discovered in the case of Alpha. Based on this, the conclusions drawn were that B2B MNCs selling consumer goods encounter the challenges of identifying local markets as frontrunners for strategies such as becoming more data-driven, as well as the need to find a way to retrieve consumer behavior data. With these two main challenges identified, this can provide a starting point for managers when implementing DDDM strategies in B2B MNCs selling consumer goods in the future.
200

Business Intelligence: Competencies and Cross-Functional Integration : A Case Study at ASSA ABLOY

Borgsø, Jon Ariel, Svensson, Maxim January 2021 (has links)
Business Intelligence (BI) and data analytics have grown to become one of the most prioritized technological investments for organizations today. For BI systems to be valuable for organizations' decision making and for supporting end-users, research argues that competencies from multiple areas need to be represented in the work with BI. This includes knowledge of both IT and business domains, where challenges such as a lack of domain competencies have been identified in the Swedish industry sector. The purpose of this study is therefore to investigate the representation of BI competencies, with a focus on IT, the business domain, data analytics, and their integration. The research is conducted through a qualitative case study at ASSA ABLOY, a leading company in the Swedish industry sector, where interviews were conducted with respondents involved with five BI tools from different functions of the company. The empirical findings show that competencies of IT and business domains are represented to a higher degree than data analytics. In addition, the findings show that while integration between these areas is being promoted, there is potential for further involvement of in-house IT and a need for cross-border knowledge to bridge the gap between the functions involved with BI.
