61 |
The Major Challenges in DDDM Implementation: A Single-Case Study: What are the Main Challenges for Business-to-Business MNCs to Implement a Data-Driven Decision-Making Strategy? Varvne, Matilda, Cederholm, Simon, Medbo, Anton January 2020 (has links)
Over the past years, the value of data and DDDM has increased significantly, as technological advancements have made it possible to store and analyze large amounts of data at a reasonable cost. This has resulted in completely new business models that have disrupted entire industries. DDDM allows businesses to base their decisions on data, as opposed to gut feeling. Up until this point, the literature provides a general view of the major challenges corporations encounter when implementing a DDDM strategy. However, as the field is still rather new, the challenges identified remain very general, and many corporations, especially B2B MNCs selling consumer goods, seem to struggle with this implementation. Hence, a single-case study of such a corporation, here called Alpha, was carried out with the purpose of exploring its major challenges in this process. Semi-structured interviews revealed evidence of four major findings: two of them, execution and organizational culture, were supported in existing literature, while two additional findings, associated with organizational structure and consumer behavior data, were discovered in the case of Alpha. Based on this, the conclusion drawn was that B2B MNCs selling consumer goods face the challenges of identifying local markets as frontrunners for strategies such as becoming more data-driven, as well as the need to find a way to retrieve consumer behavior data. These two main challenges can provide a starting point for managers implementing DDDM strategies in B2B MNCs selling consumer goods in the future.
|
62 |
Computational Intelligent Sensor-rank Consolidation Approach for Industrial Internet of Things (IIoT) Mekala, M. S., Rizwan, Patan, Khan, Mohammad S. 01 January 2021 (has links)
Continuous field monitoring and sensor-data search remain essential elements underpinning the influence of the Internet of Things (IoT). Most existing systems rely on spatial coordinates or semantic keywords to retrieve the required data, constraints that are not comprehensive because of sensor cohesion and the randomness of unique localization. To address this issue, we propose a deep-learning-inspired sensor-rank consolidation (DLi-SRC) system comprising three algorithms. First, a sensor cohesion algorithm based on the Lyapunov approach to accelerate sensor stability. Second, a sensor unique-localization algorithm based on a rank-inferior measurement index to avoid data redundancy and data loss. Third, a heuristic directive algorithm to improve data-search efficiency, which returns appropriately ranked sensor results according to the search specifications. We conducted thorough simulations to assess the effectiveness of DLi-SRC. The outcomes reveal significant performance gains over benchmark approaches, including improved search efficiency and service quality, a 91% enhancement of the sensor existence rate, and a 49% gain in sensor energy.
|
63 |
Big Data and AI in Customer Support : A study of Big Data and AI in customer service with a focus on value-creating factors from the employee perspective. Licina, Aida January 2020 (has links)
The advance of the Internet has resulted in an immensely interconnected world that produces a tremendous amount of data, and it has profoundly changed our daily lives and behaviours. The trend is especially visible in e-commerce, where customers have come to demand more and more from product and service providers. Moreover, with rising competition, companies have to adopt new ways of working to keep their position on the market as well as to retain and attract customers. One important factor for this is excellent customer service, and today companies adopt technologies like BDA and AI to enhance and provide it. This study aims to investigate how two Swedish corporations extract value from their customer service with the help of BDA and AI. It also strives to create an understanding of the expectations, requirements and implications of these technologies from the participants' perspective, in this case the employees of the two businesses. Many companies fail to see the true potential that these technologies can bring, especially in the field of customer service. This study helps to address that challenge: by pinpointing the 'value factors' that the participating companies extract, it may encourage the implementation of digital technologies in customer service regardless of company size. The thesis was conducted with a qualitative approach, using semi-structured interviews and systematic observations at two Swedish companies acting on the Chinese market. The findings from the interviews show that the companies actively use BDA and AI in their customer service, and several value factors are pinpointed in the different stages of customer service.
The most recurring themes are "proactive support", "relationship establishment", "identifying attitudes and behaviours" and "real-time support". As for the value-creating factors before and after the actual interaction, the recurring themes are "competitive advantage", "high-impact customer insights", "classification", "practicality", and "reflection and development". This thesis provides knowledge that can help companies deepen their understanding of how important customer service supported by BDA and AI is, and how it can contribute to competitive advantage as well as customer loyalty. Since the thesis only investigated Swedish organizations on the Shanghainese market, it would be of interest to continue research on Swedish companies, as China is seen to be at the forefront when it comes to utilizing these technologies.
|
64 |
Analyzing Small Businesses' Adoption of Big Data Security Analytics. Mathias, Henry 01 January 2019 (has links)
Despite the increased cost of data breaches due to advanced, persistent threats from malicious sources, the adoption of big data security analytics among U.S. small businesses has been slow. Anchored in diffusion of innovation theory, the purpose of this correlational study was to examine ways to increase the adoption of big data security analytics among small businesses in the United States by examining the relationship between small business leaders' perceptions of big data security analytics and its adoption. The research questions were developed to determine how to increase adoption, measured as a function of the user's perceived attributes of innovation represented by the independent variables: relative advantage, compatibility, complexity, observability, and trialability. The study included a cross-sectional survey distributed online to a convenience sample of 165 small businesses. Pearson correlations and multiple linear regression were used to analyze the relationships between variables. There were no significant positive correlations between relative advantage, compatibility, and the dependent variable, adoption; however, there were significant negative correlations between complexity, trialability, and adoption, and a significant positive correlation between observability and adoption. The implications for positive social change include an increase in knowledge, skill sets, and jobs for employees and increased confidentiality, integrity, and availability of systems and data for small businesses. Social benefits include improved decision making for small businesses and more secure transactions between systems by detecting and eliminating advanced, persistent threats.
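The study's analysis pairs Pearson correlations with multiple linear regression over the five perceived attributes of innovation. As a minimal sketch of the correlation step (the variable names and Likert-style sample values below are invented for illustration and are not data from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical survey scores (1-5 Likert) for one attribute vs. adoption.
complexity = [4, 5, 3, 4, 5, 2, 4]
adoption   = [2, 1, 3, 2, 1, 4, 2]

r = pearson_r(complexity, adoption)
print(round(r, 3))  # negative r: higher perceived complexity, lower adoption
```

A full replication would compute such coefficients for all five attributes and then fit a multiple regression of adoption on them jointly.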
|
65 |
Datadriven affärsanalys : en studie om värdeskapande mekanismer / Data-driven business analysis : a study about value creating mechanisms. Adamsson, Anton, Jönsson, Julius January 2021 (has links)
Business analysis is an increasingly popular trend that many organisations use because of its potential to establish valuable insights, increase profitability and improve operational efficiency. This has proved to be problematic, as the desired result is not always a certainty. The purpose of the study is to examine how fashion retailers can use data-driven business analysis to generate positive insights through value-creating mechanisms. Based on semi-structured interviews with employees of a fashion company, and with a starting point in previous research, we have mapped how data-driven business analysis is used to create value by applying a process model to the business. The empirical study resulted in three valuable insights: (1) the examined company uses business analysis to increase profitability; (2) the company's data assets are sufficient to extract valuable insights; (3) the company works with influencers, which can be categorised as a business analysis capability not defined in previous research.
|
66 |
Particulate Matter Matters. Meyer, Holger J., Gruner, Hannes, Waizenegger, Tim, Woltmann, Lucas, Hartmann, Claudio, Lehner, Wolfgang, Esmailoghli, Mahdi, Redyuk, Sergey, Martinez, Ricardo, Abedjan, Ziawasch, Ziehn, Ariane, Rabl, Tilmann, Markl, Volker, Schmitz, Christian, Serai, Dhiren Devinder, Gava, Tatiane Escobar 15 June 2023 (has links)
For the second time, the Data Science Challenge took place as part of the 18th symposium “Database Systems for Business, Technology and Web” (BTW) of the Gesellschaft für Informatik (GI). The Challenge was organized by the University of Rostock and sponsored by IBM and SAP. This year, the focus of the challenge was the integration, analysis and visualization of data on particulate matter pollution. After a preselection round, the accepted participants had one month to adapt their approach to a substantiated problem, the real challenge. The final presentations took place at BTW 2019 in front of the prize jury and the attending audience. In this article, we give a brief overview of the schedule and organization of the Data Science Challenge. In addition, the problem to be solved and the participants' solutions are presented.
|
67 |
Comparison of Popular Data Processing Systems. Nasr, Kamil January 2021 (has links)
Data processing is generally defined as the collection and transformation of data to extract meaningful information. It involves a multitude of processes, such as validation, sorting, summarization, and aggregation, to name a few. Many analytics engines exist today for large-scale data processing, namely Apache Spark, Apache Flink and Apache Beam, each with its own advantages and drawbacks. In this thesis report, we used all three of these engines to process data from the Carbon Monoxide Daily Summary dataset to determine the emission levels per area and unit of time. We then compared the performance of the three engines using different metrics. The results showed that Apache Beam, while offering greater convenience when writing programs, was slower than Apache Flink and Apache Spark. The Spark runner was the fastest runner in Beam, and Apache Spark was the fastest data processing framework overall.
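The three engines express this workload in different APIs, but the core task, emission levels per area and unit of time, is a grouped aggregation. A framework-neutral Python sketch of that logic follows (the field names are assumptions for illustration; the actual dataset schema may differ):

```python
from collections import defaultdict

def mean_emissions(records):
    """Average CO measurement per (state, year) group - the grouped
    aggregation each engine (Spark, Flink, Beam) expresses in its own API."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for rec in records:
        key = (rec["state"], rec["year"])
        sums[key] += rec["co_ppm"]
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Hypothetical daily-summary rows:
rows = [
    {"state": "AZ", "year": 2016, "co_ppm": 0.5},
    {"state": "AZ", "year": 2016, "co_ppm": 0.7},
    {"state": "CA", "year": 2016, "co_ppm": 1.1},
]
print(mean_emissions(rows))
```

In Spark this would be a `groupBy` plus an average aggregate; in Beam, a `GroupByKey` followed by a mean combiner; the engines differ mainly in how this plan is distributed and executed.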
|
68 |
Data-Driven Simulation Modeling of Construction and Infrastructure Operations Using Process Knowledge Discovery. Akhavian, Reza 01 January 2015 (has links)
Within the architecture, engineering, and construction (AEC) domain, simulation modeling is mainly used to facilitate decision-making by enabling the assessment of different operational plans and resource arrangements that are otherwise difficult (if not impossible), expensive, or time-consuming to evaluate in real-world settings. The accuracy of such models directly affects their reliability as a basis for important decisions such as project completion time estimation and resource allocation. Compared to other industries, this is particularly important in construction and infrastructure projects due to the high resource costs and the societal impacts of these projects. Discrete event simulation (DES) is a decision-making tool that can benefit the design, control, and management of construction operations. Despite recent advancements, most DES models used in construction are created during the early planning and design stage, when the lack of factual information from the project prohibits the use of realistic data in simulation modeling. The resulting models, therefore, are often built on rigid (subjective) assumptions and design parameters (e.g., precedence logic, activity durations). In all such cases, and in the absence of an inclusive methodology to incorporate real field data as the project evolves, modelers rely on information from previous projects (a.k.a. secondary data), expert judgments, and subjective assumptions to generate simulations to predict future performance. These and similar shortcomings have to a large extent limited the use of traditional DES tools to preliminary studies and long-term planning of construction projects. In the realm of business process management, process mining, a relatively new research domain, seeks to automatically discover a process model by observing activity records and extracting information about processes.
The research presented in this Ph.D. dissertation was in part inspired by the prospect of construction process mining using sensory data collected from field agents, which enabled the extraction of the operational knowledge necessary to generate and maintain the fidelity of simulation models. A preliminary study was conducted to demonstrate the feasibility and applicability of data-driven knowledge-based simulation modeling, with a focus on data collection using a wireless sensor network (WSN) and a rule-based taxonomy of activities. The resulting knowledge-based simulation models performed very well in predicting key performance measures of real construction systems. Next, a pervasive mobile data collection and mining technique was adopted, and an activity recognition framework for construction equipment and worker tasks was developed. Data was collected using smartphone accelerometers and gyroscopes from construction entities to generate significant statistical time- and frequency-domain features. The extracted features served as the input to different types of machine learning algorithms applied to various construction activities. The trained predictive algorithms were then used to extract activity durations and calculate probability distributions to be fused into the corresponding DES models. Results indicated that the generated data-driven knowledge-based simulation models outperform static models created from engineering assumptions and estimations with regard to how well performance measure outputs match reality.
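The time- and frequency-domain features mentioned above can be illustrated with a short sketch. This is a simplified illustration, not the dissertation's actual pipeline: the feature set is assumed, and a real implementation would use an FFT library rather than a naive DFT.

```python
import cmath
import math
import statistics

def features(window):
    """Time- and frequency-domain features for one accelerometer window,
    of the kind fed to activity-recognition classifiers."""
    n = len(window)
    mean = statistics.fmean(window)
    std = statistics.pstdev(window)
    # Signal energy (time domain)
    energy = sum(x * x for x in window) / n
    # Naive DFT magnitude spectrum over the non-DC bins
    mags = [abs(sum(window[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n // 2)]
    dominant_bin = 1 + mags.index(max(mags))  # strongest frequency component
    return {"mean": mean, "std": std, "energy": energy,
            "dominant_bin": dominant_bin}

# Synthetic window: a 2-cycle sinusoid, as if sampled from a smartphone
window = [math.sin(2 * math.pi * 2 * t / 32) for t in range(32)]
print(features(window)["dominant_bin"])  # the 2-cycle component dominates
```

Feature vectors like these, computed per sliding window, would then be labeled with the activity being performed and used to train the classifiers whose predictions yield activity durations.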
|
69 |
Big Maritime Data: The promises and perils of the Automatic Identification System : Shipowners and operators’ perceptions. Kouvaras, Andreas January 2022 (has links)
The term Big Data has been gaining importance both at the academic and the business level. Information technology plays a critical role in shipping, since there is a high demand for fast transfer and communication between the parties to a shipping contract. The development of the Automatic Identification System (AIS) is intended to improve maritime safety by tracking vessels and exchanging inter-ship information. The purpose of this master's thesis was to a) investigate which business decisions the Automatic Identification System helps shipowners and operators (i.e., users) make, b) find the benefits and perils arising from its use, and c) investigate possible improvements based on the users' perceptions. The thesis is a qualitative study using the interpretivism paradigm. Data were collected through semi-structured interviews. A total of six people participated, meeting the following criteria: a) holding a position in the technical department, as DPA, or as shipowner, b) participating in business decisions, c) working for a shipping company that owns a fleet, and d) dealing with AIS data. The thematic analysis led to twenty-six codes, twelve categories and five concepts. Empirical findings showed that AIS data mostly contributes to strategic business decisions. Participants are interested in using AIS data to measure the efficiency of their fleet and ports, to estimate fuel consumption, to reduce costs, to protect the environment and people's health, to analyze the trade market, to predict the time of arrival and the optimal route and speed, to maintain the highest security levels, and to reduce the inaccuracies caused by manual input of some AIS attributes. Participants also mentioned some AIS challenges, including technological improvement (e.g., transponders, antennas) as well as the operation of autonomous vessels.
Finally, this master's thesis contributes to prescriptive and descriptive theory, helping stakeholders reach new decisions and researchers and developers advance their products.
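One of the uses the participants mention is predicting time of arrival from AIS data. A minimal sketch of the idea, assuming only a position report (latitude/longitude) and speed over ground from the AIS feed; the message decoding itself and any route or weather modeling are omitted:

```python
import math

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two positions."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

def eta_hours(lat, lon, dest_lat, dest_lon, sog_knots):
    """Naive ETA: remaining great-circle distance over speed over ground."""
    if sog_knots <= 0:
        raise ValueError("vessel not under way")
    return haversine_nm(lat, lon, dest_lat, dest_lon) / sog_knots

# A vessel one degree of latitude (about 60 nm) from port, making 12 knots:
print(round(eta_hours(0.0, 0.0, 1.0, 0.0, 12.0), 1))  # ~5.0 hours
```

Real ETA prediction over AIS streams would account for routes, traffic, and speed profiles, which is why participants point to machine learning on historical AIS tracks rather than a straight-line estimate like this.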
|
70 |
Node Centric Community Detection and Evolutional Prediction in Dynamic Networks. Oluwafolake A Ayano (13161288) 27 July 2022 (has links)
<p>Advances in technology have led to the availability of data from different platforms, such as the web and social media. Much of this data can be represented in the form of a network consisting of a set of nodes connected by edges. The nodes represent the items in the network, while the edges represent the interactions between the nodes. Community detection methods have been used extensively in analyzing these networks. However, community detection in evolving networks has been a significant challenge because of the frequent changes to the networks and the need for real-time analysis. Using static community detection methods to analyze dynamic networks is not appropriate, because static methods do not retain a network’s history and cannot provide real-time information about the communities in the network.</p>
<p>Existing incremental methods treat changes to the network as a sequence of edge additions and/or removals; however, in many real-world networks, changes occur when a node is added with all its edges connecting simultaneously. </p>
<p>For efficient processing of such large networks in a timely manner, there is a need for an adaptive analytical method that can process large networks without recomputing the entire network after its evolution and treat all the edges involved with a node equally. </p>
<p>We propose a node-centric community detection method that incrementally updates the community structure using the already known structure of the network, avoiding recomputation of the entire network from scratch and consequently achieving a high-quality community structure. The results from our experiments suggest that our approach is efficient for incremental community detection in node-centric evolving networks. </p>
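The incremental step described above, placing an arriving node with all its edges at once without recomputing the network, can be sketched as follows. This is a simplified illustration of the general idea, not the thesis's actual algorithm:

```python
from collections import Counter

def add_node(community_of, new_node, neighbors):
    """Node-centric incremental update: assign an arriving node (with all
    its edges known at once) to the existing community it connects to most,
    without recomputing communities for the whole network."""
    votes = Counter(community_of[n] for n in neighbors if n in community_of)
    if votes:
        community_of[new_node] = votes.most_common(1)[0][0]
    else:
        # No known neighbors: the node starts its own community.
        community_of[new_node] = new_node
    return community_of[new_node]

# Two existing communities; node "f" arrives with three edges at once.
communities = {"a": 1, "b": 1, "c": 2, "d": 2, "e": 2}
print(add_node(communities, "f", ["a", "c", "d"]))  # joins community 2
```

A production method would also refine neighboring assignments after each arrival and track community quality over time; the point here is only that the update touches the new node's neighborhood rather than the full network.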
|