281.
Das Industrial Internet – Engineering Prozesse und IT-Lösungen / Eigner, Martin, January 2016 (has links)
Engineering is currently undergoing a massive transformation: smart systems and technologies, cybertronic products, big data and cloud computing in the context of the Internet of Things and Services, and Industrie 4.0. The American notion of the "Industrial Internet" describes this (r)evolution far better than the narrower, strongly German-coined term Industrie 4.0. The Industrial Internet takes the entire product lifecycle into account and addresses consumer goods and capital goods as well as services. This article examines this forward-looking trend and offers well-founded insights into the networked engineering world of tomorrow, its design methods and processes, and the corresponding IT solutions.
282.
Approximate Data Analytics Systems / Le Quoc, Do, 22 January 2018 (has links)
Today, most modern online services make use of big data analytics systems to extract useful information from raw digital data. The data normally arrives as a continuous stream at high speed and in huge volumes. The cost of handling this massive data can be significant. Providing interactive latency in processing the data is often impractical because the data is growing exponentially, even faster than Moore's law predicts. To overcome this problem, approximate computing has recently emerged as a promising solution. Approximate computing is based on the observation that many modern applications are amenable to an approximate, rather than an exact, output. Unlike traditional computing, approximate computing tolerates lower accuracy to achieve lower latency by computing over a partial subset instead of the entire input data. Unfortunately, advancements in approximate computing are primarily geared towards batch analytics and cannot provide low-latency guarantees in the context of stream processing, where new data continuously arrives as an unbounded stream. In this thesis, we design and implement approximate computing techniques for processing and interacting with high-speed and large-scale stream data to achieve low latency and efficient utilization of resources.
To achieve these goals, we have designed and built the following approximate data analytics systems:
• StreamApprox—a data stream analytics system for approximate computing. This system supports approximate computing for low-latency stream analytics in a transparent way and can adapt to rapid fluctuations in input data streams. In this system, we designed an online adaptive stratified reservoir sampling algorithm to produce approximate output with bounded error.
• IncApprox—a data analytics system for incremental approximate computing. This system adopts approximate and incremental computing in stream processing to achieve high-throughput and low-latency with efficient resource utilization. In this system, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error.
• PrivApprox—a data stream analytics system for privacy-preserving and approximate computing. This system supports high utility and low-latency data analytics and preserves user’s privacy at the same time. The system is based on the combination of privacy-preserving data analytics and approximate computing.
• ApproxJoin—a system for approximate distributed joins. This system improves the performance of joins, which are critical but expensive operations in big data systems. In this system, we employed a sketching technique (Bloom filters) to avoid shuffling non-joinable data items across the network, and proposed a novel sampling mechanism that executes during the join to obtain an unbiased representative sample of the join output.
Our evaluation, based on micro-benchmarks and real-world case studies, shows that these systems can achieve significant performance speedups compared to state-of-the-art systems while tolerating negligible accuracy loss in the analytics output. In addition, our systems allow users to systematically trade off accuracy against throughput/latency, and they require no or only minor modifications to existing applications.
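The stratified reservoir sampling idea at the core of StreamApprox and IncApprox can be sketched in a few lines. The sketch below is illustrative only (class and method names are ours, not taken from the systems): each substream keeps its own fixed-size reservoir, and aggregate estimates reweight each stratum by how many items it has contributed so far.

```python
import random
from collections import defaultdict

class StratifiedReservoir:
    """Keep one fixed-size reservoir per stratum (illustrative sketch)."""

    def __init__(self, reservoir_size, seed=None):
        self.k = reservoir_size
        self.rng = random.Random(seed)
        self.reservoirs = defaultdict(list)  # stratum -> sampled items
        self.counts = defaultdict(int)       # stratum -> items seen so far

    def add(self, stratum, value):
        self.counts[stratum] += 1
        n = self.counts[stratum]
        res = self.reservoirs[stratum]
        if len(res) < self.k:
            res.append(value)            # reservoir not yet full
        else:
            j = self.rng.randrange(n)    # classic reservoir replacement step
            if j < self.k:
                res[j] = value

    def estimate_mean(self):
        """Population-mean estimate, weighting strata by arrival counts."""
        total = sum(self.counts.values())
        est = 0.0
        for s, res in self.reservoirs.items():
            stratum_mean = sum(res) / len(res)
            est += (self.counts[s] / total) * stratum_mean
        return est

# Usage: a frequent and a rare substream with different value ranges.
sampler = StratifiedReservoir(reservoir_size=100, seed=42)
for _ in range(10_000):
    sampler.add("hot", 1.0)     # frequent substream
for _ in range(1_000):
    sampler.add("cold", 11.0)   # rare substream
print(round(sampler.estimate_mean(), 3))  # -> 1.909 (= 21000/11000, exact here)
```

Because each stratum is sampled separately, the rare "cold" substream is guaranteed representation in the sample, which is what keeps the error bounded under skewed input rates.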
283.
Big data och CRM:s roll för konkurrenskraft i B2B-företag : En fallstudie kring hur konkurrenskraften i B2B-företag påverkas av big data och CRM / Georgeson, Sofia; Holmes, Nicole, January 2020 (has links)
As a result of the increased use of the Internet, organizations face a number of challenges and opportunities that can strengthen or weaken their competitive advantage. Among these challenges are a growing need to gather and manage large amounts of data (big data) and a need to develop processes and strategies for customer relationship management (CRM). This study aimed to investigate how and why big data and business processes for CRM can affect the competitive advantage of B2B companies, by offering a theoretical perspective and an analysis of a relevant case in the field. A qualitative case study design was applied, in which theory and empirical data were gathered through qualitative interviews with respondents who work with CRM and data at the selected B2B company. The results showed that the studied B2B company in the technology industry can strengthen its competitive advantage by including CRM in its business processes, applying strategies, value creation, multichannels, IT and data, and performance assessment in its customer-related activities and processes. The results also showed that the respondents regarded the gathering of data in customer-related processes and activities, and the analysis of this data, as a useful asset, since it makes it possible to influence the interaction with customers positively. An unforeseen discovery was that the respondents were reluctant to say whether they work with big data, even though most of them, according to the study's definition, do work with big data in their customer-related processes and activities. This discovery can serve as a basis for future research in the field of big data and B2B companies.
284.
‘Data over intuition’ – How big data analytics revolutionises the strategic decision-making processes in enterprises / Höcker, Filip; Brand, Finn, January 2020 (has links)
Background: Digital technologies are increasingly transforming traditional businesses, and their pervasive impact is leading to a radical restructuring of entire industries. While the significance of big data analytics for generating competitive advantage is recognized, there is still no consensus on how big data analytics influences strategic decision-making in organisations. As big data and big data analytics become increasingly common, understanding the factors that influence decision-making quality becomes of paramount importance for businesses. Purpose: This thesis investigates how big data and big data analytics affect operational strategic decision-making processes in enterprises, through the theoretical lens of the strategy-as-practice framework. Method: The study follows an abductive research approach by testing a theory (i.e., strategy-as-practice) through a qualitative research design. A single case study of IKEA was conducted to generate the primary data for this thesis. Sampling was carried out internally at IKEA by first identifying the heads of the different departments within data analytics and from there applying the snowball sampling technique, to increase the number of interviewees and to ensure the collection of enough data for coding. Findings: The findings show that big data analytics has a decisive influence on practitioners. At IKEA, data analysts have become an integral part of the operational strategic decision-making processes, and discussions are driven by data and rigor rather than by gut feeling and intuition. In terms of practices, it became apparent that big data analytics has led to a more performance-oriented use of strategic tools and enables IKEA to make strategic decisions in real time, which not only increases agility but also mitigates the risk of wrong decisions.
285.
Digitalisering och kunskapshantering : En studie av redovisningsbranschen / Digitalisation and Knowledge management : A study in the accounting industry / Hultgren, Frida; Nyberg, Josefine, January 2020 (has links)
Digital tools such as big data and artificial intelligence will play an increasingly important role in the accounting industry, and to make use of these tools in the organization, employees need to develop their knowledge so that they can interpret and analyze information. In this study we investigate how organizations work with knowledge management in light of how digitalisation affects the industry. The knowledge that exists in an organization can be divided into two dimensions, tacit and explicit knowledge. These dimensions are used to identify how organizations capture the knowledge they hold and how it can be converted and made available to employees. The study was conducted as a case study based on qualitative data collected through semi-structured interviews covering knowledge management and digitalisation; the analysis of the interviews was inspired by grounded theory, categorizing patterns and deviations in the respondents' answers. The study shows that organizations in the accounting industry handle knowledge in different ways. The common denominator was that all organizations stored explicit knowledge in digital systems. How the organizations worked with tacit knowledge varied: some worked more actively, with dedicated occasions for exchanging experiences or continuous knowledge exchange through coaching, while in others it was left to the employees themselves to identify and share their knowledge. One interpretation of the results is that knowledge management is an area organizations need to work on further: their current focus lies on efficient solutions for work tasks, and with such solutions the industry is moving toward a more knowledge-driven state in which employees' knowledge must be harnessed more than ever.
286.
Processing data sources with big data frameworks / Behandla datakällor med big data-ramverk / Nyström, Simon; Lönnegren, Joakim, January 2016 (has links)
Big data is a concept that is expanding rapidly. As more and more data is generated and gathered, there is an increasing need for efficient solutions that can process all this data in attempts to gain value from it. The purpose of this thesis is to find an efficient way to quickly process a large number of relatively small files. More specifically, the purpose is to test two frameworks that can be used for processing big data: Apache NiFi and Apache Storm, evaluated against each other. A method is devised to, firstly, construct a data flow and, secondly, test the performance and scalability of the frameworks running this data flow. The results reveal that Apache Storm is faster than Apache NiFi at the kind of task that was tested. As the number of nodes included in the tests went up, performance did not always follow. This indicates that adding more nodes to a big data processing pipeline does not always result in a better-performing setup and that, sometimes, other measures must be taken to improve performance.
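The finding that adding nodes did not always improve performance is what Amdahl's law predicts whenever part of a pipeline (coordination, shuffling, flow control) cannot be parallelised. The sketch below uses a hypothetical 10 % serial fraction; the figure is illustrative, not a measurement from the thesis:

```python
def amdahl_speedup(n_workers: int, serial_fraction: float) -> float:
    """Upper bound on speedup with n_workers when serial_fraction of the
    work (e.g. coordination or data shuffling) cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# With 10% serial work, speedup flattens out well below linear scaling:
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} workers -> speedup {amdahl_speedup(n, 0.10):.2f}")
# prints 1.00, 1.82, 3.08, 4.71, 6.40; the limit as n grows is 1/0.10 = 10
```

Doubling the node count from 8 to 16 here buys only about a 36 % gain, which mirrors the thesis's observation that other measures than adding nodes are sometimes needed.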
287.
Big Data Analytics : A potential way to Competitive Performance / Big Data Analytics : Ett potentiell väg för konkurrenskraftig prestanda / Olsén, Cleas; Lindskog, Gustav, January 2021 (has links)
Big data analytics (BDA) has become an increasingly popular topic over the years among academics and practitioners alike. Big data, an important part of BDA, was originally defined by three Vs: volume, velocity and variety. In later years more Vs have surfaced to better accommodate current needs. The analytics part of BDA consists of different methods of analysing gathered data. Analysing data can provide insights to organisations, which in turn can give them competitive advantage and enhance their businesses. By looking into the resources needed to build big data analytic capabilities (BDAC), this thesis set out to find how Swedish organisations enable and use BDA in their businesses. The thesis also investigated whether BDA can lead to performance enhancement and competitive advantage for organisations. A theoretical framework based on previous studies was adapted and used to help answer the thesis's purpose. A qualitative study using semi-structured interviews was deemed the most suitable approach. Previous studies in this field pointed to the fact that organisations may not be aware of how or why to enable or use BDA. According to the current literature, different resources need to work in conjunction with each other in order to create BDAC and enable BDA to be fully utilised. Several studies discuss challenges such as organisational culture, human skills, and the need for top management support for BDA initiatives to succeed. The findings from the interviews in this study indicated that, in a Swedish context, resources such as data, technical skills and a data-driven culture, among others, are being used to enable BDA. Furthermore, the results showed that business process improvement is a first step in organisations' efforts to benefit from BDA, because the profit and effect of such an investment are easier and safer to calculate. Depending on how far organisations have come in their transformation process, they may also innovate and/or create products or services from insights made possible by BDA.
288.
A qualitative analysis to investigate the enablers of big data analytics that impacts sustainable supply chain / Investigation qualitative des facteurs qui permettent l’analyse de Big Data et la chaîne d’approvisionnement / Rodriguez Pellière, Lineth Arelys, 27 August 2019 (has links)
Scholars and practitioners have already shown that big data and predictive analytics, also known in the literature as BDPA, can play a pivotal role in transforming and improving the functions of sustainable supply chain analytics (SSCA). However, there is limited knowledge about how BDPA can best be leveraged to grow social, environmental and financial performance simultaneously. Indeed, the literature around SSCA suggests that companies still struggle to implement SSCA practices. Researchers agree that there is still a need to understand the techniques, tools and enablers of the basics of SSCA for its adoption; this is even more important for integrating BDPA as a strategic asset across business activities. Hence, this study investigates, for instance, what the enablers of SSCA are, and which tools and techniques of BDPA enable the triple bottom line (3BL) of sustainability performance (environmental, social and financial) through SCA. The thesis adopted a moderate constructionist stance, since it seeks to understand how the enablers of big data impact sustainable supply chain analytics applications and performance. The thesis also adopted a questionnaire and a case study as research strategies in order to capture the different perceptions of people and companies regarding the application of big data to sustainable supply chain analytics. The thesis revealed better insight into the factors that can affect the adoption of big data in sustainable supply chain analytics. This research was able to identify, based on variable loadings, the factors that impact the adoption of BDPA for SSCA, the tools and techniques that enable decision-making through SSCA, and the coefficient of each factor in facilitating or delaying sustainability adoption, which had not been investigated before. The findings of the thesis suggest that the tools companies currently use cannot analyse large amounts of data on their own; companies need more appropriate tools to perform this work.
289.
Big Data Competence Center ScaDS Dresden/Leipzig / Rahm, Erhard; Nagel, Wolfgang E.; Peukert, Eric; Jäkel, René; Gärtner, Fabian; Stadler, Peter F.; Wiegreffe, Daniel; Zeckzer, Dirk; Lehner, Wolfgang, 16 June 2023 (has links)
Since its launch in October 2014, the Competence Center for Scalable Data Services and Solutions (ScaDS) Dresden/Leipzig has carried out collaborative research on big data methods and their use in challenging data science applications of different domains, leading to both general and application-specific solutions and services. In this article, we give an overview of the structure of the competence center, its primary goals and its research directions. Furthermore, we outline selected research results on scalable data platforms, distributed graph analytics, data augmentation and integration, and visual analytics. We also briefly report on planned activities for the second funding period (2018-2021) of the center.
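Distributed graph analytics of the kind mentioned above is typically expressed as iterative, vertex-centric computation, as in Pregel-style systems. The single-machine sketch below is purely illustrative of that model (the graph and names are ours): it computes connected components by letting each vertex repeatedly adopt the smallest label in its neighbourhood, one "superstep" per loop iteration.

```python
def connected_components(edges, vertices):
    """Vertex-centric sketch: every vertex repeatedly adopts the smallest
    label among itself and its neighbours until no label changes."""
    label = {v: v for v in vertices}
    neighbours = {v: set() for v in vertices}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    changed = True
    while changed:                       # each pass = one superstep
        changed = False
        for v in vertices:
            best = min([label[v]] + [label[n] for n in neighbours[v]])
            if best < label[v]:
                label[v] = best
                changed = True
    return label

# Two components: {0, 1, 2} connected via edges, {3, 4} connected.
labels = connected_components([(0, 1), (1, 2), (3, 4)], range(5))
print(labels)  # -> {0: 0, 1: 0, 2: 0, 3: 3, 4: 3}
```

In a distributed engine the inner loop becomes message passing between partitions holding the vertices, but the convergence logic is the same.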
290.
Data Science Professionals’ Innovation with Big Data Analytics: The Essential Role of Commitment and Organizational Context / Abouei, Mahdi, January 2023 (has links)
Implementing Big Data Analytics (BDA) is widely recognized as a major source of competitiveness and innovation. While previous research suggests several process models and identifies critical factors for the successful implementation of BDA, there is a lack of understanding of how this organizational process is realized by its primary recipients, that is, Data Science Professionals (DSPs), whose innovation with BDA technologies stands at the core of big data-driven innovation. In particular, far less understood are the motivational and contextual factors that drive DSPs’ innovation with BDA technologies. This study proposes that commitment is the force that can attach DSPs to the BDA implementation process and motivate them to engage in innovative behaviors. It also introduces two organizational mechanisms, namely BDA communication reciprocity and BDA leader theme-specific reputation, that can be employed to develop this constructive force in DSPs. On this basis, a theoretical model was developed, drawing on the assertions of Commitment in the Workplace Theory and the literature on creativity in organizations, to assess the impact of DSPs’ commitment to BDA implementation and of organizational context on their innovation with BDA technologies.
This study theorizes that communication reciprocity and leader theme-specific reputation influence the three components of DSPs’ commitment to BDA implementation (affective, continuance, and normative) through their perceived participation in organizational decision-making and positive uncertainty, which, in turn, drive DSPs’ innovation with BDA technologies. To further enrich the theorization, the moderating role of DSPs’ competency in the effect of the commitment components on innovation with BDA technologies is investigated. Predictions were tested using an experimental vignette methodology with 240 subjects, in which the two organizational mechanisms were manipulated. Results indicate that the organizational mechanisms provoke the mediating psychological perceptions, though with varying strengths. In addition, results suggest that DSPs’ innovation with BDA technologies is primarily rooted in their affective and continuance commitment, and that DSPs’ competency interacts with their affective commitment to affect their innovation with BDA technologies. This research enhances the theoretical understanding of the role of commitment and organizational context in fostering DSPs’ innovation with BDA technologies. The results of this study also offer suggestions for information systems implementation practitioners on the effectiveness of organizational mechanisms that facilitate big data-driven innovation. / Thesis / Doctor of Philosophy (PhD)
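A moderation hypothesis of the kind tested in this thesis (competency strengthening the link between affective commitment and innovation) is commonly examined by comparing the predictor-outcome slope across moderator levels, or equivalently via an interaction term in a regression. The sketch below simulates such a pattern with entirely invented numbers; none of the data or coefficients come from the study:

```python
import random
import statistics

random.seed(1)
n = 240  # same sample size as the study, but the data here is simulated

def slope(xs, ys):
    """OLS slope of y on x (simple bivariate regression)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

rows = []
for _ in range(n):
    affective = random.gauss(0, 1)
    competency = random.choice([0, 1])           # low vs high competency
    innovation = (0.4 * affective
                  + 0.5 * affective * competency  # the moderation effect
                  + random.gauss(0, 0.3))
    rows.append((affective, competency, innovation))

# The commitment-innovation slope should differ by competency level:
for c in (0, 1):
    xs = [a for a, comp, _ in rows if comp == c]
    ys = [i for _, comp, i in rows if comp == c]
    print(f"competency={c}: slope = {slope(xs, ys):.2f}")
```

With a true interaction of 0.5 built into the simulation, the high-competency group's slope comes out roughly 0.5 higher than the low-competency group's, which is the signature of moderation.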