111 |
A Grounded Theory Model of the Relationship between Big Data and an Analytics Driven Supply Chain Competitive Strategy. Baitalmal, Mohammad Hamza, 12 1900.
The technology for storing and using big data is evolving rapidly, and those that can keep pace are likely to garner additional competitive advantages. One approach to uncovering existing practice in a manner that provides insights for building theory is the use of grounded theory. The current research employs qualitative research following a grounded theory approach to explore the gap in understanding the relationship between big data (BD) and the supply chain (SC). In this study, eight constructs emerged: organizational and environmental factors, big data and supply chain analytics, alignment, data governance, big data capabilities, cost of quality, risk analysis, and supply chain performance. This research contributes a new theoretical framework that allows researchers and practitioners to visualize the relationship between the collection and use of BD and the SC. The framework provides a model for future researchers to test the relationships posited and to continue extending understanding of how BD can benefit SC practice. While the proposed theoretical framework is expected to evolve as a result of future examination and an enhanced understanding of the relationships shown, it represents a critical first step for moving the literature and practice forward.
|
112 |
AN EMPIRICAL STUDY OF AN INNOVATIVE CLUSTERING APPROACH TOWARDS EFFICIENT BIG DATA ANALYSIS. Bowers, Jacob Robert, 01 May 2024.
The dramatic growth of big data presents formidable challenges for traditional clustering methodologies, which often prove unwieldy and computationally expensive when processing vast quantities of data. This study explores a novel clustering approach exemplified by Sow & Grow, a density-based clustering algorithm akin to DBSCAN, developed to address the issues inherent to big data by enabling end users to strategically allocate computational resources toward regions of noted interest. Achieved through a unique procedure of seeding points and subsequently fostering their growth into coherent clusters, this method significantly reduces computational waste by ignoring insignificant segments of the dataset while providing information relevant to the end user. The implementation of this algorithm developed as part of this research showcases promising results in various experimental settings, exhibiting notable speedup over conventional clustering methods. Additionally, the incorporation of dynamic load balancing further enhances the algorithm's performance, ensuring optimal resource utilization across parallel processing threads when handling superclusters or unbalanced data distributions. Through a detailed study of the theoretical underpinnings of this innovative clustering approach and the limitations of traditional clustering techniques, this research demonstrates the practical utility of the Sow & Grow algorithm in expediting the clustering process while providing results pertinent to end users.
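The abstract does not reproduce the Sow & Grow algorithm itself, so the following Python sketch is only an illustration of the general idea it describes: user-chosen seed points are "sown" and then "grown" into density-based clusters through DBSCAN-style expansion, so that points far from any region of interest are never examined. All function names, parameters, and the synthetic data are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch of seeded density-based clustering (not the published algorithm).
from collections import deque

import numpy as np
from scipy.spatial import cKDTree


def sow_and_grow_sketch(points, seeds, eps=0.5, min_pts=5):
    """Grow one cluster per seed; points never reached keep label -1 (ignored)."""
    tree = cKDTree(points)                 # spatial index for fast neighbourhood queries
    labels = np.full(len(points), -1, dtype=int)

    for cluster_id, seed in enumerate(seeds):
        if labels[seed] != -1:             # seed already absorbed by an earlier cluster
            continue
        frontier = deque([seed])
        labels[seed] = cluster_id
        while frontier:
            idx = frontier.popleft()
            neighbours = tree.query_ball_point(points[idx], r=eps)
            if len(neighbours) < min_pts:  # not dense enough: do not expand from here
                continue
            for n in neighbours:
                if labels[n] == -1:
                    labels[n] = cluster_id
                    frontier.append(n)
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 0.3, (200, 2)),
                      rng.normal(5, 0.3, (200, 2)),
                      rng.uniform(-2, 8, (400, 2))])  # two dense blobs plus background noise
    result = sow_and_grow_sketch(data, seeds=[0, 200], eps=0.4, min_pts=5)
    print(np.bincount(result + 1))         # counts: ignored points, then each grown cluster
```

Because expansion starts only from the seeds, the background points are left untouched, which is the source of the computational savings the abstract describes.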
|
113 |
The rise of Big Data in Austrian tax consultancies: How stakeholders of Austrian tax consultancies assess the potential influence of Big Data. Buchner, Marc, January 2020.
Every individual leaves behind vast amounts of data; companies collect this data and use the knowledge gained from it in a variety of ways. One area that is lucrative for the use of big data is the financial sector, with real-time stock market insights being a prominent example. However, there are still industries in which big data is not yet used, for various reasons. One of these is the tax consulting sector, which is the focus of this research. With its high entry hurdles, direct dependence on the legislator, and the associated atypical data sets, the tax consulting sector represents a special use case within the financial sector. Because big data has not yet been used in the tax consulting sector and the setting is atypical compared to other sectors, a closer analysis of potential influences on services, the working environment, and quality is of particular interest. This analysis is the core of this study and was carried out using an interpretative qualitative approach in the form of a case study. The three most important stakeholders of Austrian tax consultancies - employers, employees, and clients - were consulted through interviews on the one hand and a survey with open-ended questions on the other. The results were then compared, in the discussion, with the changes that studies in other fields have identified. The results showed that the stakeholders predominantly assume that the quality of services will improve significantly through the use of big data, especially in accounting and business management services. Stakeholders also predicted a positive development concerning the range of services offered: the range of services could increase on the one hand, and services of a business management nature could benefit enormously on the other. In the area of the working environment, employees expected increased training activity and process adaptation to be the only significant changes. In the area of risks, all three stakeholder groups agreed and mentioned data protection. Interesting differences between the three stakeholder groups were, on the one hand, that the employers gave very detailed answers, which suggests that they have already thought carefully about the topic of big data. On the other hand, in contrast to the other two groups, the employees did not primarily think of their own area (the work environment) in the analysis, but of that of the clients and thus of the provision of the service. This underlines the strong focus on client satisfaction and encourages more intensive involvement of employees in the design process. In contrast to other studies, this thesis analyses the influences on these areas not from a retrospective but from a prospective point of view. This approach allows an unbiased look at the opinions of stakeholders and thus provides the best possible information for the design of big data tools for the tax consulting sector. In addition, by comparing the findings with changes found in other studies, it is possible to estimate how the use of big data in the tax consulting sector differs from other sectors.
|
114 |
Big Data Analytics: A Literature Review Perspective. Al-Shiakhli, Sarah, January 2019.
Big data is currently a buzzword in both academia and industry, with the term being used to describe a broad domain of concepts, ranging from extracting data from outside sources, storing and managing it, to processing such data with analytical techniques and tools. This thesis work thus aims to provide a review of current big data analytics concepts in an attempt to highlight big data analytics' importance to decision making. Due to the rapid increase in interest in big data and its importance to academia, industry, and society, solutions to handling data and extracting knowledge from datasets need to be developed and provided with some urgency to allow decision makers to gain valuable insights from the varied and rapidly changing data they now have access to. Many companies are using big data analytics to analyse the massive quantities of data they have, with the results influencing their decision making. Many studies have shown the benefits of using big data in various sectors, and in this thesis work, various big data analytical techniques and tools are discussed to allow analysis of the application of big data analytics in several different domains.
|
115 |
Big Data Validation. Rizk, Raya, January 2018.
With the explosion in usage of big data, stakes are high for companies to develop workflows that translate the data into business value. Those data transformations are continuously updated and refined in order to meet the evolving business needs, and it is imperative to ensure that a new version of a workflow still produces the correct output. This study focuses on the validation of big data in a real-world scenario, and implements a validation tool that compares two databases that hold the results produced by different versions of a workflow in order to detect and prevent potential unwanted alterations, with row-based and column-based statistics being used to validate the two versions. The tool was shown to provide accurate results in test scenarios, providing leverage to companies that need to validate the outputs of the workflows. In addition, by automating this process, the risk of human error is eliminated, and it has the added benefit of improved speed compared to the more labour-intensive manual alternative. All this allows for a more agile way of performing updates on the data transformation workflows by improving on the turnaround time of the validation process.
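The abstract does not detail the validation tool's implementation, so the sketch below is only a minimal illustration, under the assumption that two pandas DataFrames stand in for the outputs of the two workflow versions, of the row-based and column-based statistics it mentions: compare row counts, then per-column summary statistics, and report any deviations.

```python
# Minimal sketch of row- and column-based output validation (not the thesis tool).
import pandas as pd


def validate_outputs(old, new, tol=1e-6):
    issues = []
    # Row-based check: both workflow versions should produce the same number of rows.
    if len(old) != len(new):
        issues.append(f"row count differs: {len(old)} vs {len(new)}")
    # Column-based checks: same columns, comparable summary statistics per column.
    for col in old.columns:
        if col not in new.columns:
            issues.append(f"column '{col}' missing in new output")
            continue
        if pd.api.types.is_numeric_dtype(old[col]):
            for stat in ("mean", "min", "max"):
                a, b = getattr(old[col], stat)(), getattr(new[col], stat)()
                if abs(a - b) > tol:
                    issues.append(f"{col}.{stat} differs: {a} vs {b}")
        elif old[col].nunique() != new[col].nunique():
            issues.append(f"{col}: distinct-value count differs")
    return issues


if __name__ == "__main__":
    v1 = pd.DataFrame({"customer": ["a", "b", "c"], "revenue": [10.0, 20.0, 30.0]})
    v2 = pd.DataFrame({"customer": ["a", "b", "c"], "revenue": [10.0, 20.0, 31.0]})
    for problem in validate_outputs(v1, v2):
        print("VALIDATION:", problem)
```

In practice the two DataFrames would be loaded from the two databases holding the old and new workflow results, and a non-empty issue list would block the new workflow version from being promoted.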
|
116 |
Big Data usage in the Maritime industry: A Qualitative Study for the use of Port State Control (PSC) inspection data by shipping professionals. Ampatzidis, Dimitrios, January 2021.
During their calls on ports, vessels may be inspected by the local Port State Control (PSC) authorities regarding their implementation of International Maritime Organization guidelines for safety and security. This qualitative study focuses on how shipping professionals understand and use Big Data in the PSC inspection databases, what characteristics they recognize these data should have, what value they attach to those data, and how they use them to support the decision-making process within their organizations. The study conducted interviews with shipping professionals, collected their perspectives, and analyzed their statements with thematic analysis to reach its outcome. Many researchers have discussed Big Data characteristics and the value an organization or a researcher could derive from Big Data and analytics; however, there is no universally accepted theory regarding Big Data characteristics and the value for database users. The research concluded that Big Data from the PSC inspection procedures provides valid and helpful information that broadens professionals' understanding of inspection control and safety needs. Through this, it is possible to upscale their internal operations and decision-making procedures, as long as these data are characterized by volume, velocity, veracity, and complexity.
|
117 |
Big Data and AI in Customer Support: A study of Big Data and AI in customer service with a focus on value-creating factors from the employee perspective. Licina, Aida, January 2020.
The advance of the Internet has resulted in an immensely interconnected world, which produces a tremendous amount of data and has come to change our daily lives and behaviours. The trend is especially visible in e-commerce, where customers have come to require more and more from product and service providers. Moreover, with rising competition, companies have to adopt new ways of doing things to keep their position on the market as well as to retain and attract customers. One important factor for this is excellent customer service. Today, companies adopt technologies like BDA and AI to enhance and provide excellent customer service. This study aims to investigate how two Swedish corporations extract value from their customer services with the help of BDA and AI. It also strives to create an understanding of the expectations, requirements, and implications of the technologies from the participants' perspectives, in this case the employees of the businesses mentioned. Moreover, many fail to see the true potential that these technologies can bring, especially in the field of customer service. This study helps to address these challenges, and by pinpointing the 'value factors' that the participating companies extract, it might encourage the implementation of digital technologies in customer service regardless of company size. The thesis was conducted with a qualitative approach, using semi-structured interviews and systematic observations with two Swedish companies acting on the Chinese market. The findings from the interviews show that the companies actively use BDA and AI in their customer service. Moreover, several value factors are pinpointed in the different stages of customer service. The most recurring themes are "proactive support", "relationship establishment", "identifying attitudes and behaviours", and "real-time support". As for the value-creating factors before and after the actual interaction, the recurring themes are "competitive advantage", "high-impact customer insights", "classification", "practicality", as well as "reflection and development". This thesis provides knowledge that can help companies further their understanding of how important customer service supported by BDA and AI is, and how it can support competitive advantage as well as customer loyalty. Since the thesis only investigated Swedish organizations on the Shanghainese market, it would be of interest to continue further research on Swedish companies, as China is seen to be at the forefront when it comes to utilizing these technologies.
|
118 |
Eine Systematisierung der Anwendungsmöglichkeiten und Potenziale von Big Data Analytics in Innovationsökosystemen. Kollwitz, Christoph, 28 October 2024.
In the digital age, the ability to innovate and the efficient adoption of digital technologies are crucial for companies to gain competitive advantages. The use of digital technologies for innovation promises not only productivity gains but also increases customer satisfaction and makes companies more agile and resilient to crises. The focus here is on the application of big data analytics, but there is currently still a considerable need for research to understand how big data analytics can be used systematically in innovation ecosystems. On the one hand, there is a lack of research on the strategic contributions of big data analytics to innovation, particularly in the context of the interaction of various actors. On the other hand, the focus of existing research often only addresses partial aspects of the application of big data analytics and neglects broader considerations from an ecosystem perspective. For practice, the primary hurdles often lie not in the technology itself but in its adaptation within the value-creating structures of companies.
This dissertation aims to close this gap and examines the systematic application of big data analytics in innovation ecosystems, using a design science research approach as the overarching research method. In the summary and in the individual papers of the cumulative dissertation project, design-oriented research is used to integrate theoretical insights directly into the practical design and development of solutions. As a result, the dissertation provides an overarching framework for the application of big data analytics in innovation ecosystems, integrating the insights gathered from the CODIFeY research project and the individual contributions. Through the developed framework and the IT artifacts of the individual contributions, the dissertation contributes to a better understanding of the strategic use of digital technologies to promote innovation and competitive advantages, which offers added value both scientifically and practically.
Inhaltsverzeichnis (table of contents):
Danksagung i
Einzelbeiträge iii
Inhaltsverzeichnis iv
Abkürzungsverzeichnis x
Abbildungsverzeichnis xii
Tabellenverzeichnis xiv
Kurzzusammenfassung 1
Abstract 2
I. Dachbeitrag 3
1 Einleitung 3
1.1 Motivation 3
1.2 Problem- und Fragestellung 5
1.3 Zielstellung 8
1.4 Aufbau des Dachbeitrags 9
2 Forschungsansatz 11
2.1 Wissenschaftstheoretische Grundpositionierung 11
2.2 Forschungsmethode 12
2.2.1 Design Science Research als übergeordnetes Forschungsparadigma 12
2.2.2 Das Projekt Community-basierte Dienstleistungs-Innovation für e-Mobility 14
2.2.3 Aufbau des kumulativen Dissertationsvorhabens 17
3 Stand der Wissenschaft und Forschung 24
3.1 Big Data Analytics 24
3.2 Datengetriebene Innovation 25
3.3 Innovationsökosysteme aus der Perspektive der Service Dominant Logic 27
4 Gestaltung eines Ordnungsrahmens für die Anwendung von Big Data Analytics in Innovationsökosystemen 30
4.1 Das Modell eines Innovationsökosystems aus Sicht der Service Dominant Logic 30
4.2 Ableitung der Dimensionen des Ordnungsrahmens für die Anwendung von Big Data Analytics in Innovationsökosystemen 35
5 Eine Systematisierung von Anwendungsfällen von Big Data Analytics in Innovationsökosystemen 39
5.1 Big Data Analytics als Mittel für Innovation 39
5.2 Big Data Analytics als Ergebnis von Innovation 44
5.3 Demonstration & Evaluation des Ordnungsrahmens 50
6 Fazit 52
II. Research Papers of the Dissertation 55
Paper A – Capturing the Bigger Picture? Applying Text Analytics to Foster Open Innovation 55
A1 Introduction 57
A2 Background and Terminology 60
A2.1 Complexities of Sustainability-Oriented Innovation 60
A2.2 Open Innovation as an Instrument for Participation 62
A2.3 Sustainable-Oriented Innovation and Open Innovation 64
A2.4 Silent Stakeholders 67
A2.5 Research Focus: Text Analytics in Direct Search Methods for Sustainability-Oriented Innovation 69
A3 Action Research Study 72
A3.1 Description of the Action Research Cycle 72
A3.2 Diagnosing the Project Background 73
A3.3 Action Planning and Taking—Application of Text Analytics 77
A4 Results 82
A4.1 Findings from the Overall Discourse Analysis 82
A4.2 Findings from Zooming into Single Topics 84
A4.3 Applicability in the Innovation Process for the Label Development 85
A5 Discussion 87
A6 Implications and Conclusions 88
Paper B – What the Hack? – Towards a Taxonomy of Hackathons 92
B1 Introduction 93
B2 A Process-centric Perspective on Open Innovation and Hackathons 95
B3 Research Approach 97
B3.1 Taxonomy Development 97
B3.2 Literature Review 98
B4 A Taxonomy of Hackathons 101
B4.1 Overview of the Taxonomy 101
B4.2 Strategic Design Decisions 102
B4.3 Operational Design Decisions 104
B5 Discussion 107
B6 Conclusion 109
Paper C – Combining Open Innovation and Knowledge Management in Communities of Practice - An Analytics Driven Approach 110
C1 Introduction 111
C2 Foundations 113
C2.1 Knowledge Management and Innovation 113
C2.2 Communities of Practice 114
C2.3 Analytics domains 114
C3 Research Methodology 117
C4 Conceptual Framework for the Integration of Open Innovation and Knowledge Management 118
C4.1 Conceptual Data Model 119
C5 Implementation & Evaluation of a Pilot Project 122
C5.1 The Research Project CODIFeY 122
C5.2 Evaluation and Preliminary Findings 124
C6 Conclusions 126
Paper D – Entwicklung eines Analytics Framework für virtuelle Communities of Practice 127
D1 Einführung 128
D2 Grundlagen 130
D2.1 Communities of Practice 130
D2.2 Analytics 131
D2.3 Design eines Analytics Frameworks für Communities of Practice 132
D3 Demonstration und Evaluation im Projekt CODIFeY 136
D4 Fazit 138
Paper E – Teaching Data Driven Innovation – Facing a Challenge for Higher Education 139
E1 Introduction 140
E2 Foundations and Theoretical Underpinning 142
E2.1 Data Driven Innovation 142
E2.2 Teaching Data-Driven Innovation 142
E2.3 Pedagogical Approach 143
E3 Research Method 145
E3.1 General Morphological Analysis 145
E3.2 Data Collection and Empirical Analysis 146
E4 Design of the Morphological Box 148
E4.1 Teaching Method 148
E4.2 Course Setting 149
E4.3 Course Content 149
E4.4 Innovation Approach 150
E4.5 Morphological Box for Teaching Data Driven Innovation 151
E5 Teaching Cases 153
E5.1 Case A: Data Driven Value Generation for the Internet of Things 153
E5.2 Case B: Data Driven Innovation Project in the Field of E-mobility 154
E6 Conclusion 156
Paper F – Cross-Disciplinary Collaboration for Designing Data-Driven Products and Services 157
F1 Introduction 158
F2 Foundations and Theoretical Background 161
F2.1 Data Literacy as a Foundation for the Design of Data-Driven Product and Services 161
F2.2 Collaborative Processes and Knowledge Transfer 162
F2.3 Knowledge Boundaries 162
F2.4 Boundary Objects 163
F2.5 Boundary Objects for Collaboration Processes and Knowledge Integration 164
F3 Research Approach 166
F4 Design of the Data Vignette 169
F4.1 Thematic View 169
F4.2 Structural View 173
F5 Evaluation of the Artifact 178
F5.1 Artificial Evaluation Using the Guidelines of Modelling 178
F5.2 Application of the DV - A First Pilot 179
F6 Conclusion 182
Paper G – Towards the Development of a Typology of Big Data Analytics in Innovation Ecosystems 184
G1 Introduction 185
G2 Foundations 187
G2.1 The Role of Technology for Innovation Ecosystems 187
G2.2 Big Data Analytics in Innovation Ecosystems 188
G3 Research Approach 189
G4 Towards a Typology of Big Data Analytics in Innovation Ecosystems 190
G5 Further research 192
Paper H – Hackathons als Gestaltungswerkzeug für plattform-basierte digitale Ökosysteme 193
H1 Einleitung 194
H2 Grundlagen 196
H2.1 Plattform-basierte digitale Ökosysteme 196
H2.2 Hackathons als Gestaltungswerkzeug 197
H3 Forschungsmethode 199
H4 Hackathons für die Gestaltung plattform-basierter Ökosysteme 202
H4.1 Markt-orientierte Plattform-Hackathons 202
H4.2 Technologie-orientierte Plattform-Hackathons 204
H5 Fazit 206
Literaturverzeichnis xv
Anhang li
Anhang 1 li
|
119 |
Mental Health Readmissions Among Veterans: An Exploratory Endeavor Using Data Mining. Price, Lauren Emilie, January 2015.
The purpose of this research is to inform the understanding of mental health readmissions by identifying associations between individual and environmental attributes and readmissions, with consideration of the impact of time-to-readmission, within the Veterans Health Administration (VHA). Mental illness affects one in five adults in the United States (US), and mental health disorders are among the highest all-cause readmission diagnoses. The VHA is one of the largest national providers of specialty mental health care. The VHA's clinical practices and patient outcomes can be traced to US policy and may be used to forecast national outcomes should these same policies be implemented nationwide. In this research, we applied three different data mining techniques to clinical data from over 200,000 patients across the VHA. Patients in this cohort were adults receiving VHA inpatient mental health care between 2008 and 2013. The data mining techniques employed were k-means cluster analysis, association-rule mining, and decision tree analysis. K-means clustering identified four statistically distinct clusters based on the combination of admission count, comorbidities, prescription (RX) count, age, casualty status, travel distance, and outpatient encounters. The association-rule mining analysis yielded multiple frequently occurring attribute values and sets consisting of service connection type, diagnoses/problems, and pharmaceuticals. Using the CHAID algorithm, the best decision tree model achieved 80% predictive accuracy when patients with no readmissions were compared to those with 30-day readmissions. The strongest predictors of readmission based on this algorithm were outpatient encounters, prescription count, VA Integrated Service Network (VISN), number of comorbidities, region, service connection, and period of service. Based on evidence from all three techniques, individuals with higher rates of system-wide utilization, more comorbidities, and longer medication lists are the most likely to have a 30-day readmission. These individuals represented 25% of the cohort, are sicker in general, and may benefit from enrollment in a comprehensive nursing case management program.
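As a hedged illustration of the kind of pipeline the abstract describes (using synthetic data, not the VHA dataset or the study's actual models), the sketch below applies k-means clustering and a decision tree to patient-level attributes similar to those listed; scikit-learn has no CHAID implementation, so a generic CART-style tree stands in for it here.

```python
# Illustrative sketch with synthetic data; all distributions and effects are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.poisson(2, n),        # admission count
    rng.poisson(3, n),        # number of comorbidities
    rng.poisson(6, n),        # prescription count
    rng.integers(20, 90, n),  # age
    rng.poisson(10, n),       # outpatient encounters
])
# Hypothetical outcome: heavier utilisation makes a 30-day readmission more likely.
p = 1 / (1 + np.exp(-(0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] - 3)))
y = rng.binomial(1, p)

# Cluster analysis step: partition patients into four groups by utilisation profile.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("patients per cluster:", np.bincount(clusters))

# Decision tree step: predict 30-day readmission from the same attributes.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("readmission prediction accuracy:",
      round(accuracy_score(y_te, tree.predict(X_te)), 3))
```

Association-rule mining (the third technique named above) would typically be run on a discretized version of the same attributes, for example with an Apriori-style implementation, and is omitted here for brevity.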
|
120 |
Performance Tuning of Big Data Platform: Cassandra Case Study. Sathvik, Katam, January 2016.
Usage of cloud-based storage systems has gained a great deal of prominence in the past few years. Every day, millions of files are uploaded to and downloaded from cloud storage. This data cannot be handled by traditional databases and is considered to be big data. New, powerful platforms have been developed to store and organize such large and unstructured data; these platforms are called big data systems. Some of the most popular big data platforms are MongoDB, Hadoop, and Cassandra. In this study, the Cassandra database management system was used because it is an open-source platform developed in Java. Cassandra has a masterless ring architecture, and data is replicated among all the nodes for fault tolerance. Unlike MySQL, Cassandra stores data on a per-column basis. Cassandra is a NoSQL database system that can handle unstructured data, and most of its parameters are scalable and easy to configure. Amazon provides a cloud computing platform, known as Amazon Web Services (AWS), that lets a user perform heavy computing tasks on remote hardware. AWS also includes database deployment and network management services with a straightforward user experience. This document gives a detailed explanation of Cassandra database deployment on the AWS platform, followed by Cassandra performance tuning. The study investigates the impact of changing Cassandra parameters on read and write performance when the database is deployed on the Elastic Compute Cloud (EC2) platform. The performance of a three-node Cassandra cluster is evaluated. With knowledge of the configuration parameters, a three-node Cassandra database is performance tuned and a draft model is proposed. A cloud environment suitable for the experiment is created on AWS, and a three-node Cassandra database management system is deployed in it. The performance of this architecture is evaluated and tested with different configuration parameters, which are selected based on how the Cassandra metrics behave as the parameters change. The selected parameters are varied and the resulting performance differences are observed and analyzed. Using this analysis, a draft model is developed after performance tuning the selected parameters; this draft model is tested with different workloads and compared with the default Cassandra model. Changes to the key cache and memTable parameters showed improvements in the performance metrics. With an increase in key cache size and save period, read performance improved; this also affected system metrics, increasing CPU load and disk throughput and decreasing operation time. The change in memTable parameters affected write performance and disk space utilization: with an increase in the memTable flush writer threshold value, disk throughput increased and operation time decreased. The draft model derived from the performance evaluation has better write and read performance.
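To make the tuning loop concrete, the following sketch (an assumption for illustration, not code from the thesis) shows a small read/write benchmark using the DataStax cassandra-driver against a three-node cluster. The server-side parameters the study varied would be edited in cassandra.yaml, for example key_cache_size_in_mb, key_cache_save_period, memtable_flush_writers, and memtable_cleanup_threshold in Cassandra 3.x (verify the names against the deployed version), and the benchmark rerun after each change. The host addresses and keyspace below are hypothetical.

```python
# Minimal read/write latency benchmark against a Cassandra cluster.
# Assumes: pip install cassandra-driver, and a reachable three-node cluster.
import time
import uuid

from cassandra.cluster import Cluster


def benchmark(hosts, n_ops=1000):
    cluster = Cluster(hosts)
    session = cluster.connect()
    session.execute("CREATE KEYSPACE IF NOT EXISTS bench WITH replication = "
                    "{'class': 'SimpleStrategy', 'replication_factor': 3}")
    session.execute("CREATE TABLE IF NOT EXISTS bench.kv (id uuid PRIMARY KEY, payload text)")

    insert = session.prepare("INSERT INTO bench.kv (id, payload) VALUES (?, ?)")
    select = session.prepare("SELECT payload FROM bench.kv WHERE id = ?")

    ids = [uuid.uuid4() for _ in range(n_ops)]

    t0 = time.perf_counter()                     # write phase
    for key in ids:
        session.execute(insert, (key, "x" * 256))
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()                     # read phase
    for key in ids:
        session.execute(select, (key,))
    read_s = time.perf_counter() - t0

    cluster.shutdown()
    return write_s / n_ops, read_s / n_ops


if __name__ == "__main__":
    # Hypothetical private IPs of the three EC2 nodes.
    w, r = benchmark(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
    print(f"avg write latency {w * 1000:.2f} ms, avg read latency {r * 1000:.2f} ms")
```

Comparing the two latencies before and after a cassandra.yaml change mirrors the study's approach of observing read and write performance under different parameter settings.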
|