541

The Influence of Technology on Organizational Performance: The Mediating Effects of Organizational Learning

Chegus, Matthew January 2018 (has links)
Organizations depend on ever greater levels of information technology (IT), such as big data and analytics, a trend that shows no sign of abating. However, not all organizations have benefited from such IT investments, resulting in mixed perceptions of the value of IT. To gain sustained advantage from IT investments, organizations must be knowledgeable enough to properly utilize IT tools and able to apply that knowledge to create unique competencies. Organizational learning (OL) has been proposed as the mechanism for accomplishing this. Existing empirical research demonstrates that OL may indeed mediate the effect of IT on organizational outcomes. Yet these studies are not consistent in their conceptualizations of the relationships involved, nor in their definitions and measurement of OL; many use a descriptive measure of OL despite theory suggesting that a normative measure may be more appropriate. This study addresses these concerns in a Canadian setting by using structural equation modelling (SEM) to compare the effectiveness of descriptive and normative measures of OL as mediating variables in knowledge-intensive organizations. Survey results support OL as a mediator between IT and organizational performance, with normative measures of OL outperforming descriptive measures. Implications for research and practice are discussed.
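The mediation structure tested here can be written as a small SEM: IT affects OL, and OL, alongside a direct path, affects performance. Below is a minimal sketch in Python using the semopy package with lavaan-style syntax; the indicator names (it1..perf3) and the CSV file are hypothetical placeholders, not the study's actual survey items.

```python
# Sketch of an IT -> OL -> performance mediation model, using semopy.
# All variable names are illustrative placeholders, not the study's items.
import pandas as pd
import semopy

model_desc = """
# measurement model: latent constructs with observed indicators
IT   =~ it1 + it2 + it3
OL   =~ ol1 + ol2 + ol3
PERF =~ perf1 + perf2 + perf3

# structural model: OL mediates the effect of IT on performance,
# with a direct IT -> PERF path retained for partial mediation
OL   ~ IT
PERF ~ OL + IT
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # path estimates; the OL ~ IT and PERF ~ OL paths
                        # together carry the mediated (indirect) effect
```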
542

An Ensemble Method for Large Scale Machine Learning with Hadoop MapReduce

Liu, Xuan January 2014 (has links)
We propose a new ensemble algorithm: the meta-boosting algorithm. It enables the original AdaBoost algorithm to improve the decisions made by different weak learners through a meta-learning approach, achieving better accuracy by reducing both bias and variance. However, higher accuracy also brings higher computational complexity, especially on big data. We therefore propose a parallelized meta-boosting algorithm, Parallelized-Meta-Learning (PML), using the MapReduce programming paradigm on Hadoop. Experimental results on the Amazon EC2 cloud computing infrastructure show that PML reduces computational complexity enormously while retaining error rates lower than those obtained on a single computer. Since MapReduce has an inherent weakness in that it cannot directly support iterations in an algorithm, our approach is a win-win: it not only overcomes this weakness but also secures good accuracy. We also compare this approach with AdaBoost.PL, a contemporary algorithm.
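To make the meta-learning idea concrete, here is a generic sketch in Python with scikit-learn: heterogeneous weak learners produce out-of-fold predictions that an AdaBoost meta-learner then combines. This illustrates the stacking-plus-boosting pattern under stated assumptions; it is not the thesis's exact meta-boosting or PML algorithm.

```python
# Generic sketch: boost over heterogeneous weak learners' outputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

weak_learners = [DecisionTreeClassifier(max_depth=1),  # decision stump
                 GaussianNB(),
                 LogisticRegression(max_iter=1000)]

# Level 0: out-of-fold probabilities from each weak learner become
# meta-features (avoids leaking training labels into the meta-level).
meta_train = np.column_stack([
    cross_val_predict(wl, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for wl in weak_learners])

# Level 1: AdaBoost combines the weak learners' outputs.
meta = AdaBoostClassifier(n_estimators=100, random_state=0)
meta.fit(meta_train, y_train)

# At test time, refit each weak learner on the full training set.
meta_test = np.column_stack([
    wl.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    for wl in weak_learners])
print("meta-level accuracy:", meta.score(meta_test, y_test))
```

In the parallelized variant described above, the level-0 training would be distributed across MapReduce workers, since each weak learner can be trained independently on a data partition.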
543

Industry 4.0 with a Lean perspective - Investigating IIoT platforms' possible influences on data driven Lean

De Vasconcelos Batalha, Alex, Parli, Andri Linard January 2017 (has links)
Purpose: To investigate possible connections between an Industrial Internet of Things (IIoT) system, such as Predix, and data-driven Lean practices. The aim is to examine whether an IIoT platform can improve existing Lean practices and, if so, which Lean tools are most likely influenced and how.
Design/Methodology: The paper follows a phenomenon-based research approach. The methodology consists of a mix of primary and secondary data: the primary data was obtained through "almost unstructured" interviews with experts, while the secondary data comprises a comprehensive review of existing literature. Moreover, a model was developed to investigate the connections between the concepts of IIoT and Lean.
Findings: Findings derived from expert interviews at General Electric (GE) in Uppsala led to the conclusion that Predix fulfils the necessary requirements to be considered an IIoT platform. However, positive effects of the platform on the selected Lean tools could not be found; only in one instance did Predix improve the effectiveness of a Lean tool. Overall, data analytics efforts are performed and lead to better in-process control, but these efforts were independent of the Lean efforts carried out: there was no increase in data collection or analytics due to the Lean initiative, and Predix is not utilised for data collection, storage, or analysis. It appears that the pharmaceutical industry is fairly slow in adopting new technologies. Firstly, the high regulatory requirements inherent in the pharmaceutical industry limit the application of cutting-edge technology by demanding strict in-process control and process documentation. Secondly, the sheer size of GE itself slows down the adoption of new technology. Lastly, top management's pragmatic approach of aligning the digital strategies of the various industries, and the resulting allocation of resources to other, more technologically demanding businesses, hinders the use of Predix at GE in Uppsala.
544

Big Data v technológiách IBM / Big Data in technologies from IBM

Šoltýs, Matej January 2014 (has links)
This diploma thesis presents Big Data technologies and their possible use cases and applications. The theoretical part first defines the term Big Data and then focuses on Big Data technology, particularly the Hadoop framework. It describes the principles of Hadoop, such as distributed storage and data processing, and its individual components, and presents the largest vendors of Big Data technologies. This part of the thesis concludes with possible use cases of Big Data technologies and some case studies. The practical part describes the implementation of a demo example of Big Data technologies and is divided into two chapters. The first chapter deals with the conceptual design of the demo example, the products used, and the architecture of the solution. The second chapter then describes the implementation of the demo example, from preparation of the demo environment to creation of the applications. The goals of this thesis are to describe and characterize Big Data, present the largest vendors and their Big Data products, describe possible use cases of Big Data technologies, and, above all, implement a demo example in Big Data tools from IBM.
545

Nástroje pro Big Data Analytics / Big Data Analytics tools

Miloš, Marek January 2013 (has links)
The thesis covers Big Data, a term for a specific kind of data analysis. It first defines the term Big Data and the reason for its emergence: the rising need for deeper data-processing and analysis tools and methods. The thesis also covers some of the technical aspects of Big Data tools, focusing on Apache Hadoop in detail. The later chapters contain a Big Data market analysis and describe the biggest Big Data competitors and tools. The practical part of the thesis presents a way of using Apache Hadoop to perform analysis of data from Twitter, with the results visualized in Tableau.
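As an illustration of the kind of Hadoop job such a Twitter analysis might use, below is a minimal Hadoop Streaming mapper/reducer pair in Python that tallies hashtags. It is a generic sketch assuming one tweet's text per input line, not the thesis's actual job.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming mapper: emit each hashtag with count 1.
import sys

for line in sys.stdin:
    for token in line.split():
        if token.startswith("#"):
            print(f"{token.lower()}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming reducer: sum counts per hashtag.
# Input arrives sorted by key, so counts for a hashtag are contiguous.
import sys

current, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = key, 0
    count += int(value)
if current is not None:
    print(f"{current}\t{count}")
```

Such a pair would be submitted with the Hadoop Streaming jar, e.g. `hadoop jar hadoop-streaming.jar -input tweets -output hashtag_counts -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py` (paths and input/output names illustrative).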
546

Využití Big Data v bankovním prostředí / Application of Big Data in the banking environment

Dvorský, Bohuslav January 2016 (has links)
This thesis addresses the principles and technologies of Big Data and their usage in the banking environment. Its objective is to find business application scenarios for Big Data that deliver added value for the bank. The scenarios were identified by studying the literature and consulting experts, and were subsequently modeled by the author. The applicability of these scenarios in the banking business environment was then verified by a survey that asked professionals about issues relating to the identified business scenarios. The thesis first explains the basic concepts and approaches of Big Data, the status of this technology compared to traditional technologies, and issues of integration into the banking environment. After this theoretical beginning, the business scenarios are identified and modeled, followed by exploration and evaluation. Selected business scenarios are further verified to determine their suitability or unsuitability for implementation using Big Data technologies and principles. The contribution of this work is to find real uses of Big Data in banking, where most material on the topic is very general and vague. The thesis verifies two business scenarios that can bring a banking institution high added value if they are implemented on a Big Data platform.
547

Big Data a jejích potenciál pro bankovní sektor / Big Data and its perspective for the banking

Firsov, Vitaly January 2013 (has links)
In this thesis, I explore current (2012/2013) trends in Business Intelligence and focus specifically on the rapidly evolving and, in my (and not only my) opinion, very promising area of analysis and use of Big Data in large enterprises. The first, introductory part contains general information and formalities such as the aims of the work, its intended audience, and where it could be used; it also describes the inputs and outputs, structure, methods used to achieve the objectives, and potential benefits and limitations. Because I also work as a data analyst at the largest bank in the Czech Republic, Czech Savings Bank, I focused on the use of Big Data in banking, since I believe great benefits can be achieved from collecting and analyzing Big Data in this area. The thesis itself is divided into three parts (chapters 2, 3-4, 5). The second chapter covers how the area of BI developed historically, what BI is today, and what future experts, such as the world-famous and respected analyst firm Gartner, predict for it. The third chapter focuses on Big Data itself: what the term means, how Big Data differs from the traditional business information available from ERP, ECM, DMS and other enterprise systems, ways to store and process this type of data, and the existing, applicable technologies for Big Data analysis. The fourth chapter focuses on the use of Big Data in business; it reflects my personal views on the potential of Big Data, based on my experience in practice at Czech Savings Bank. The final part summarizes the thesis, assesses how I fulfilled the objectives defined at the beginning, and expresses my opinion on the prospects of the Big Data analytics trend, based on the information and knowledge analyzed while writing this thesis.
548

An open health platform for the early detection of complex diseases: the case of breast cancer

MOHAMMADHASSAN MOHAMMADI, MAX January 2015 (has links)
Complex diseases such as cancer, cardiovascular diseases and diabetes are often diagnosed too late, which significantly impairs treatment options and, in turn, drastically lowers patients' survival rates and significantly increases costs. Moreover, medical data is growing faster than healthcare systems' ability to utilize it: almost 80% of medical data is unstructured, yet clinically relevant. At the same time, technological advancements have made it possible to create digital health solutions where healthcare and ICT meet. Some individuals have already started to measure their body function parameters, track their health status, research their symptoms and even intervene in treatment options, which means a great deal of data is being produced and indicates that patient-driven healthcare models are transforming how health care functions. These models include quantified self-tracking, consumer-personalized medicine and health social networks. This research aims to present an open-innovation digital health platform that creates value by using the overlaps between healthcare, information technology and artificial intelligence. The platform could potentially be utilized for the early detection of complex diseases by leveraging Big Data technology to improve awareness by recognizing pooled symptoms of a specific disease. This would enable individuals to effortlessly and quantitatively track and become aware of changes in their health and, through dialogue with a doctor, achieve diagnosis at a significantly earlier stage. The thesis focuses on a case study of the platform for detecting breast cancer at a significantly earlier stage. A qualitative research method is implemented through reviewing the literature, determining the knowledge gap, evaluating the need, performing market research, developing a conceptual prototype and presenting the open innovation platform. Finally, the value creation, applications and challenges of such a platform are investigated, analysed and discussed based on data collected from interviews and surveys. The study combines an explanatory and an analytical research approach, as it aims not only to describe the case but also to explain the value creation for different stakeholders in the value chain. The findings indicate an urgent need for the early diagnosis of complex diseases (such as breast cancer) and for handling the direct and indirect consequences of late diagnosis. A significant outcome of this research is the conceptual prototype, which was developed from the general proposed concept through a customer development process. According to the surveys conducted, 95% of the cancer patients and 84% of the healthy individuals are willing to use the proposed platform. The results indicate that it can create significant value for patients, doctors, academic institutions, hospitals and even healthy individuals.
549

The democratisation of decision-makers in data-driven decision-making in a Big Data environment: The case of a financial services organisation in South Africa

Hassa, Ishmael January 2020 (has links)
Big Data refers to large unstructured datasets from multiple dissimilar sources. Using Big Data Analytics (BDA), insights can be gained that cannot be obtained by other means, allowing better decision-making. Big Data is disruptive, and because it is vast and complex, it is difficult to manage from technological, regulatory, and social perspectives. Big Data can provide decision-makers (knowledge workers) with bottom-up access to information for decision-making, thus providing potential benefits due to the democratisation of decision-makers in data-driven decision-making (DDD). The workforce is enabled to make better decisions, thereby improving participation and productivity. Enterprises that enable DDD are more successful than firms that are solely dependent on management's perception and intuition. Understanding the links between the key concepts (Big Data, democratisation, and DDD) and decision-makers is important, because the use of Big Data is growing, the workforce is continually evolving, and effective decision-making based on Big Data insights is critical to a firm's competitiveness. This research investigates the influence of Big Data on the democratisation of decision-makers in data-driven decision-making. A Grounded Theory Method (GTM) was adopted due to the scarcity of literature on the interrelationships between the key concepts. An empirical study was undertaken, based on a case study of a large and leading financial services organisation in South Africa. The case study participants were diverse and represented three different departments. GTM facilitates the emergence of novel theory that is grounded in empirical data. Theoretical elaboration of new concepts against existing literature permits comparison of the emergent or substantive theory for similarities, differences, and uniqueness. By applying the GTM principles of constant comparison, theoretical sampling and emergence, decision-makers (people, knowledge workers) became the focal point of study rather than organisational decision-making processes or decision support systems. The focus of the thesis is therefore on the democratisation of decision-makers in a Big Data environment. The findings suggest that the influence of Big Data on the democratisation of the decision-maker in relation to DDD depends on the completeness and quality of the Information Systems (IS) artefact. The IS artefact results from, and is comprised of, information that is extracted from Big Data through Big Data Analytics (BDA) and decision-making indicators (DMI). DMI are contributions of valuable decision-making parameters by actors that include Big Data, People, The Organisation, and Organisational Structures. DMI is an aspect of knowledge management, as it contains both the story behind the decision and the knowledge that was used to decide. The IS artefact is intended to provide a better and more complete picture of the decision-making landscape, which adds to the confidence of decision-makers and promotes participation in DDD, which, in turn, exemplifies democratisation of the decision-maker. Therefore, the main theoretical contribution is that the democratisation of the decision-maker in DDD is based on the completeness of the IS artefact, which is assessed within the democratisation inflection point (DIP). The DIP is the point at which the decision-maker evaluates the IS artefact.
When the IS artefact is complete, meaning that all the parameters pertinent to a decision on specific information are available, democratisation of the decision-maker is realised. When the IS artefact is incomplete, meaning that not all of those parameters are available, democratisation of the decision-maker breaks down. The research contributes new knowledge in the form of a substantive theory, grounded in empirical findings, to the academic field of IS. The IS artefact constitutes a contribution to practice: it highlights the importance of the interrelationships and contributions of DMI by actors within an organisation, based on information extracted through BDA, which promote decision-maker confidence and participation in DDD. DMI, within the IS artefact, are critical to decision-making, and their absence has implications for the democratisation of the decision-maker in DDD. The study has uncovered the need to further investigate the extent of each actor's contribution (agency) to DMI, the implications of generational characteristics for the adoption and use of Big Data, and an in-depth understanding of the relationships between individual differences, Big Data and decision-making. Research is also recommended to better explain democratisation as it relates to data-driven decision-making processes.
550

Communicating big data in the healthcare industry

Castaño Martínez, María, Johnson, Elizabeth January 2020 (has links)
In recent years, nearly every aspect of how we function as a society has transformed from analogue to digital. This has spurred extraordinary change and acted as a catalyst for technology innovation, as well as big data generation. Big data is characterized by its constantly growing volume, wide variety, high velocity, and powerful veracity. The COVID-19 pandemic has demonstrated the profound impact, and often dangerous consequences, of communicating health information derived from data. Healthcare companies have access to enormous data assets, yet communicating information from their data sources is complex, as they also operate in one of the most highly regulated business environments, where data privacy and legal requirements vary significantly from one country to another. The purpose of this study is to understand how global healthcare companies communicate information derived from data to their internal and external audiences. The research proposes a model for how marketing communications, public relations, and internal communications practitioners can address the challenges of utilizing data in communications in order to advance organizational priorities and achieve business goals. The conceptual framework is based on a closed-loop communication flow and includes an encoding process specialized for incorporating big data into communications. The findings reveal tactical communication strategies, as well as organizational and managerial practices, that can best position practitioners for communicating big data. The study concludes by proposing recommendations for future research, particularly from interdisciplinary scholars, to address the research gaps.
