571

Relationship Between Perceived Usefulness, Ease of Use, and Acceptance of Business Intelligence Systems

Sandema-Sombe, Christina Ndiwa 01 January 2019 (has links)
In retail, the explosion of data sources and data has provided an incentive to invest in information systems (IS), which enable leaders to understand the market and make timely decisions to improve performance. Given that users' perceptions of IS affect their use of IS, understanding the factors influencing user acceptance is critical to acquiring an effective business intelligence system (BIS) for an organization. Grounded in the technology acceptance model, the purpose of this correlational study was to examine the relationship between perceived usefulness (PU), perceived ease of use (PEOU), and user acceptance of BIS in retail organizations. A 9-question survey was used to collect data from end-users of BIS in strategic managerial positions at retail organizations in the eastern United States who reported using BIS within the past 5 years. A total of 106 complete survey responses were collected and analyzed using multiple linear regression and Pearson's product-moment correlation. The results of the multiple linear regression indicated the model's ability to predict user acceptance, F(2,103) = 21.903, p < .000, R² = 0.298. In addition, PU was a statistically significant predictor of user acceptance (t = -3.947, p = .000), which decreased with time as shown by the results from Pearson's product-moment correlation, r = -.540, n = 106, p < .01. The implications of this study for positive social change include the potential for business leaders to leverage BIS in addressing the underlying causes of social and economic challenges in the communities they serve.
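As a rough, hedged illustration of the kind of analysis reported in this abstract, the Python sketch below fits a multiple linear regression of user acceptance on PU and PEOU and computes a zero-order Pearson correlation; the file name and column names are hypothetical placeholders, not the study's actual instrument or data.

```python
# Minimal sketch of the reported analysis (multiple linear regression +
# Pearson correlation). File name and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

df = pd.read_csv("bis_survey.csv")           # hypothetical survey export
X = sm.add_constant(df[["PU", "PEOU"]])      # predictors: usefulness, ease of use
y = df["acceptance"]                         # outcome: user acceptance score

model = sm.OLS(y, X).fit()
print(model.summary())                       # reports F-statistic, R^2, coefficients

r, p = pearsonr(df["PU"], df["acceptance"])  # zero-order correlation for PU
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```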
572

Utilizing big data from products in use to create value : A case study of Bosch Thermoteknik AB

Kokoneshi, Renisa January 2019 (has links)
New knowledge and insights are generated when big data is collected and processed. Traditionally, businesses generated data internally from operations and transactions across the value chain, such as sales, customer service visits, orders, and interactions with suppliers, as well as data gathered externally from research, surveys, or other sources. Today, with improved software and connectivity, products are becoming smarter, which makes it easier to collect and generate large amounts of real-time data. The fast-growing volume and variety of big data pose many challenges for companies in how to store, manage, utilize, and create value from these data. This thesis presents a case study of a large heat pump manufacturer, Bosch Thermoteknik AB, situated in Tranås, Sweden. Bosch Thermoteknik AB has started to collect real-time data from several heat pumps connected to the internet. These data are currently used during the development phase of the products and occasionally to support installers during maintenance services. The company understands the potential benefits of big data and would like to deepen its knowledge of how to utilize big data to create value. One of the company's goals is to identify how big data can reduce maintenance costs and improve maintenance approaches. The purpose of this study is to provide knowledge on how to obtain insights and create value by collecting and analyzing big data from smart connected products, with a focus on improving maintenance approaches and reducing maintenance costs. This study shows that if companies build the capability to perform data analytics, insights obtained from big data analytics can be used to create business value in many areas, such as customer experience, product and service innovation, organizational performance improvement, and improved business image and reputation. Building this capability requires deploying many resources beyond big data itself, including a technology infrastructure for integrating and storing vast amounts of data, a data-driven culture, and talented employees with business, technical, and analytics knowledge and skills. Insights obtained through big data analytics can provide a better understanding of problems, identify their root causes, and enable faster reactions to them; in addition, future failures can be predicted and prevented. This can result in overall improvement of maintenance approaches, products, and services.
573

USING SEARCH QUERY DATA TO PREDICT THE GENERAL ELECTION: CAN GOOGLE TRENDS HELP PREDICT THE SWEDISH GENERAL ELECTION?

Sjövill, Rasmus January 2020 (has links)
The 2018 Swedish general election saw the largest collective polling error so far in the twenty-first century. As in most other advanced democracies, Swedish pollsters have faced extensive challenges in the form of declining response rates. To deal with this problem, a new method based on search query data is proposed. This thesis predicts the Swedish general election using Google Trends data by introducing three models based on the assumption that, during the pre-election period, actual voters for a party search for that party on Google. The results indicate that a model that exploits information about searches close to the election is, in general, a good predictor. However, I argue that this has more to do with the underlying weighting the model is based on and little to do with the Google Trends data itself. More analysis is therefore needed before any firm conclusion about the use of search query data in election prediction can be drawn.
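As a hedged sketch of the simplest kind of model described here, under the thesis's assumption that a party's vote share tracks its share of pre-election search volume, the snippet below turns per-party search interest into predicted vote shares. The party labels and numbers are invented placeholders, not actual Google Trends values (which could, for example, be pulled with a client such as pytrends).

```python
# Share-based prediction sketch: vote share proportional to search-volume share.
# All values below are made-up placeholders, not real Google Trends data.
import pandas as pd

# hypothetical average search interest per party over the pre-election window
search_interest = pd.Series({
    "S": 54, "M": 48, "SD": 62, "C": 21, "V": 25,
    "KD": 18, "L": 14, "MP": 12,
})

predicted_share = search_interest / search_interest.sum() * 100
print(predicted_share.round(1).sort_values(ascending=False))
```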
574

Scalable Dynamic Big Data Geovisualization With Spatial Data Structure

Siqi Gu (8779961) 29 April 2020 (has links)
Compared to traditional cartography, big data geographic information processing is not a simple task at all; it requires specialized methods and techniques. When existing geovisualization systems face millions of data points, zooming and dynamically adding data usually cannot both be supported at the same time. This research classifies existing geovisualization methods, analyzes their functions and bottlenecks, examines their applicability in a big data environment, and proposes a method that combines a spatial data structure with on-demand iterative calculation. It also shows that this method can effectively balance zooming performance against the cost of adding new data, and that it is significantly faster than an existing library in the time consumed by adding data and zooming.
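The abstract does not spell out the proposed method, so the following is only a hedged illustration of the general idea of pairing a spatial data structure with on-demand calculation: points are bucketed into quadtree-style tiles, per-zoom aggregates are computed lazily and cached, and adding new data simply invalidates the cache. Class and variable names are hypothetical.

```python
# Hedged sketch: quadtree-style tiling with lazily computed per-zoom counts,
# so both zooming and appending new points stay cheap. Illustrative only.
from collections import defaultdict

class TileIndex:
    def __init__(self):
        self.points = []                 # raw (lon, lat) points, appendable
        self.cache = {}                  # zoom level -> {tile: count}

    def add_points(self, pts):
        self.points.extend(pts)
        self.cache.clear()               # invalidate aggregates on new data

    def _tile(self, lon, lat, zoom):
        n = 2 ** zoom                    # n x n tiles over the lon/lat plane
        x = int((lon + 180.0) / 360.0 * n)
        y = int((90.0 - lat) / 180.0 * n)
        return min(x, n - 1), min(y, n - 1)

    def counts(self, zoom):
        if zoom not in self.cache:       # compute this zoom level on demand
            agg = defaultdict(int)
            for lon, lat in self.points:
                agg[self._tile(lon, lat, zoom)] += 1
            self.cache[zoom] = dict(agg)
        return self.cache[zoom]

idx = TileIndex()
idx.add_points([(18.06, 59.33), (13.00, 55.60), (11.97, 57.71)])
print(idx.counts(zoom=3))
```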
575

Factors that affect digital transformation in the telecommunication industry

Pretorius, Daniel Arnoldus January 2019 (has links)
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2019 / The internet, mobile communication, social media, and other digital services have become deeply integrated into our daily lives and businesses alike. Companies undergoing digital transformation find it exceptionally challenging. While several studies state the importance of digital transformation and how it influences current and future businesses, there is little academic literature available on the factors that affect the success or failure of digital transformation in companies. It is unclear what factors affect digital transformation in an established telecommunications company. The aim of this study was therefore to explore the factors that affect digital transformation in a telecommunications company in South Africa, and to what extent. One primary research question was posed, namely: “What factors affect digital transformation in a telecommunications company in South Africa?” To answer the question, a study was conducted at a telecommunications company in South Africa. The researcher adopted a subjective ontological and interpretivist epistemological stance, as the data collected from the participants’ perspective were interpreted to make claims about the truth, and because there are many ways of looking at the phenomena. An inductive approach was selected to enable the researcher to gain in-depth insight into participants’ views on and perspectives of the factors that influence digital transformation in the specific company. An explorative research strategy was used to gain an understanding of the underlying views, reasons, opinions, and thoughts of the 15 participants by means of semi-structured interviews. Participants were made aware that they did not have to answer any question they were uncomfortable with and that they could withdraw their answers at any time. The data collected were transcribed, summarised, and categorised to provide a clear understanding of the data. For this study, 36 findings were identified. From this research it was concluded, inter alia, that the successful digital transformation of companies depends on how management drives digital transformation, and that the benefits of new digital technologies should be carefully considered when planning to implement digital transformation.
576

Budoucnost historického a kulturního dědictví: aplikace big data v digitálních humanitních vědách / The Future of Cultural and Historical Heritage: Application of Big Data in the Digital Humanities

Hryshyna, Kateryna January 2020 (has links)
This diploma thesis deals with the topic of preserving cultural heritage in the context of digital humanities. The topic of this work will be the presentation of modern tools and technologies aimed at preserving cultural memory in Europe. The aim of this work will be to map the benefits and potential risks of digital infrastructures CESSDA, ARIADNE PLUS and DARIAH-EU for the preservation of cultural heritage. The work will be divided into three parts - the theoretical part will briefly introduce the topic of digital humanities and their studies, the concept of big data and the transformation of digital archives in the context of digital humanities. The analytical part will focus on mapping the current situation of cultural heritage preservation in Europe. The last practical part will offer an analysis of the advantages and disadvantages of digital infrastructures ARIADNE PLUS, CESSDA and DARIAH-EU both for the preservation of cultural heritage and for research in the social sciences and humanities.
577

JOB SCHEDULING FOR STREAMING APPLICATIONS IN HETEROGENEOUS DISTRIBUTED PROCESSING SYSTEMS

Al-Sinayyid, Ali 01 December 2020 (has links)
The colossal amounts of data generated daily are increasing exponentially at a never-before-seen pace. A variety of applications—including stock trading, banking systems, health care, the Internet of Things (IoT), and social media networks, among others—have created an unprecedented volume of real-time stream data, estimated to reach billions of terabytes in the near future. As a result, we are currently living in the so-called Big Data era and witnessing a transition to the so-called IoT era. Enterprises and organizations are tackling the challenge of interpreting enormous amounts of raw data streams to achieve an improved understanding of the data, and thus make efficient and well-informed (i.e., data-driven) decisions. Researchers have designed distributed data stream processing systems that can process data in near real time. To extract valuable information from raw data streams, analysts need to create and implement data stream processing applications structured as directed acyclic graphs (DAGs). The infrastructure of distributed data stream processing systems, as well as the various requirements of stream applications, impose new challenges. Cluster heterogeneity in a distributed environment results in differing cluster resources for task execution and data transmission, which makes optimal scheduling an NP-complete problem. Scheduling streaming applications plays a key role in optimizing system performance, particularly in maximizing the frame rate, i.e., how many instances of data sets can be processed per unit of time. The scheduling algorithm must consider data locality, resource heterogeneity, and communication and computation latencies; the latency associated with the computation or transmission bottleneck needs to be minimized when the application is mapped onto heterogeneous, distributed cluster resources. Recent work on task scheduling for distributed data stream processing systems has a number of limitations. Most current schedulers are not designed to manage heterogeneous clusters. They also lack the ability to consider both task and machine characteristics in scheduling decisions. Furthermore, current default schedulers do not allow the user to control data locality aspects of application deployment. In this thesis, we investigate the problem of scheduling streaming applications on a heterogeneous cluster environment and develop the maximum throughput scheduler algorithm (MT-Scheduler) for streaming applications. The proposed algorithm uses a dynamic programming technique to efficiently map the application topology onto a heterogeneous distributed system based on computing and data transfer requirements, while also taking into account the capacity of the underlying cluster resources. The proposed approach maximizes system throughput by identifying and minimizing the time incurred at the computing/transfer bottleneck. The MT-Scheduler supports scheduling applications that are structured as DAGs, such as Amazon Timestream, Google MillWheel, and Twitter Heron. We conducted experiments using three Storm microbenchmark topologies in both simulated and real Apache Storm environments. To evaluate performance, we compared the proposed MT-Scheduler with the simulated round-robin and the default Storm scheduler algorithms. The results indicated that the MT-Scheduler outperforms the default round-robin approach in terms of both average system latency and throughput.
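MT-Scheduler's internals are not given in the abstract; as a simplified, hedged illustration of bottleneck-minimizing placement with dynamic programming, the sketch below assigns the operators of a linear streaming pipeline to heterogeneous machines so that the slowest compute or transfer step is as fast as possible. All costs, speeds, and the chain-shaped topology are invented simplifications of the general DAG case the thesis addresses.

```python
# Simplified, hedged sketch: dynamic programming over a chain of operators,
# minimizing the maximum per-step (compute or transfer) time. All numbers
# are invented; the real MT-Scheduler handles general DAGs and capacities.
work = [4.0, 9.0, 3.0]                 # compute demand of each operator
out_data = [6.0, 2.0]                  # data shipped between consecutive operators
speed = {"fast": 3.0, "slow": 1.0}     # heterogeneous machine compute speeds
bandwidth = 2.0                        # link bandwidth between different machines

machines = list(speed)

# best[m] = minimal achievable bottleneck when the current operator runs on m
best = {m: work[0] / speed[m] for m in machines}
for i in range(1, len(work)):
    new = {}
    for m in machines:
        run = work[i] / speed[m]
        candidates = []
        for prev in machines:
            ship = 0.0 if prev == m else out_data[i - 1] / bandwidth
            candidates.append(max(best[prev], ship, run))
        new[m] = min(candidates)       # best placement of the previous operator
    best = new

print("minimal bottleneck time:", min(best.values()))
```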
578

Business Intelligence in the Hotel Industry

Shahini, Rei January 2020 (has links)
Applications of artificial intelligence (AI) in hospitality and accommodation have taken over an enormous share of service provision, helping automate most of the processes involved, such as booking and purchasing, improving the guest experience, and tracking guest preferences and interests. The aim of the study is to understand the roles, benefits, and issues involved in improving business intelligence (BI) in hospitality. This research sets out to discover the applications of BI in hotel booking and accommodation, focusing on the hotel guest experience, business operations, and guest satisfaction. The research also shows how acquiring proper BI is supported by implementing a dynamic technology framework integrated with AI and a big data resource. In such a system, the intensive collection of customer data combined with an improved technology standard is achievable using AI. The research employs a qualitative approach to data discovery and collection. A thematic analysis helps generate findings that indicate an improvement in the entire hospitality service delivery system as well as in customer satisfaction. The thesis examines various subsets of BI in tourism and analyzes the competition arising from the application of these technologies. The study also shows the importance of harnessing data to gather insights about guest interests and preferences through well-developed BI; such insights enable the customization of hotel services and products for individual guests. There is a considerable improvement in guest services and guest information collection, which is achieved through the creation of guest profiles. The research discusses the incorporation of AI and big data, among other sub-components, in creating diversified BI, and seeks to identify the need for current BI applications in the hotel industry.
579

An experimental study of memory management in Rust programming for big data processing

Okazaki, Shinsaku 10 December 2020 (has links)
Planning optimized memory management is critical for Big Data analysis tools to achieve faster runtimes and efficient use of computation resources. Modern Big Data analysis tools use application languages that abstract away memory management so that developers do not have to pay extreme attention to memory management strategies. Many existing cloud-based data processing systems such as Hadoop, Spark, or Flink run on the Java Virtual Machine (JVM) and take full advantage of its features, including automated memory management with Garbage Collection (GC), which may lead to significant overhead. Dataflow-based systems like Spark allow programmers to define complex objects in a host language like Java to manipulate and transfer tremendous amounts of data. System languages like C++ or Rust seem to be a better choice for developing Big Data processing systems because they do not rely on the JVM; with a system language, a developer has full control over memory management. We found the Rust programming language to be a good candidate due to its ability to express memory-safe and fearlessly concurrent code through its concepts of ownership and borrowing. Rust offers many possible strategies for optimizing memory management for Big Data processing, including the choice among different variable types, the use of reference counting (Rc), and multithreading with atomic reference counting (Arc). In this thesis, we conducted an experimental study to assess how much these different memory management strategies differ in overall runtime performance. Our experiments focus on complex object manipulation and common Big Data processing patterns under various memory management strategies. Our experimental results indicate a significant difference among these memory strategies with respect to data processing performance.
580

ZipThru: A software architecture that exploits Zipfian skew in datasets for accelerating Big Data analysis

Ejebagom J Ojogbo (9529172) 16 December 2020 (has links)
In the past decade, Big Data analysis has become a central part of many industries, including entertainment, social networking, and online commerce. MapReduce, pioneered by Google, is a popular programming model for Big Data analysis, famous for its easy programmability due to automatic data partitioning, fault tolerance, and high performance. The majority of MapReduce workloads are summarizations, where the final output is a per-key "reduced" version of the input, highlighting a shared property of each key in the input dataset.

While MapReduce was originally proposed for massive data analyses on networked clusters, the model is also applicable to datasets small enough to be analyzed on a single server. In this single-server context the intermediate tuple state generated by mappers is saved to memory, and only after all Map tasks have finished are reducers allowed to process it. This Map-then-Reduce sequential mode of execution leads to distant reuse of the intermediate state, resulting in poor locality for memory accesses. In addition, the size of the intermediate state is often too large to fit in the on-chip caches, leading to numerous cache misses as the state grows during execution, further degrading performance. It is well known, however, that many large datasets used in these workloads possess a Zipfian/power-law skew, where a minority of keys (e.g., 10%) appear in a majority of tuples/records (e.g., 70%).

I propose ZipThru, a novel MapReduce software architecture that exploits this skew to keep the tuples for the popular keys on-chip, processing them on the fly and thus improving reuse of their intermediate state and curtailing off-chip misses. ZipThru achieves this using four key mechanisms: 1) concurrent execution of both Map and Reduce phases; 2) holding only the small, reduced state of the minority of popular keys on-chip during execution; 3) using a lookup table built from pre-processing a subset of the input to distinguish between popular and unpopular keys; and 4) load balancing the concurrently executing Map and Reduce phases to efficiently share on-chip resources.

Evaluations using Phoenix, a shared-memory MapReduce implementation, on 16- and 32-core servers reveal that ZipThru incurs 72% fewer cache misses on average over traditional MapReduce while achieving average speedups of 2.75x and 1.73x on the two machines, respectively.
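As a hedged, plain-Python sketch of the skew-exploiting idea (not the Phoenix-based implementation evaluated in the thesis), the snippet below samples the input to identify popular keys, reduces their values on the fly in a small "hot" table, and defers the unpopular keys to a conventional pass; the threshold and data are illustrative only.

```python
# Hedged sketch of skew-aware summarization: pre-process a sample to find
# popular keys, keep their reduced state small and update it on the fly,
# and buffer the unpopular keys for a later conventional reduction.
from collections import Counter, defaultdict
import random

records = [("the", 1)] * 700 + [(w, 1) for w in "a b c d e f g h i j".split() for _ in range(30)]
random.shuffle(records)

# 1) Pre-process a sample of the input to find popular (frequent) keys.
sample = records[: len(records) // 10]
freq = Counter(k for k, _ in sample)
hot_keys = {k for k, c in freq.items() if c / len(sample) > 0.05}

# 2) Stream the input: reduce hot keys immediately, buffer the rest.
hot_state = defaultdict(int)       # small, cache-friendly reduced state
cold_buffer = defaultdict(list)
for key, value in records:
    if key in hot_keys:
        hot_state[key] += value    # on-the-fly reduction (sum)
    else:
        cold_buffer[key].append(value)

# 3) Reduce the unpopular keys in a conventional pass.
result = dict(hot_state)
result.update({k: sum(vs) for k, vs in cold_buffer.items()})
print(sorted(result.items(), key=lambda kv: -kv[1])[:3])
```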
