31

The emergence of Big Data and Auditors' Perception: A comparative study on India and Bangladesh

Rahnuma, Zenat January 2023 (has links)
Abstract: Title: The emergence of Big Data and Auditors' Perception (A comparative study on India and Bangladesh) Aim: The aim of the study is to explore and compare the perceptions of auditors in India and Bangladesh towards the implementation of big data analytics in audit. Method: In this study, a qualitative method was applied using semi-structured interviews. The research is exploratory, and the interview data were analysed thematically. Results and conclusions: Employing the Technology Acceptance Model (TAM) as a conceptual framework, this study conducted a comparative analysis of auditors' perceptions, emphasizing the components of perceived usefulness, perceived ease of use, intention to adopt, and their interactions. The results show that the intention to adopt big data analytics tools emerges as a shared aspiration among auditors from both India and Bangladesh.
32

Theoretical framework and case studies for improving the optimization of the branch network of a banking company in Metropolitan Lima

Briones Gallegos, Fernando David 15 June 2021 (has links)
This research is grounded in the major digital transformation process that banks are undergoing, which entails a new channel strategy and educating customers to use more digital applications. This is key if these organizations wish to survive in the medium term, since new competitors are now entering the market. The objective of the research is to identify the theoretical sources that help propose the best solution to the problem identified when diagnosing the processes at Banco ABC: improving the physical-channel optimization process using marketing analytics and data mining. As theoretical foundations, it draws on machine learning clustering algorithms related to k-means models and multivariate regression. The procedure consists of researching, across different academic sources, process diagnosis tools and improvement-proposal tools such as marketing analytics and data mining concepts, and algorithms such as regressions and clustering. Finally, three cases posing problems similar to the one addressed here are analysed across different industries in order to compare the methodologies to follow. As results, a complete list of solid theoretical-framework concepts supporting the proposed solution was consolidated; in addition, the three cases showed that there is a clear procedure for approaching a clustering problem. The main conclusion is that there is now abundant information on these topics, as well as practical cases like those examined here, to support any marketing analytics proposal for a specific problem. Readers are advised to have prior grounding in applied statistics and in simpler algorithms such as linear regression so that the theory covered here is easily understood when searching for this kind of information.
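The abstract above names k-means clustering as the core technique for segmenting a branch network. As a minimal sketch of that idea, assuming hypothetical branch-level features (the thesis does not publish its feature set or cluster count), branches could be grouped like this in Python:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical branch metrics: transactions, clients, digital usage, costs
rng = np.random.default_rng(42)
branches = rng.random((120, 4))          # stand-in for real branch data

X = StandardScaler().fit_transform(branches)   # put features on one scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
# each cluster groups branches with similar profiles, which a channel
# strategy can then treat differently (close, convert, or keep)
```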
33

Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design

Dash, Sajal 20 August 2020 (has links)
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. High volume and velocity of the data warrant a large amount of storage, memory, and compute power, while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric and domain-specific properties of high-dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental arrival of data through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching with the Johnson–Lindenstrauss lemma to reduce the complexity of WMDS from O(f(n)d) to O(f(n) log d). In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to the weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hit combinations, which require orders of magnitude larger computational power, we have scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer. In demonstrating the third guideline, we developed a tool, iBLAST, to perform an incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigate catastrophic forgetting in incremental training of deep learning models. / Doctor of Philosophy / Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. Here volume represents the data's size, variety represents various sources and formats of the data, and velocity represents the data arrival rate. High volume and velocity of the data warrant a large amount of storage, memory, and computational power.
In contrast, a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric (pair-wise distance and distribution-related) and domain-specific properties of high-dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental data arrival through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret. Claret is a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that can reduce the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well-known computational problem, the weighted set cover (WSC) problem. We have scaled out the WSC algorithm on a hundred nodes of the Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool, iBLAST, to perform an incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigate the catastrophic forgetting of deep learning models, where a model forgets to perform machine learning tasks efficiently on older data in a streaming setting.
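Both abstracts above credit the Johnson–Lindenstrauss lemma for cutting the cost of distance computations in Claret. Below is a minimal sketch of JL-style Gaussian random projection; the standard bound reduces n points to k = O(ε⁻² log n) dimensions while preserving all pairwise Euclidean distances within a (1 ± ε) factor with high probability. Claret's additional "stretching" mapping for weighted distances is not shown:

```python
import numpy as np

def jl_project(points, eps=0.25, seed=0):
    """Gaussian random projection in the spirit of the Johnson-Lindenstrauss
    lemma: pairwise distances survive within (1 +/- eps) w.h.p. once the
    target dimension k is on the order of log(n) / eps^2."""
    n, d = points.shape
    k = int(np.ceil(4 * np.log(n) / (eps ** 2 / 2 - eps ** 3 / 3)))
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((d, k)) / np.sqrt(k)   # projection matrix
    return points @ R

X = np.random.default_rng(1).standard_normal((1000, 5000))
Y = jl_project(X)                                   # shape (1000, k), k << 5000
a, b = X[0] - X[1], Y[0] - Y[1]
print(np.linalg.norm(a), np.linalg.norm(b))         # close, up to (1 +/- eps)
```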
34

Big data-driven fuzzy cognitive map for prioritising IT service procurement in the public sector

Choi, Y., Lee, Habin, Irani, Zahir 17 August 2016 (has links)
Yes / The prevalence of big data is starting to spread across the public and private sectors; however, an impediment to its widespread adoption centres on a lack of appropriate big data analytics (BDA) and of the skills needed to exploit the full potential of big data availability. In this paper, we propose a novel BDA to help fill this void, using a fuzzy cognitive map (FCM) approach that enhances decision-making and thus prioritises IT service procurement in the public sector. This is achieved through the development of decision models that capture the strengths of both data analytics and the established intuitive qualitative approach. By taking advantage of both data analytics and FCM, the proposed approach captures the strength of data-driven decision-making and intuitive model-driven decision modelling. This approach is then validated through a decision-making case regarding IT service procurement in the public sector, a fundamental step in supplying IT infrastructure to the public in a regional government in the Russian Federation. The analysis result for the given decision-making problem is then evaluated by decision makers and e-government experts to confirm the applicability of the proposed BDA, demonstrating the value of this approach in contributing towards robust public decision-making regarding IT service procurement. / EU FP7 project Policy Compass (Project No. 612133)
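As context for the FCM approach described above: a fuzzy cognitive map is a signed, weighted digraph over concepts whose activations are iterated until they settle. The sketch below shows the common inference rule A(t+1) = f(A(t)·W) with a sigmoid squashing function; the three concepts and the weight matrix are invented for illustration and are not taken from the paper:

```python
import numpy as np

def fcm_infer(weights, state, iters=50, lam=1.0):
    """Iterate A(t+1) = sigmoid(A(t) @ W) until the activations converge."""
    for _ in range(iters):
        nxt = 1.0 / (1.0 + np.exp(-lam * state @ weights))
        if np.allclose(nxt, state, atol=1e-6):
            break
        state = nxt
    return state

# hypothetical 3-concept map: cost pressure, service quality, procurement priority
W = np.array([[0.0, -0.4, -0.6],
              [0.3,  0.0,  0.7],
              [0.0,  0.0,  0.0]])
A0 = np.array([0.8, 0.5, 0.5])      # initial activations, e.g. from data
print(fcm_infer(W, A0))             # settled activations rank the priorities
```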
35

Self-building Artificial Intelligence and machine learning to empower big data analytics in smart cities

Alahakoon, D., Nawaratne, R., Xu, Y., De Silva, D., Sivarajah, Uthayasankar, Gupta, B. 19 August 2020 (has links)
Yes / The emerging information revolution makes it necessary to manage vast amounts of unstructured data rapidly. As the world is increasingly populated by IoT devices and sensors that can sense their surroundings and communicate with each other, a digital environment has been created with vast volumes of volatile and diverse data. Traditional AI and machine learning techniques designed for deterministic situations are not suitable for such environments. With the large number of parameters required by each device in this digital environment, it is desirable for the AI to be adaptive and self-building (i.e. self-structuring, self-configuring, self-learning) rather than structurally and parametrically pre-defined. This study explores the benefits of self-building AI and machine learning with unsupervised learning for empowering big data analytics in smart city environments. Using the growing self-organizing map, a new suite of self-building AI is proposed. The self-building AI overcomes the limitations of traditional AI and enables data processing in dynamic smart city environments. With cloud computing platforms, the self-building AI can integrate the data analytics applications that currently work in silos. The new paradigm of self-building AI and its value are demonstrated using IoT, video surveillance, and action recognition applications. / Supported by the Data to Decisions Cooperative Research Centre (D2D CRC) as part of their analytics and decision support program and a La Trobe University Postgraduate Research Scholarship.
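The growing self-organizing map (GSOM) named in this abstract extends the classic SOM by adding nodes wherever accumulated quantization error exceeds a growth threshold derived from a user-set spread factor. The following is a heavily simplified sketch of that growth dynamic, not the authors' implementation; the threshold formula GT = -d·ln(SF) follows the standard GSOM literature:

```python
import numpy as np

def gsom_train(data, spread_factor=0.5, lr=0.1, epochs=20, seed=0):
    """Simplified GSOM sketch: start from a 2x2 lattice and grow new nodes
    beside any node whose accumulated quantization error exceeds
    GT = -d * ln(spread_factor)."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    gt = -d * np.log(spread_factor)                  # growth threshold
    nodes = {(x, y): rng.random(d) for x in (0, 1) for y in (0, 1)}
    error = {pos: 0.0 for pos in nodes}

    for _ in range(epochs):
        for v in data:
            # best-matching unit: node weight closest to the input
            bmu = min(nodes, key=lambda p: np.linalg.norm(v - nodes[p]))
            # move the BMU and its immediate lattice neighbours toward v
            for p in nodes:
                if abs(p[0] - bmu[0]) + abs(p[1] - bmu[1]) <= 1:
                    nodes[p] += lr * (v - nodes[p])
            error[bmu] += np.linalg.norm(v - nodes[bmu])
            if error[bmu] > gt:                      # grow the map
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    q = (bmu[0] + dx, bmu[1] + dy)
                    if q not in nodes:
                        nodes[q] = nodes[bmu].copy()
                        error[q] = 0.0
                error[bmu] = 0.0
    return nodes

grown = gsom_train(np.random.default_rng(1).random((300, 3)))
print(len(grown))   # exceeds the 4 starting nodes if growth was triggered
```

The self-building aspect is exactly this: the map's structure is not fixed in advance but expands to match the data's complexity.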
36

Assessing the impact of big data analytics on decision-making processes, forecasting, and performance of a firm

Chatterjee, S., Chaudhuri, R., Gupta, S., Sivarajah, Uthayasankar, Bag, S. 03 September 2023 (has links)
Yes / BDA has various kinds of applications in firms, yet few studies deal with the impact of BDA on forecasting, decision-making, and firm performance simultaneously, so a gap exists in the research. Against this background, this study examines the impacts of BDA on the decision-making process, forecasting, and firm performance. Using the resource-based view (RBV) and the dynamic capability view (DCV), together with related research studies, a conceptual research model was proposed. This model was validated using a PLS-SEM approach with 366 respondents from Indian firms. The study highlights that smart decision-making and accurate forecasting can be achieved by using BDA, and demonstrates that the adoption of BDA considerably influences the decision-making process, the forecasting process, and overall firm performance. However, the present study is limited by its reliance on cross-sectional data, which could invite defects of causality and endogeneity bias. The research also found no impact of the various control variables on firm performance.
37

Challenges in using a Mixed-Method approach to explore the relationship between big data analytics capabilities and market performance

Olabode, Oluwaseun E., Boso, N., Hultman, M., Leonidou, C.N. 19 September 2023 (has links)
No / This case study is based on a research study that examined the relationship between big data analytics capability and market performance. The study investigated the intervening role of disruptive business models and the contingency role of competitive intensity in this relationship, using both qualitative and quantitative methods. This case study focuses on the methods utilised, including NVivo for the qualitative analysis and IBM SPSS for the quantitative analysis. You will learn the factors to consider when conducting a mixed-methods study and develop the ability to apply similar analytical techniques to your own research context.
38

Data Integration Methodologies and Services for Evaluation and Forecasting of Epidemics

Deodhar, Suruchi 31 May 2016 (has links)
Most epidemiological systems described in the literature are built for evaluation and analysis of specific diseases, such as influenza-like illness. The modeling environments that support these systems are implemented for specific diseases and epidemiological models; hence they are not reusable or extendable. This thesis focuses on the design and development of an integrated analytical environment with flexible data integration methodologies and multi-level web services for evaluation and forecasting of various epidemics in different regions of the world. The environment supports analysis of epidemics based on any combination of disease, surveillance sources, epidemiological models, geographic regions, and demographic factors. It also supports evaluation and forecasting of epidemics when various policy-level and behavioral interventions that may inhibit the spread of an epidemic are applied. First, we describe data integration methodologies and schema design for flexible experiment design, storage, and query retrieval mechanisms related to large-scale epidemic data. We describe novel techniques for data transformation, optimization, pre-computation, and automation that provide the flexibility, extendibility, and efficiency required by different categories of query processing. Second, we describe the design and engineering of adaptable middleware platforms based on service-oriented paradigms for interactive workflow, communication, and decoupled integration. This supports large-scale multi-user applications with provision for online analysis of interventions as well as analytical processing of forecast computations. Using a service-oriented architecture, we have provided a platform-as-a-service representation for evaluation and forecasting of epidemics. We demonstrate the applicability of our integrated environment through the development of two applications, DISIMS and EpiCaster. DISIMS is an interactive web-based system for evaluating the effects of dynamic intervention strategies on epidemic propagation. EpiCaster is a situation assessment and forecasting tool for projecting the state of evolving epidemics such as flu and Ebola in different regions of the world. We discuss how our platform uses existing technologies to solve a novel problem in epidemiology and provides a unique solution on which different applications can be built for analyzing epidemic containment strategies. / Ph. D.
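The platform described above evaluates interventions against configurable epidemiological models. As a toy stand-in for the kind of what-if analysis it supports (the thesis does not specify this model; a discrete-time compartmental SIR model is assumed purely for illustration), one can compare an epidemic trajectory with and without an intervention that damps transmission:

```python
def sir_forecast(s0, i0, r0, beta, gamma, days, intervention_day=None, cut=0.5):
    """Toy SIR forecast on population fractions; an intervention starting on
    intervention_day scales the transmission rate beta by `cut`."""
    s, i, r = s0, i0, r0
    out = []
    for t in range(days):
        b = beta * (cut if intervention_day is not None and t >= intervention_day else 1.0)
        new_inf = b * s * i          # new infections this step
        new_rec = gamma * i          # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        out.append((t, s, i, r))
    return out

baseline = sir_forecast(0.99, 0.01, 0.0, beta=0.3, gamma=0.1, days=120)
with_npi = sir_forecast(0.99, 0.01, 0.0, beta=0.3, gamma=0.1, days=120,
                        intervention_day=30, cut=0.4)
# comparing peak infected fractions shows the intervention's effect
print(max(i for _, _, i, _ in baseline), max(i for _, _, i, _ in with_npi))
```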
39

A study on big data analytics and innovation: From technological and business cycle perspectives

Sivarajah, Uthayasankar, Kumar, S., Kumar, V., Chatterjee, S., Li, Jing 10 March 2024 (has links)
Yes / In today's rapidly changing business landscape, organizations increasingly invest in different technologies to enhance their innovation capabilities. Among these technological investments, a notable development is the application of big data analytics (BDA), which plays a pivotal role in supporting firms' decision-making processes. Big data technologies are important factors that could help both exploratory and exploitative innovation, which in turn could affect efforts to combat climate change and ease the shift to green energy. However, studies that comprehensively examine BDA's impact on innovation capability and the technological cycle remain scarce. This study therefore investigates the impact of BDA on innovation capability, the technological cycle, and firm performance. It develops a conceptual model, validated using CB-SEM, with responses from 356 firms. Both innovation capability and firm performance are found to be significantly influenced by big data technology. The study highlights that BDA helps to address the pressing challenges of climate change mitigation and the transition to cleaner and more sustainable energy sources. However, our results are based on managerial perceptions in a single country. To enhance generalizability, future studies could employ a more objective approach and explore different contexts; multidimensional constructs, moderating factors, and rival models could also be considered.
40

Integrated Predictive Modeling and Analytics for Crisis Management

Alhamadani, Abdulaziz Abdulrhman 15 May 2024 (has links)
The surge in the application of big data and predictive analytics in fields of crisis management, such as pandemics and epidemics, highlights the vital need for advanced research in these areas, particularly in the wake of the COVID-19 pandemic. Traditional methods, which typically rely on historical data to forecast future trends, fall short in addressing the complex and ever-changing nature of challenges like pandemics and public health crises. This inadequacy is further underscored by the pandemic's significant impact on various sectors, notably healthcare, government, and the hotel industry. Current models often overlook key factors such as static spatial elements, socioeconomic conditions, and the wealth of data available from social media, which are crucial for a comprehensive understanding and effective response to these multifaceted crises. This thesis employs spatial forecasting and predictive analytics to address crisis management in several distinct but interrelated contexts: the COVID-19 pandemic, the opioid crisis, and the impact of the pandemic on the hotel industry. The first part of the study focuses on using big data analytics to explore the relationship between socioeconomic factors and the spread of COVID-19 at the zip code level, aiming to predict high-risk areas for infection. The second part delves into the opioid crisis, utilizing semi-supervised deep learning techniques to monitor and categorize drug-related discussions on Reddit. The third part concentrates on developing spatial forecasting and providing explanations of the rising epidemic of drug overdose fatalities. The fourth part of the study extends to the realm of the hotel industry, aiming to optimize customer experience by analyzing online reviews and employing a localized Large Language Model to generate future customer trends and scenarios. Across these studies, the thesis aims to provide actionable insights and comprehensive solutions for effectively managing these major crises. For the first work, most current research in pandemic modeling relies primarily on historical data to predict dynamic trends such as COVID-19. This work makes the following contributions in spatial COVID-19 pandemic forecasting: 1) the development of a unique model solely employing a wide range of socioeconomic indicators to forecast areas most susceptible to COVID-19, using detailed static spatial analysis, 2) identification of the most and least influential socioeconomic variables affecting COVID-19 transmission within communities, 3) construction of a comprehensive dataset that merges state-level COVID-19 statistics with corresponding socioeconomic attributes, organized by zip code. For the second work, we make the following contributions in detecting the drug abuse crisis via social media: 1) enhancing the Dynamic Query Expansion (DQE) algorithm to dynamically detect and extract evolving drug names in Reddit comments, utilizing a list curated from government and healthcare agencies, 2) constructing a textual Graph Convolutional Network combined with word embeddings to achieve fine-grained drug abuse classification in Reddit comments, identifying seven specific drug classes for the first time, 3) conducting extensive experiments to validate the framework, outperforming six baseline models in drug abuse classification and demonstrating effectiveness across multiple types of embeddings.
The third study focuses on developing spatial forecasting and providing explanations of the escalating epidemic of drug overdose fatalities. Current research in this field has shown a deficiency in comprehensive explanations of the crisis, spatial analyses, and predictions of high-risk zones for drug overdoses. Addressing these gaps, this study contributes in several key areas: 1) establishing a framework for spatially forecasting drug overdose fatalities predominantly affecting U.S. counties, 2) proposing solutions for dealing with scarce and heterogeneous data sets, 3) developing an algorithm that offers clear and actionable insights into the crisis, and 4) conducting extensive experiments to validate the effectiveness of our proposed framework. In the fourth study, we address the profound impact of the pandemic on the hotel industry, focusing on the optimization of customer experience. Traditional methodologies in this realm have predominantly relied on survey data and limited segments of social media analytics. These methods are informative but fall short of providing a full picture due to their inability to include diverse perspectives and broader customer feedback. Our study aims to make the following contributions: 1) the development of an integrated platform that distinguishes and extracts positive and negative Memorable Experiences (MEs) from online customer reviews within the hotel industry, 2) the incorporation of an advanced analytical module that performs temporal trend analysis of MEs, utilizing sophisticated data mining algorithms to dissect customer feedback on a monthly and yearly scale, 3) the implementation of an advanced tool that generates prospective and unexplored Memorable Experiences (MEs) by utilizing a localized Large Language Model (LLM) with keywords extracted from authentic customer experiences to aid hotel management in preparing for future customer trends and scenarios. Building on the integrated predictive modeling approaches developed in the earlier parts of this dissertation, this final section explores the significant impacts of the COVID-19 pandemic on the airline industry. The pandemic has precipitated substantial financial losses and operational disruptions, necessitating innovative crisis management strategies within this sector. This study introduces a novel analytical framework, EAGLE (Enhancing Airline Groundtruth Labels and Review rating prediction), which utilizes Large Language Models (LLMs) to improve the accuracy and objectivity of customer sentiment analysis in strategic airline route planning. EAGLE leverages LLMs for zero-shot pseudo-labeling and zero-shot text classification to enhance the processing of customer reviews without the biases of manual labeling. This approach streamlines data analysis and refines decision-making processes, which allows airlines to align route expansions with nuanced customer preferences and sentiments effectively. The comprehensive application of LLMs in this context underscores the potential of predictive analytics to transform traditional crisis management strategies by providing deeper, more actionable insights. / Doctor of Philosophy / In today's digital age, where vast amounts of data are generated every second, understanding and managing crises like pandemics or economic disruptions has become increasingly crucial.
This dissertation explores the use of advanced predictive modeling and analytics to manage various crises, significantly enhancing how predictions and responses to these challenges are developed. The first part of the research uses data analysis to identify areas at higher risk during the COVID-19 pandemic, focusing on how different socioeconomic factors can affect virus spread at a local level. This approach moves beyond traditional methods that rely on past data, providing a more dynamic way to forecast and manage public health crises. The study then examines the opioid crisis by analyzing social media platforms like Reddit. Here, a method was developed to automatically detect and categorize discussions about drug abuse. This technique aids in understanding how drug-related conversations evolve online, providing insights that could guide public health responses and policy-making. In the hospitality sector, customer reviews were analyzed to improve service quality in hotels. By using advanced data analysis tools, key trends in customer experiences were identified, which can help businesses adapt and refine their services in real-time, enhancing guest satisfaction. Finally, the study extends to the airline industry, where a model was developed that uses customer feedback to improve airline services and route planning. This part of the research shows how sophisticated analytics can help airlines better understand and meet traveler needs, especially during disruptions like the pandemic. Overall, the dissertation provides methods to better manage crises and illustrates the vast potential of predictive analytics in making informed decisions that can significantly mitigate the impacts of future crises. This research is vital for anyone—from government officials to business leaders—looking to harness the power of data for crisis management and decision-making.
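The EAGLE framework in this last entry relies on zero-shot pseudo-labeling: letting a language model assign sentiment labels to reviews without any manually annotated training data. A minimal sketch of the idea using an off-the-shelf NLI model via the Hugging Face transformers library is shown below; the model choice, label set, and review text are illustrative assumptions, not details from the dissertation:

```python
from transformers import pipeline

# zero-shot classification with a generic NLI model; EAGLE's actual models
# and prompts are not specified here, so this is only an illustration
clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

review = "Check-in was chaotic and the crew seemed overwhelmed, but the seat was fine."
labels = ["positive", "negative", "neutral"]
result = clf(review, candidate_labels=labels)
pseudo_label = result["labels"][0]   # highest-scoring label, no manual annotation
print(pseudo_label, result["scores"][0])
```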
