31

Increasing the public's value-action on climate change: Integrating intelligence analytics into edge devices in Industry 4.0

Fauzi, Muhammad Alfalah, Saragih, Harriman Samuel, Dwiandani, Amalia 12 March 2020 (has links)
The rapid growth of Big Data and the Internet of Things (IoT) offers promising potential for methods and applications that increase public awareness of climate change. The fundamental principle behind this method is to quantify several major factors that affect climate change, the best known being greenhouse gases (GHG), with CO2, methane, and nitrous oxide as the major contributors. By utilizing Big Data and IoT, an approximate GHG release can be calculated and embedded inside common household devices such as thermostats and water, heat, electricity, and gas meters. An example is the CO2 released in supplying a cubic metre of water. Using reverse calculation, an approximate CO2 release can be retrieved sequentially as follows: (1) the water meter measures consumption; (2) calculate the horsepower and kWh of the pump used to supply one m3 of water; (3) calculate the amount of fossil fuel needed to produce one kWh; and (4) calculate the CO2 released to the atmosphere from burning that fossil fuel, per metric ton or barrel. Such analytical approaches are then embedded in household devices, which provide updated information on the GHG produced by hourly, daily, weekly, or monthly energy usage, educating the public and increasing their awareness of climate change. The approach can be extended to raise an alarm on the percentage of GHG released to the atmosphere through excessive use of electricity, water, or gas. Further actions to influence socio-economic function can later be established, such as a government rewards program for people who successfully manage their GHG emissions.
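The four-step reverse calculation maps naturally onto a few lines of code. The Python sketch below is a minimal illustration only: the coefficient names and values (pump energy per cubic metre, fuel per kWh, CO2 per kilogram of fuel) are assumed placeholders, not figures from the thesis, and a real deployment would calibrate them to the local pump, grid, and fuel mix.

```python
# A minimal sketch of the reverse-calculation pipeline described above.
# All coefficients are illustrative placeholders, not values from the thesis.

KWH_PER_M3_WATER = 0.5      # assumed pump energy to supply one cubic metre of water
KG_FUEL_PER_KWH = 0.45      # assumed coal burned per kWh at a fossil-fuel plant
KG_CO2_PER_KG_FUEL = 2.4    # assumed CO2 released per kg of coal combusted

def co2_from_water(consumption_m3: float) -> float:
    """Estimate kg of CO2 attributable to a metered volume of water."""
    pump_energy_kwh = consumption_m3 * KWH_PER_M3_WATER    # step 2
    fuel_burned_kg = pump_energy_kwh * KG_FUEL_PER_KWH     # step 3
    return fuel_burned_kg * KG_CO2_PER_KG_FUEL             # step 4

# Step 1 is the meter reading itself; e.g. a household using 12 m3 in a month:
print(f"{co2_from_water(12):.2f} kg CO2")   # -> 6.48 kg CO2
```

The same pattern extends to the gas and electricity meters mentioned in the abstract by swapping in the appropriate conversion chain for each utility.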
32

Integrated Real-Time Social Media Sentiment Analysis Service Using a Big Data Analytic Ecosystem

Aring, Danielle C. 15 May 2017 (has links)
No description available.
33

Mining of High-Utility Patterns in Big IoT-based Databases

Wu, Jimmy M. T., Srivastava, Gautam, Lin, Jerry C., Djenouri, Youcef, Wei, Min, Parizi, Reza M., Khan, Mohammad S. 01 February 2021 (has links)
Within the general area of data mining, high-utility itemset mining (HUIM) can be defined as an offshoot of frequent itemset mining (FIM). It critically weighs additional factors, such as quantities and unit profits, which gives HUIM its intrinsic edge. With the flourishing development of IoT techniques, mining patterns from uncertain data has also become attractive, and potential high-utility itemset mining (PHUIM) was introduced to reveal valuable patterns in an uncertain database. Unfortunately, even though previous methods are effective and powerful enough to mine potential high-utility itemsets quickly, these algorithms are not specifically designed for databases with an enormous number of records. Previous methods ultimately load the uncertain transaction dataset into memory, and several pre-defined operators are usually applied to modify the original dataset to reduce the time spent scanning the data. However, the same approach is impracticable for a big-data dataset. In this work, the dataset is assumed to be too big to be loaded directly into memory, duplicated, or modified; a MapReduce framework is therefore proposed to handle these situations. One of the main objectives is to reduce the frequency of dataset scans while still maximizing the parallelization of all processes. In-depth experimental results show that the proposed Hadoop algorithm mines all of the potential high-utility itemsets in a big-data dataset and performs excellently on a Hadoop computing cluster.
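To make the MapReduce framing concrete, the toy sketch below mimics the two phases on a single machine: mappers emit (item, utility) pairs from chunks of a transaction database, and a reducer sums them before a minimum-utility filter. This is an assumed illustration of the general paradigm, not the authors' Hadoop algorithm, which additionally handles existence probabilities and candidate-itemset pruning.

```python
# A single-machine toy of the map/reduce split: each chunk stands in for a
# dataset split processed on a different worker node.

from collections import defaultdict
from itertools import chain

def map_phase(transaction_chunk):
    """Emit (item, utility) pairs; utility = quantity * unit profit."""
    for transaction in transaction_chunk:
        for item, qty, unit_profit in transaction:
            yield item, qty * unit_profit

def reduce_phase(pairs):
    """Sum utilities per item across all mapper outputs."""
    totals = defaultdict(int)
    for item, utility in pairs:
        totals[item] += utility
    return totals

chunks = [
    [[("a", 2, 5), ("b", 1, 3)], [("a", 1, 5)]],   # split on worker 1
    [[("b", 4, 3), ("c", 2, 7)]],                  # split on worker 2
]
totals = reduce_phase(chain.from_iterable(map_phase(c) for c in chunks))
MIN_UTILITY = 10
print({item: u for item, u in totals.items() if u >= MIN_UTILITY})
# -> {'a': 15, 'b': 15, 'c': 14}
```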
34

Leveraging Business Intelligence and Analytics to improve decision-making and organisational success

Mushore, Rutendo January 2017 (has links)
In a complex and dynamic organisational environment, challenges and dilemmas exist on how to maximise the value of Business Intelligence and Analytics (BI&A). The expectation of BI&A is to improve decision-making for the core business processes that drive business performance. A multi-disciplinary review of theories from the domains of strategic management, technology adoption and economics suggests that tasks, technology, people and structures (TTPS) need to be aligned for BI&A to add value to decision-making. However, these imperatives interplay, making it difficult to determine how they should be configured. Whilst the links between TTPS have been recognised in Socio-Technical Systems theory, no studies have delved into the issue of their configuration. This study addresses that configuration by adopting the fit-as-Gestalts approach, which examines the relationships among these elements and determines how best to align them. A Gestalt looks at configurations that arise based on their level of coherence and helps determine the degree of alignment amongst complex relationships. The study builds on an online quantitative survey tool based on a conceptual model for aligning TTPS, which contributes to the conceptual development of TTPS alignment. Data were collected from organisations in a South African context; survey participants came from the retail, insurance, banking, telecommunications and manufacturing sectors. The results show that close alignment emerges between TTPS in Cluster 6, which comprises IT experts and financial planners. Adequate training, coupled with structures that encourage BI&A usage, results in greater organisational success, because the BI&A technology is in sync with the tasks it is used for and users have high self-efficacy. Further analysis shows that poor organisational performance can be linked to gaps in alignment and the lack of an organisational culture that motivates BI&A usage: where there is misalignment, respondents do not find value in using BI&A, which in turn affects organisational performance. Applying a configurational approach helps researchers and practitioners identify coherent patterns that work well cohesively and comprehensively. The tangible contribution of this study is the conceptual model presented for achieving alignment. In essence, organisations can use the model to align tasks, technology, people and structures, identify ideal configurations of factors that work cohesively, and consequently find ways of leveraging BI&A.
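For readers unfamiliar with the fit-as-Gestalts mechanics, the sketch below shows the basic idea under assumed data: respondents' TTPS alignment scores are clustered, and each cluster's mean profile is inspected for coherence. The synthetic Likert-style scores and the choice of k-means are purely illustrative; the study's actual instrument and clustering procedure are not reproduced here.

```python
# Cluster synthetic TTPS survey scores and print each cluster's mean profile.
# Illustrative only; not the study's instrument or six-cluster solution.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Synthetic 1-7 Likert-style scores for [tasks, technology, people, structures].
scores = rng.integers(1, 8, size=(120, 4)).astype(float)

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(scores)
for k in range(6):
    profile = scores[kmeans.labels_ == k].mean(axis=0)
    print(f"cluster {k}: TTPS means = {np.round(profile, 2)}")
```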
35

Development of Pattern of Life using Social Media Metadata

Mace, Douglas S., II 11 July 2019 (has links)
No description available.
36

Low Rank and Sparse Modeling for Data Analysis

Kang, Zhao 01 May 2017 (has links) (PDF)
Nowadays, many real-world problems must deal with collections of high-dimensional data. High-dimensional data usually have intrinsic low-dimensional representations that are suited to subsequent analysis or processing, so finding low-dimensional representations is an essential step in many machine learning and data mining tasks. Low-rank and sparse modeling are emerging mathematical tools for dealing with the uncertainties of real-world data. By leveraging the underlying structure of data, low-rank and sparse modeling approaches have achieved impressive performance in many data analysis tasks. Since the general rank minimization problem is computationally NP-hard, a convex relaxation of the original problem is often solved instead. One popular heuristic is to use the nuclear norm to approximate the rank of a matrix. Despite the success of nuclear norm minimization in capturing the low intrinsic dimensionality of data, the nuclear norm minimizes not only the rank but also the variance of the matrix, and may not be a good approximation to the rank function in practical problems. To mitigate this issue, this thesis proposes several nonconvex functions to approximate the rank function. However, nonconvex problems are often difficult to solve, so an optimization framework for nonconvex problems is further developed. The effectiveness of this approach is examined on several important applications, including matrix completion, robust principal component analysis, clustering, and recommender systems. Another issue with current clustering methods is that they work in two separate steps: similarity matrix computation and subsequent spectral clustering. The learned similarity matrix may not be optimal for the subsequent clustering, so a unified algorithmic framework is developed in this thesis. To capture the nonlinear relations among data points, the method is formulated in kernel space. Furthermore, because the obtained continuous spectral solutions can severely deviate from the true discrete cluster labels, a discrete transformation is incorporated into the model. The resulting framework simultaneously learns the similarity matrix, the kernel, and the discrete cluster labels. The performance of the proposed algorithms is established through extensive experiments, and the framework extends easily to semi-supervised classification.
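As a concrete reference point for the nuclear-norm heuristic the thesis improves upon, the sketch below implements a soft-impute-style matrix completion loop: each iteration soft-thresholds the singular values (the proximal step for the nuclear norm) and re-imposes the observed entries. The threshold tau and iteration count are arbitrary assumed values, and the nonconvex rank surrogates proposed in the thesis are not shown.

```python
# Nuclear-norm-driven matrix completion via iterative singular value
# soft-thresholding. Illustrative only; tau and n_iters are untuned.

import numpy as np

def svt_complete(M, mask, tau=2.0, n_iters=200):
    """Fill missing entries of M (mask True where observed) with a low-rank estimate."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)     # soft-threshold singular values
        X = (U * s) @ Vt                 # shrunken low-rank estimate
        X = np.where(mask, M, X)         # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 matrix
mask = rng.random((20, 20)) < 0.6                                # observe ~60%
X_hat = svt_complete(L, mask)
print(f"relative error: {np.linalg.norm(X_hat - L) / np.linalg.norm(L):.3f}")
```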
37

Data analytics for unemployment insurance claims: framework, approaches, and implementation strategies

Bergkvist, Jonathan January 2023 (has links)
Unemployment insurance serves as a vital economic stabiliser, offering financial assistance and promoting workforce reintegration. In Sweden, occupation-specific unemployment funds, known as "Arbetslöshetskassan" (A-KASSAN), manage these claims. New and complex challenges in A-KASSAN's decision-making process and unemployment insurance claims necessitate a holistic data analytics framework, innovative modelling approaches, and effective implementation strategies. This study aims to establish a comprehensive approach to data analytics for unemployment insurance claims, providing a more accurate prediction model to aid A-KASSAN's decision-making. It accomplishes this through three main objectives: developing a thorough framework that applies management data analytics to claim analysis; advancing modelling approaches to predict unemployment trends; and deliberating on effective strategies for visualising the developed solutions. Drawing on Data Science, Computer Science, and Economics and Management Science, the study crafts a four-tiered framework encompassing descriptive, diagnostic, predictive, and prescriptive analytics. It explores novel methodologies, formulates a model library, devises rules for result integration, and validates these through case studies. The model library showcases diverse models from economic modelling, statistical modelling, and Big Data analytics with machine learning and deep learning, alongside hybrid modelling strategies. The study concentrates primarily on developing visualisation tools as an implementation strategy. In summary, it provides A-KASSAN with an approach to overcoming two central issues: the lack of a comprehensive data analytics approach for unemployment insurance claims, including a framework and predictive modelling, and a dearth of visualisation solutions for the management data analytics pertinent to these claims.
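As a toy instance of the predictive tier, the sketch below fits a linear trend with annual seasonality to synthetic monthly claim counts and forecasts six months ahead. Both the data and the model form are assumptions for illustration; the thesis's model library spans economic, statistical, machine learning, deep learning, and hybrid models that this toy does not reproduce.

```python
# Fit trend + annual seasonality to synthetic monthly claims, then forecast.

import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)
claims = (200 + 1.5 * months + 30 * np.sin(2 * np.pi * months / 12)
          + rng.normal(0, 10, 48))                   # synthetic monthly claims

# Design matrix: intercept, linear trend, annual sine/cosine seasonality.
X = np.column_stack([np.ones_like(months), months,
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
beta, *_ = np.linalg.lstsq(X, claims, rcond=None)

future = np.arange(48, 54)                           # forecast next 6 months
Xf = np.column_stack([np.ones_like(future), future,
                      np.sin(2 * np.pi * future / 12),
                      np.cos(2 * np.pi * future / 12)])
print(np.round(Xf @ beta, 1))
```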
38

What are the Factors that Influence the Adoption of Data Analytics and Artificial Intelligence in Auditing?

Tsao, Grace 01 January 2021 (has links)
Although past research finds that auditors support data analytics and artificial intelligence to enhance audit quality in their daily work, in reality only a small number of audit firms, those that have innovated and invested in these two sophisticated technologies, utilize them in their auditing process. This paper analyzes three factors, each grounded in a distinct theory, that may influence the adoption of data analytics and artificial intelligence in auditing: regulation (institutional theory, explaining the catch-22 between auditors and policymakers), knowledge barriers (the technology acceptance model, exploring the concept of ease of use), and people (algorithm aversion, the phenomenon whereby auditors trust human decision makers more than technology). Among the three barriers, this paper focuses on the people factor, which firms can begin to overcome early. Because past research has shown that algorithm aversion exists in auditing, it is important to identify ways to decrease it. This study conducted a survey with four attributes: the transparency-efficiency trade-off, positive exposure, imperfect algorithms, and company training. The results show that the transparency-efficiency trade-off is a potential solution for decreasing algorithm aversion: when audit firms incorporate it into company training, auditors may place more trust in the technologies, and that trust may lead to increased use of data analytics and artificial intelligence in audits.
39

Using Data Analytics to Understand Student Support in STEM for Nontraditional Students

Aglonu, Kingdom 02 May 2023 (has links)
No description available.
40

Understanding the Implication of Blockchain Technology on the Audit Profession

Jackson, Brittany 01 January 2018 (has links)
The purpose of this research is to identify the implications of blockchain technology for the auditing profession. Through interviews with current auditing professionals, as well as academics with a background in auditing, primary data were collected to aggregate the potential effects on the auditing profession over the next five years and the next decade. The data include expectations of how the accounting major itself, the audit planning phase, risk assessment, and audit completion will change with the developing technology. The goal of this research is a better understanding of how auditing will be affected by blockchain technology, for students, current audit professionals, and those in academia. From the results, it was concluded that training of new and current employees will need to evolve with greater emphasis on IT skills and analytical reasoning, that blockchain is on the precipice of adoption within the next decade, and that there is a current gap in the regulation of blockchain technology.
