151

Is Big data too Big for Swedish SMEs? : A quantitative study examining how the employees of small and medium-sized enterprises perceive Big data analytics

Danielsson, Lukas, Toss, Ronja January 2018 (has links)
Background: Marketing is evolving because of Big data, which presents many possibilities as well as challenges, especially for small and medium-sized enterprises (SMEs), which face barriers that prevent them from taking advantage of it. To analyze Big data, companies use Big data analytics, which help them process large amounts of data. However, previous research says little about how SMEs can implement Big data analytics or how SMEs perceive it.
Purpose: The purpose of this study is to investigate how the employees of Swedish SMEs perceive Big data analytics.
Research Questions: How do employees of Swedish SMEs perceive Big data analytics in their current work environment? How do barriers impact perceptions of Big data analytics?
Methodology: The research uses a quantitative cross-sectional design as the source of empirical data. A survey was administered to employees of Swedish companies with fewer than 250 employees, which were regarded as SMEs. Of the 139 respondents, 93 answers were usable in the analysis. The data were analyzed using established theories such as the Technology Acceptance Model (TAM).
Findings: The employees held positive perceptions of Big data analytics. Two of the barriers analyzed (security and resources) impacted those perceptions, whereas privacy of personal data did not.
Theoretical Implications: This study adds to the sparse Big data research and improves the understanding of Big data and Big data analytics, helping to close an existing gap in the literature and providing a more comprehensive view of Big data.
Limitations: The main limitation of the study is that previous literature has been vague and ambiguous and therefore may not be fully applicable.
Practical Implications: The study helps SMEs understand how to implement Big data analytics and which barriers to prioritize.
Originality: To the best of the authors' knowledge, academic literature on Big data, Big data analytics, and Swedish SMEs is scarce; this study may therefore be among the pioneering studies on these topics.
152

A unified framework for real-time streaming and processing of IoT data

Zamam, Mohamad January 2017 (has links)
The emergence of the Internet of Things (IoT) is introducing a new era to the realm of computing and technology. The proliferation of sensors and actuators embedded in things enables these devices to understand their environments and respond accordingly more than ever before. It also opens the space to nearly unlimited possibilities for building applications that turn this sensing capability into big benefits across various domains, from smart cities to smart transportation and smart environments, and the list is quite long. However, this revolutionary spread of IoT devices and technologies raises big challenges. One major challenge is the diversity of IoT vendors, which results in data heterogeneity. This research tackles this problem by developing a data management tool that normalizes IoT data. Another important challenge is the lack of practical IoT technology with low cost and low maintenance, which has often limited large-scale deployments and mainstream adoption. This work utilizes open-source data analytics in one unified IoT framework to address this challenge. What is more, billions of connected things are generating unprecedented amounts of data from which intelligence must be derived in real time; this unified framework processes real-time streams of data from the IoT. A questionnaire involving participants with background knowledge in IoT was conducted to collect feedback about the proposed framework. The aspects of the framework were presented to the participants in the form of a demonstration video describing the work that had been done. Finally, using the participants' feedback, the contribution of the developed framework to the IoT was discussed and presented.
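As a rough illustration of the normalization idea described above (the thesis does not publish its implementation), the sketch below maps two hypothetical vendor payload formats onto one common reading schema; all vendor names, field layouts, and units are assumptions.

    # Illustrative sketch only: normalizing heterogeneous IoT sensor payloads
    # into one common record so downstream stream processing can treat all
    # vendors uniformly. Vendor formats here are invented for the example.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Any, Callable

    @dataclass
    class Reading:
        device_id: str
        metric: str        # e.g. "temperature"
        value: float       # normalized to SI units (here: degrees Celsius)
        timestamp: datetime

    def from_vendor_a(msg: dict[str, Any]) -> Reading:
        # Vendor A already reports Celsius with a Unix timestamp.
        return Reading(msg["id"], "temperature", float(msg["temp_c"]),
                       datetime.fromtimestamp(msg["ts"], tz=timezone.utc))

    def from_vendor_b(msg: dict[str, Any]) -> Reading:
        # Vendor B reports Fahrenheit with an ISO-8601 time string.
        fahrenheit = float(msg["temperature_f"])
        return Reading(msg["deviceId"], "temperature", (fahrenheit - 32) * 5 / 9,
                       datetime.fromisoformat(msg["time"]))

    ADAPTERS: dict[str, Callable[[dict[str, Any]], Reading]] = {
        "vendor_a": from_vendor_a,
        "vendor_b": from_vendor_b,
    }

    def normalize(source: str, msg: dict[str, Any]) -> Reading:
        return ADAPTERS[source](msg)

    # Example: both payloads end up as the same Reading shape.
    r = normalize("vendor_b", {"deviceId": "th-42", "temperature_f": "71.6",
                               "time": "2017-05-01T12:00:00"})
    print(r.value)  # 22.0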
153

Big data analytics em cloud gaming: um estudo sobre o reconhecimento de padrões de jogadores

Barros, Victor Perazzolo 06 February 2017 (has links)
Advances in Cloud Computing and communication technologies have made the concept of Cloud Gaming a reality. Through PCs, consoles, smartphones, tablets, smart TVs, and other devices, people can access and play games via data streaming, regardless of the computing power of those devices. The Internet is the fundamental means of communication between the device and the game, which is hosted and processed in an environment known as the Cloud. In the Cloud Gaming model, games are available on demand and offered at large scale to users. Players' actions and commands are sent to servers that process the information and send the result (reaction) back to the players. The volume of data processed and stored in these Cloud environments exceeds the limits of analysis and manipulation of conventional tools, but the data contain information about players' profiles, their singularities, actions, behaviors, and patterns that can be valuable when analyzed. To properly comprehend this raw data and make it interpretable, appropriate techniques and platforms are needed to manipulate data at this scale; these platforms belong to an ecosystem built around the concepts of Big Data. The model known as Big Data Analytics is an effective way not only to work with these data but to understand their meaning, providing inputs for assertive analysis and predictive actions. This study seeks to understand how these technologies work and proposes a method capable of analyzing and identifying patterns in players' behavior and characteristics in a virtual environment. By knowing the patterns of different players, it is possible to group and compare information in order to optimize the user experience, increase revenue for developers, and raise the level of control over the environment to the point where players' actions can be predicted. The results presented are based on different analysis models using Hadoop combined with data-visualization tools and information from open data sources, applied to a dataset from the game World of Warcraft. Fraud detection, users' play patterns, inputs for churn prevention, and relations with game-attractiveness elements are examples of the modeling used. The research mapped and identified players' behavior patterns and produced predictions of their frequency of play and their tendency to leave or stay in the game.
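To make the pattern-recognition idea concrete, here is a deliberately small sketch, far simpler than the Hadoop-based modeling the dissertation describes, of one possible churn-tendency signal: flagging players whose current idle time is large relative to their own historical session rhythm. The threshold factor and the data layout are assumptions, not the author's method.

    # Toy churn-tendency signal: a player is "at risk" when the gap since the
    # last session greatly exceeds their own typical gap between sessions.
    from datetime import date

    def churn_risk(session_dates: list[date], today: date, factor: float = 3.0) -> bool:
        days = sorted(session_dates)
        if len(days) < 3:
            return False  # not enough history to judge
        gaps = [(b - a).days for a, b in zip(days, days[1:])]
        median_gap = sorted(gaps)[len(gaps) // 2]
        idle = (today - days[-1]).days
        return idle > factor * max(median_gap, 1)

    # Example: a player who used to log in every 2 days has been idle for 10.
    print(churn_risk([date(2017, 1, d) for d in (1, 3, 5, 7)], date(2017, 1, 17)))  # True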
154

Big Data Analytics for Fault Detection and its Application in Maintenance / Big Data Analytics för Feldetektering och Applicering inom Underhåll

Zhang, Liangwei January 2016 (has links)
Big Data analytics has attracted intense interest recently for its attempt to extract information, knowledge and wisdom from Big Data. In industry, with the development of sensor technology and Information & Communication Technologies (ICT), reams of high-dimensional, streaming, and nonlinear data are being collected and curated to support decision-making. The detection of faults in these data is an important application in eMaintenance solutions, as it can facilitate maintenance decision-making. Early discovery of system faults may ensure the reliability and safety of industrial systems and reduce the risk of unplanned breakdowns. Complexities in the data, including high dimensionality, fast-flowing data streams, and high nonlinearity, impose stringent challenges on fault detection applications. From the data modelling perspective, high dimensionality may cause the notorious “curse of dimensionality” and lead to deterioration in the accuracy of fault detection algorithms. Fast-flowing data streams require algorithms to give real-time or near real-time responses upon the arrival of new samples. High nonlinearity requires fault detection approaches to have sufficiently expressive power and to avoid overfitting or underfitting problems. Most existing fault detection approaches work in relatively low-dimensional spaces. Theoretical studies on high-dimensional fault detection mainly focus on detecting anomalies on subspace projections. However, these models are either arbitrary in selecting subspaces or computationally intensive. To meet the requirements of fast-flowing data streams, several strategies have been proposed to adapt existing models to an online mode to make them applicable in stream data mining. But few studies have simultaneously tackled the challenges associated with high dimensionality and data streams. Existing nonlinear fault detection approaches cannot provide satisfactory performance in terms of smoothness, effectiveness, robustness and interpretability. New approaches are needed to address this issue. This research develops an Angle-based Subspace Anomaly Detection (ABSAD) approach to fault detection in high-dimensional data. The efficacy of the approach is demonstrated in analytical studies and numerical illustrations. Based on the sliding window strategy, the approach is extended to an online mode to detect faults in high-dimensional data streams. Experiments on synthetic datasets show the online extension can adapt to the time-varying behaviour of the monitored system and, hence, is applicable to dynamic fault detection. To deal with highly nonlinear data, the research proposes an Adaptive Kernel Density-based (Adaptive-KD) anomaly detection approach. Numerical illustrations show the approach’s superiority in terms of smoothness, effectiveness and robustness.
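As a rough illustration of the density-based idea (not the thesis's Adaptive-KD method, which adapts the bandwidth per sample), the following sketch scores anomalies by negative log-density under a fixed-bandwidth Gaussian kernel density estimate; the bandwidth h is an assumed constant.

    # Fixed-bandwidth Gaussian KDE anomaly scoring: points in low-density
    # regions of the training data receive high scores.
    import numpy as np

    def kde_anomaly_scores(train: np.ndarray, test: np.ndarray, h: float = 1.0) -> np.ndarray:
        n, d = train.shape
        # Pairwise squared distances between each test and training sample.
        diff = test[:, None, :] - train[None, :, :]
        sq = np.sum(diff ** 2, axis=-1)
        dens = np.exp(-sq / (2 * h ** 2)).sum(axis=1) / (n * (np.sqrt(2 * np.pi) * h) ** d)
        return -np.log(dens + 1e-300)   # guard against log(0)

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(500, 2))
    queries = np.array([[0.1, -0.2], [6.0, 6.0]])   # second point is a clear outlier
    print(kde_anomaly_scores(normal, queries))       # outlier gets a much higher score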
155

零售藥妝顧客購買頻率與利潤之分析 / Analysis of Customer Purchase Frequency and Profitability in Retail Pharmacy Stores

黃兆椿 Unknown Date (has links)
This research proposes modeling techniques to better predict customer behavior in the retail pharmacy industry, extending the widely adopted RFM model from marketing, which is known for its ability to predict and segment customers, with two new metrics: clumpiness (C) and breadth (B). Using more than two million transaction records from over 100 retail pharmacy stores in Taiwan, we fit a set of regression models assessing the explanatory power of different combinations of the five RFMCB metrics for customer purchase frequency and profitability. The analysis shows that the RFM model is significantly inferior to models that include C and/or B, suggesting that C and B are indeed promising metrics. We then use the RFM and RFMCB metric sets as explanatory variables in machine learning methods to predict customer behavior. For purchase frequency, adding C and B significantly improves predictive power; for customer profit, the new metrics improve prediction accuracy on average, but in some cases they produce larger errors, increasing the maximum overall prediction error.
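For illustration, a minimal sketch of computing the five metrics from a transaction log follows. The abstract does not define C and B precisely, so this sketch assumes an entropy-based clumpiness formulation over normalized inter-purchase gaps and takes breadth as the number of distinct product categories purchased; both are assumptions, not the thesis's definitions.

    # Per-customer R, F, M plus assumed C (clumpiness) and B (breadth) metrics.
    import math
    from datetime import date

    def rfmcb(txns: list[tuple[date, float, str]], today: date) -> dict[str, float]:
        """txns: one (date, profit, product category) tuple per purchase."""
        days = sorted(t for t, _, _ in txns)
        recency = (today - days[-1]).days
        frequency = len(txns)
        monetary = sum(p for _, p, _ in txns)
        # Clumpiness over normalized inter-purchase gaps x_i:
        # C = 1 + sum(x ln x) / ln(n + 1); near 1 = bursty, near 0 = evenly spread.
        span = max((days[-1] - days[0]).days, 1)
        gaps = [(b - a).days / span for a, b in zip(days, days[1:])]
        n = len(gaps)
        c = 1 + sum(x * math.log(x) for x in gaps if x > 0) / math.log(n + 1) if gaps else 0.0
        breadth = len({cat for _, _, cat in txns})
        return {"R": recency, "F": frequency, "M": monetary, "C": c, "B": breadth}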
156

Robust Modeling and Predictions of Greenhouse Gas Fluxes from Forest and Wetland Ecosystems

Ishtiaq, Khandker S 12 November 2015 (has links)
The land-atmospheric exchanges of carbon dioxide (CO2) and methane (CH4) are major drivers of global warming and climatic changes. The greenhouse gas (GHG) fluxes indicate the dynamics and potential storage of carbon in terrestrial and wetland ecosystems. Appropriate modeling and prediction tools can provide a quantitative understanding and valuable insights into the ecosystem carbon dynamics, while aiding the development of engineering and management strategies to limit emissions of GHGs and enhance carbon sequestration. This dissertation focuses on the development of data-analytics tools and engineering models by employing a range of empirical and semi-mechanistic approaches to robustly predict ecosystem GHG fluxes at variable scales. Scaling-based empirical models were developed by using an extended stochastic harmonic analysis algorithm to achieve spatiotemporally robust predictions of the diurnal cycles of net ecosystem exchange (NEE). A single set of model parameters representing different days/sites successfully estimated the diurnal NEE cycles for various ecosystems. A systematic data-analytics framework was then developed to determine the mechanistic, relative linkages of various climatic and environmental drivers with the GHG fluxes. The analytics, involving big data for diverse ecosystems of the AmeriFLUX network, revealed robust latent patterns: a strong control of radiation-energy variables, a moderate control of temperature-hydrology variables, and a relatively weak control of aerodynamic variables on the terrestrial CO2 fluxes. The data-analytics framework was then employed to determine the relative controls of different climatic, biogeochemical and ecological drivers on CO2 and CH4 fluxes from coastal wetlands. The knowledge was leveraged to develop nonlinear, predictive models of GHG fluxes using a small set of environmental variables. The models were presented in an Excel spreadsheet as an ecological engineering tool to estimate and predict the net ecosystem carbon balance of the wetland ecosystems. The research also investigated the emergent biogeochemical-ecological similitude and scaling laws of wetland GHG fluxes by employing dimensional analysis from fluid mechanics. Two environmental regimes were found to govern the wetland GHG fluxes. The discovered similitude and scaling laws can guide the development of data-based mechanistic models to robustly predict wetland GHG fluxes under a changing climate and environment.
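As a hedged illustration of harmonic modeling of diurnal cycles (the dissertation's extended stochastic harmonic analysis algorithm is more elaborate than this), the sketch below fits a truncated Fourier series to hourly NEE observations by ordinary least squares; the number of harmonics and the synthetic data are assumed for the example.

    # Least-squares fit of NEE(t) ~ a0 + sum_k [a_k cos(2*pi*k*t/24) + b_k sin(2*pi*k*t/24)].
    import numpy as np

    def design(t_hours: np.ndarray, n_harm: int) -> np.ndarray:
        cols = [np.ones_like(t_hours)]
        for k in range(1, n_harm + 1):
            w = 2 * np.pi * k * t_hours / 24.0
            cols += [np.cos(w), np.sin(w)]
        return np.column_stack(cols)

    def fit_diurnal_harmonics(t_hours: np.ndarray, nee: np.ndarray, n_harm: int = 2) -> np.ndarray:
        coef, *_ = np.linalg.lstsq(design(t_hours, n_harm), nee, rcond=None)
        return coef

    # Example: recover a diurnal cycle from noisy synthetic half-hourly data.
    t = np.arange(0, 24, 0.5)
    nee_obs = -8 * np.cos(2 * np.pi * t / 24) + np.random.default_rng(1).normal(0, 0.5, t.size)
    coef = fit_diurnal_harmonics(t, nee_obs)
    nee_hat = design(t, 2) @ coef   # smooth fitted diurnal NEE cycle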
157

Využitie dátovej analýzy v internom a externom audite / The Use of Data Analytics in Internal and External Audit

Tecáková, Andrea January 2015 (has links)
Data analytics is one of the fastest-developing applications of IT in organizations worldwide. This Master's thesis examines data analytics in the context of internal and external audit. The principal aim is to identify opportunities for applying data analytics in both audit disciplines; a secondary goal is to design a data-analytical procedure, apply it to actual business data, and thus demonstrate the benefits of employing data analytics. The thesis builds on a summary of the relevant theoretical sources, followed by a survey conducted by the author that maps the current state of data analytics usage in both internal and external audit in the Czech Republic. The added value of this thesis lies in the identification of audit areas where data analytics is beneficial, in the design of an analytical procedure and its application, and in the survey itself, which reveals the current state of practice and the views of the interviewed auditors, pointing to both benefits and problems of applying data analytics in the audit profession.
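As one concrete example of the kind of analytic such a procedure might include (the thesis's actual procedure is not reproduced here), the sketch below screens journal-entry amounts against Benford's law, a common audit data-analytics test; the function and its interface are illustrative assumptions.

    # First-digit Benford screen: large positive deviations flag digits that
    # are suspiciously over-represented in a ledger, prompting manual review.
    import math
    from collections import Counter

    def benford_deviation(amounts: list[float]) -> dict[int, float]:
        # Scientific notation makes the first significant digit easy to read off.
        digits = [int(f"{abs(a):.10e}"[0]) for a in amounts]
        counts = Counter(d for d in digits if d > 0)   # skip zero amounts
        n = sum(counts.values())
        if n == 0:
            return {}
        return {d: counts[d] / n - math.log10(1 + 1 / d) for d in range(1, 10)}

    # Example usage over a batch of journal-entry amounts:
    print(benford_deviation([1234.5, 120.0, 1.9, 1050.0, 87.2, 13.4]))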
158

A Generalized Adaptive Mathematical Morphological Filter for LIDAR Data

Cui, Zheng 14 November 2013 (has links)
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying LIDAR data at the point level, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with gentle slopes and a mixture of vegetation and buildings. However, because it uses a constant threshold slope, the PM filter often incorrectly removes ground measurements in topographically high areas along with large non-ground objects, producing "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for areas with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. A comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. Application of the GAPM filter to seven ISPRS benchmark datasets shows that it reduces filtering error by 20% on average compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements in complex terrains within large LIDAR datasets. The GAPM filter is highly automatic and requires little human input, so it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
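For intuition, here is a heavily simplified sketch of the PM filtering idea on a gridded minimum-elevation surface: morphological openings with progressively larger windows, where the allowed elevation difference grows with window size and an assumed terrain slope. The GAPM filter's cluster and trend analyses are not reproduced; the window sizes, slope, and thresholds here are illustrative, not the dissertation's parameters.

    # Simplified progressive morphological ground filtering on a gridded
    # minimum-elevation surface z (2-D array, one cell-minimum per grid cell).
    import numpy as np
    from scipy.ndimage import grey_opening

    def pm_ground_mask(z: np.ndarray, cell: float = 1.0, slope: float = 0.3,
                       windows=(3, 5, 9, 17)) -> np.ndarray:
        ground = np.ones(z.shape, dtype=bool)
        surface = z.copy()
        prev_w = 1
        for w in windows:
            opened = grey_opening(surface, size=w)   # erode then dilate: removes bumps up to ~w
            # Elevation-difference threshold grows with window size and terrain
            # slope (the constant 0.2 plays the role of an initial threshold).
            dh = slope * (w - prev_w) * cell + 0.2
            ground &= (surface - opened) <= dh       # cells rising above the opened
            surface = opened                         # surface by more than dh are non-ground
            prev_w = w
        return ground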
159

Effect of Big Data Analytics on Audit : An exploratory qualitative study of data analytics on auditors’ skills and competence, perception of professional judgment, audit efficiency and audit quality

Alsahli, Mohamad, Kandeh, Hamadou January 2020 (has links)
Purpose: The primary goal of this thesis is to provide a deeper understanding of how big data affects professional judgment, audit efficiency, and perceived audit quality. It also explores the effect of Big Data Analytics (BDA) on the skills and competence auditors require to perform an audit in a big data environment.
Theoretical perspectives: The theoretical concepts are based on previous research and on publications by practitioners and regulators concerning BDA, professional judgment, audit efficiency, and audit quality. This literature was used to derive the research gap and research questions.
Methodology: A qualitative, exploratory approach. A literature review was conducted to uncover areas of interest requiring more research; the effect of data analytics on the audit was identified as a potential area, and a focus on audit quality was chosen, including the key factors that contribute to overall audit quality. The research is based on semi-structured interviews with auditors from Big Four audit firms in Sweden.
Empirical foundation: Empirical evidence was generated through interviews with seven auditors at different levels of the professional hierarchy. The data were analyzed using a thematic analysis approach.
Conclusions: The findings show that using BDA in the audit methodology affects the skills and competence auditors need to carry out engagement activities, with IT-related skills and knowledge gaining prominence in the audit field. Implementing data analytics is not efficient at an early stage but saves time as auditors become more familiar with the tools. Data analytics improves audit quality: auditors use analytics to gain more insight into the client's business and communicate those insights to clients, and data analytics generates fact-based audit evidence. Visualization capabilities enable auditors to visualize and analyze audit evidence to guide their professional judgment and decision making.
Key words: Big data, Data analytics, Auditors' skills and competence, Audit process, Audit efficiency, Audit quality, Professional judgment.
160

Anomaly Detection Techniques for the Protection of Database Systems against Insider Threats

Asmaa Mohamed Sallam (6387488) 15 May 2019 (has links)
The mitigation of insider threats against databases is a challenging problem, since insiders often have legitimate privileges to access sensitive data. Conventional security mechanisms, such as authentication and access control, are thus insufficient for the protection of databases against insider threats; such mechanisms need to be complemented with real-time anomaly detection techniques. Since malicious activities aimed at stealing data may consist of multiple steps executed across temporal intervals, database anomaly detection must track users' actions across time in order to detect correlated actions that collectively indicate the occurrence of anomalies. Existing real-time anomaly detection techniques for databases can detect anomalies in the patterns of referencing database entities, i.e., tables and columns, but are unable to detect increases in the sizes of data retrieved by queries; neither can they detect changes in users' data access frequencies. According to recent security reports, such changes are indicators of potential data misuse and may be the result of malicious intent to steal or corrupt the data. In this thesis, we present techniques for monitoring database accesses and detecting anomalies that are considered early signs of data misuse by insiders. Our techniques are able to track the data retrieved by queries and sequences of queries, the frequencies of execution of periodic queries, and the frequencies of referencing database tuples and tables. We provide detailed algorithms and data structures that support the implementation of our techniques, and we report the results of evaluating their implementation.
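As a toy illustration of one of the signals described above (not the thesis's actual algorithms or data structures), the sketch below tracks per-user result-set sizes and flags queries that retrieve far more rows than the user's rolling baseline; the history length and threshold factor are assumptions.

    # Flag a query as anomalous when its result-set size greatly exceeds the
    # user's rolling average over recent queries.
    from collections import defaultdict, deque

    class RetrievalSizeMonitor:
        def __init__(self, history: int = 100, factor: float = 5.0):
            self.history = defaultdict(lambda: deque(maxlen=history))
            self.factor = factor

        def observe(self, user: str, rows_returned: int) -> bool:
            """Record a query; return True if its size is anomalous for this user."""
            past = self.history[user]
            anomalous = bool(past) and rows_returned > self.factor * (sum(past) / len(past))
            past.append(rows_returned)
            return anomalous

    mon = RetrievalSizeMonitor()
    for _ in range(50):
        mon.observe("alice", 20)          # baseline: small result sets
    print(mon.observe("alice", 5000))     # True -- sudden bulk retrieval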
