
Resource Clogging Attacks in Mobile Crowd-Sensing: AI-based Modeling, Detection and Mitigation

Mobile Crowdsensing (MCS) has emerged as a ubiquitous solution for collecting data from the embedded sensors of smart devices, improving sensing capacity and reducing sensing costs over large regions. Due to the ubiquitous nature of MCS, smart devices require cyber protection against increasingly sophisticated adversaries that aim to clog device resources and spread misinformation in such a non-dedicated sensing environment. In an MCS setting, one adversary type has the primary goal of keeping participant devices occupied by submitting fake/illegitimate sensing tasks so as to clog participant resources such as battery, sensing, storage, and computing capacity. With this in mind, this thesis proposes a systematic study of fake task injection in MCS, including the modeling, detection, and mitigation of such resource clogging attacks.
We first model fake task attacks in MCS that aim to clog the server and drain battery energy from mobile devices. We grant mobility to the tasks so that an attack covers more potential participants, and propose two task movement patterns, namely the Zone-free Movement (ZFM) model and the Zone-limited Movement (ZLM) model. Based on the attack model and task movement patterns, we design task features and create structured simulation settings that can be adapted to different research scenarios and purposes.
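A minimal sketch of the two movement patterns is given below. It is not the thesis simulator: the region bounds, zone radius, and step counts are illustrative assumptions, intended only to show how ZFM lets a fake task relocate anywhere in the sensing region while ZLM confines relocation to a zone around the task's origin.

```python
# Illustrative sketch (assumed parameters, not the thesis code) of the
# Zone-free Movement (ZFM) and Zone-limited Movement (ZLM) task patterns.
import random

REGION = (0.0, 0.0, 10.0, 10.0)   # xmin, ymin, xmax, ymax (assumed units)
ZONE_RADIUS = 1.5                 # ZLM confinement radius (assumed)

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def move_zfm():
    """ZFM: the fake task may relocate to any point in the whole sensing region."""
    xmin, ymin, xmax, ymax = REGION
    return random.uniform(xmin, xmax), random.uniform(ymin, ymax)

def move_zlm(ox, oy):
    """ZLM: the fake task relocates, but only within ZONE_RADIUS of its origin (ox, oy)."""
    xmin, ymin, xmax, ymax = REGION
    nx = clamp(ox + random.uniform(-ZONE_RADIUS, ZONE_RADIUS), xmin, xmax)
    ny = clamp(oy + random.uniform(-ZONE_RADIUS, ZONE_RADIUS), ymin, ymax)
    return nx, ny

# Example: trace one fake task under each movement pattern for 5 steps.
ox, oy = 5.0, 5.0
for step in range(5):
    print(step, "ZFM:", move_zfm(), "ZLM:", move_zlm(ox, oy))
```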
Since the development of a secure sensing campaign depends heavily on a realistic adversarial model, we apply the self-organizing feature map (SOFM) to select attack regions that maximize the number of impacted participants and recruits, according to the user movement patterns of the studied cities. Our simulation results verify the magnified effect of SOFM-based fake task injection compared with randomly selected attack regions, in terms of more affected recruits and participants and increased energy consumption in the recruited devices due to illegitimate task submissions.
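The sketch below illustrates the general idea of SOFM-based region selection under stated assumptions: participant locations are synthetic, and the lattice size, learning rate, and iteration count are illustrative choices rather than the thesis configuration. A small SOFM is trained on participant coordinates, and the lattice nodes that attract the most participants mark the high-density regions an attacker would target.

```python
# Illustrative SOFM sketch (assumed data and parameters, not the thesis
# implementation): map participant locations onto a small lattice and rank
# nodes by how many participants they attract.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic participant locations: two dense clusters plus background noise.
locations = np.vstack([
    rng.normal([2.0, 2.0], 0.3, size=(300, 2)),
    rng.normal([7.0, 8.0], 0.4, size=(300, 2)),
    rng.uniform(0.0, 10.0, size=(100, 2)),
])

GRID = 4                      # 4x4 SOFM lattice (assumed)
weights = rng.uniform(0.0, 10.0, size=(GRID, GRID, 2))
coords = np.array([[i, j] for i in range(GRID) for j in range(GRID)]).reshape(GRID, GRID, 2)

def winner(x):
    """Index of the lattice node whose weight vector is closest to sample x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Train: learning rate and neighborhood width shrink over time.
for t in range(2000):
    x = locations[rng.integers(len(locations))]
    lr = 0.5 * (1.0 - t / 2000)
    sigma = 2.0 * (1.0 - t / 2000) + 0.1
    w = winner(x)
    dist = np.linalg.norm(coords - np.array(w), axis=2)
    h = np.exp(-(dist ** 2) / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# Nodes attracting the most participants approximate dense regions where
# injected fake tasks would reach the most devices.
hits = np.zeros((GRID, GRID), dtype=int)
for x in locations:
    hits[winner(x)] += 1
best = np.unravel_index(np.argsort(hits, axis=None)[::-1][:3], hits.shape)
for i, j in zip(*best):
    print(f"target region centre ~ {weights[i, j].round(2)}, participants mapped: {hits[i, j]}")
```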
To secure the MCS platform, we introduce machine learning (ML) methods at the MCS server to detect and eliminate fake tasks, ensuring that the tasks reaching the user side are legitimate. Two ML algorithms, Random Forest and Gradient Boosting, are adopted to predict the legitimacy of a task, with Gradient Boosting shown to be the more promising of the two. We validate the feasibility of ML in differentiating legitimate from fake tasks in terms of precision, recall, and F1 score. By comparing energy consumption, affected recruits, and impacted candidates with and without ML-based detection, we demonstrate the effectiveness of ML in mitigating the effect of fake task injection.
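A minimal sketch of this server-side detection step is shown below. The feature names and labels are hypothetical stand-ins (the thesis dataset is not reproduced here); the sketch only shows how the two classifiers would be trained and compared with precision, recall, and F1 score.

```python
# Illustrative sketch (synthetic data, assumed task features) of training
# Random Forest and Gradient Boosting to flag illegitimate tasks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(42)
n = 2000
# Hypothetical task features: duration, reward, battery requirement, radius.
X = np.column_stack([
    rng.uniform(1, 60, n),     # task duration (min)
    rng.uniform(0.1, 5.0, n),  # offered reward
    rng.uniform(1, 20, n),     # battery requirement (%)
    rng.uniform(0.1, 3.0, n),  # sensing radius (km)
])
# Synthetic label: fake tasks skew toward high battery demand and low reward.
y = ((X[:, 2] > 12) & (X[:, 1] < 2.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("Gradient Boosting", GradientBoostingClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name,
          "precision=%.3f" % precision_score(y_te, pred),
          "recall=%.3f" % recall_score(y_te, pred),
          "F1=%.3f" % f1_score(y_te, pred))
```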

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/40082
Date: 17 January 2020
Creators: Zhang, Yueqian
Contributors: Kantarci, Burak
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
