1

USER CONTROLLED PRIVACY BOUNDARIES FOR SMART HOMES

Ryan David Fraser (15299059) 17 April 2023 (has links)
The rise of Internet of Things (IoT) technologies into the substantial commercial market it occupies today comes with several challenges. Not only do these systems face the security and reliability challenges of traditional information technology (IT) products, but they also face the challenge of loss of privacy. Concerns over user data privacy are most prevalent when these technologies enter the home environment. In this dissertation, quasi-experimental research is conducted on the ability of users to protect private data in a heterogeneous smart home network. The experiments are conducted and verified on eight different smart home devices, using network traffic analysis and discourse analysis to identify privacy concerns. The results show that data privacy within the confines of the user's home often cannot be ensured while maintaining smart home device functionality. The dissertation discusses how these results can inform users and manufacturers alike in the use and development of future smart home technologies to better protect privacy.
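To make concrete the kind of network traffic analysis such experiments rely on, the sketch below summarizes per-device traffic from a router capture and flags non-TLS flows. The capture file name, the use of the scapy library, and the TLS-port heuristic are illustrative assumptions, not the dissertation's actual tooling.

```python
# A minimal sketch of a smart home traffic summary, assuming a capture file
# "smart_home.pcap" taken at the home router (hypothetical name) and scapy.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("smart_home.pcap")
bytes_to_host = defaultdict(int)   # (device IP, remote IP, dst port) -> bytes sent
for pkt in packets:
    if IP in pkt and TCP in pkt:
        key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].dport)
        bytes_to_host[key] += len(pkt)

for (src, dst, port), size in sorted(bytes_to_host.items(), key=lambda kv: -kv[1]):
    # Port 443 traffic is TLS-encrypted; other ports may carry plaintext payloads.
    note = "TLS" if port == 443 else "possibly plaintext"
    print(f"{src} -> {dst}:{port}  {size} bytes  ({note})")
```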
2

MEMBERSHIP INFERENCE ATTACKS AND DEFENSES IN CLASSIFICATION MODELS

Jiacheng Li (17775408) 12 January 2024 (has links)
Neural network-based machine learning models are now prevalent in our daily lives, from voice assistants [lopez2018alexa] to image generation [ramesh2021zero] and chatbots (e.g., ChatGPT-4 [openai2023gpt4]). These large neural networks are powerful but also raise serious security and privacy concerns, such as whether the personal data used to train them can be leaked. One way to understand and address this concern is to study membership inference (MI) attacks and defenses [shokri2017membership, nasr2019comprehensive], in which an adversary seeks to infer whether a given instance was part of the training data. We study MI attacks against classifiers, where the attacker's goal is to determine whether a data instance was used to train the classifier. Through a systematic cataloging of existing MI attacks and extensive experimental evaluations of them, we find that a model's vulnerability to MI attacks is tightly related to the generalization gap, the difference between training accuracy and test accuracy. We then propose a defense against MI attacks that aims to close this gap by intentionally reducing the training accuracy. More specifically, the training process attempts to match the training and validation accuracies by means of a new set regularizer that uses the Maximum Mean Discrepancy between the empirical distributions of softmax outputs on the training and validation sets. Our experimental results show that combining this approach with another simple defense (mix-up training) significantly improves on the state-of-the-art defenses against MI attacks, with minimal impact on testing accuracy.

Furthermore, we consider the challenge of performing membership inference attacks in a federated learning setting for image classification, where an adversary can only observe the communication between the central node and a single client (a passive white-box attack). Passive attacks are among the hardest to detect, since they can be performed without modifying the behavior of the central server or its clients and assume no access to private data instances. The key insight of our method is the empirical observation that, near parameters that generalize well in test, the gradients of large overparameterized neural network models statistically behave like high-dimensional independent isotropic random vectors. Using this insight, we devise two attacks that are largely unaffected by existing and proposed defenses. We further validate the hypothesis that our attack depends on overparametrization by showing that increasing the level of overparametrization (without changing the neural network architecture) positively correlates with attack effectiveness.

Finally, we observe that training instances have different degrees of vulnerability to MI attacks. Most instances have low loss even when not included in training; the model can fit them well without concern for MI attacks. An effective defense only needs to (possibly implicitly) identify the instances that are vulnerable to MI attacks and avoid overfitting them. A major challenge is how to achieve this in an efficient training process. Leveraging two recent advances in representation learning, counterfactually invariant representations and subspace learning methods, we introduce a novel Membership-Invariant Subspace Training (MIST) method to defend against MI attacks. MIST avoids overfitting the vulnerable instances without significant impact on other instances. We have conducted extensive experimental studies comparing MIST with other state-of-the-art (SOTA) MI defenses against several SOTA MI attacks, and find that MIST outperforms the other defenses while causing minimal reduction in testing accuracy.
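A minimal sketch of the set-regularizer idea described above follows: an MMD penalty between softmax outputs on training and validation batches is added to the usual classification loss. The Gaussian kernel, its bandwidth, and the weight lambda_mmd are assumptions for illustration; the dissertation's exact formulation and the mix-up combination are not reproduced.

```python
# Hedged sketch: penalize the Maximum Mean Discrepancy (MMD) between softmax
# outputs on training and validation batches, in the spirit described above.
import torch
import torch.nn.functional as F

def gaussian_mmd(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate between two sample sets x, y of shape (n, d)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def regularized_loss(model, train_batch, val_batch, lambda_mmd=1.0):
    xs, ys = train_batch
    logits = model(xs)
    loss = F.cross_entropy(logits, ys)              # usual classification loss
    p_train = F.softmax(logits, dim=1)              # softmax outputs, training batch
    p_val = F.softmax(model(val_batch[0]), dim=1)   # softmax outputs, validation batch
    return loss + lambda_mmd * gaussian_mmd(p_train, p_val)
```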
3

DIFFERENTIAL PRIVACY IN DISTRIBUTED SETTINGS

Zitao Li (14135316) 18 November 2022 (has links)
Data is considered the "new oil" of the information society and digital economy. While many commercial activities and government decisions are based on data, the public has raised growing concerns about privacy leakage when private data are collected and used. In this dissertation, we investigate privacy risks in settings where data are distributed across multiple data holders and only an untrusted central server is available. We provide solutions for several problems in this setting under a privacy notion called differential privacy (DP). Our solutions guarantee that only limited and controllable privacy leakage from the data holders can occur, while the utility of the final results, such as model prediction accuracy, remains comparable to that of non-private algorithms.

First, we investigate the problem of estimating a distribution over a numerical domain while satisfying local differential privacy (LDP). Our protocol prevents privacy leakage in the data collection phase, in which an untrusted data aggregator (or server) wants to learn the distribution of private numerical data among all users. The protocol consists of (1) a new reporting mechanism, the square wave (SW) mechanism, which randomizes user inputs before they are shared with the aggregator; and (2) an Expectation Maximization with Smoothing (EMS) algorithm, which is applied to histograms aggregated from SW reports to estimate the original distribution.

Second, we study the matrix factorization problem in three federated learning settings with an untrusted server: vertical, horizontal, and local federated learning. We propose a generic algorithmic framework for solving the problem in all three settings and show how to adapt the algorithm into differentially private versions that prevent privacy leakage in both the training and publishing stages.

Finally, we propose an algorithm for solving the k-means clustering problem in vertical federated learning (VFL). A major challenge in VFL is the lack of a global view of each data point. To overcome this challenge, we propose a lightweight and differentially private set intersection cardinality estimation algorithm based on the Flajolet-Martin (FM) sketch to convey the weight information of the synopsis points. We provide a theoretical utility analysis for the cardinality estimation algorithm and further refine it for better empirical performance.
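As a point of reference for the last contribution, the sketch below implements a plain (non-private) Flajolet-Martin style cardinality sketch and uses inclusion-exclusion on merged sketches to estimate an intersection size. The differential-privacy noise and the refined estimator from the dissertation are omitted; the bucket count and correction constant are the textbook PCSA values.

```python
# Non-private FM/PCSA-style sketch, for illustration only: estimate set sizes
# from small bitmaps and derive an intersection size via inclusion-exclusion.
import hashlib

NUM_BUCKETS = 64     # must be a power of two
PHI = 0.77351        # Flajolet-Martin correction constant

def _lowest_zero_bit(bitmap: int) -> int:
    """0-indexed position of the lowest zero bit in the bitmap."""
    pos = 0
    while bitmap & 1:
        bitmap >>= 1
        pos += 1
    return pos

def _lowest_set_bit(x: int) -> int:
    """0-indexed position of the lowest set bit (x must be non-zero)."""
    return (x & -x).bit_length() - 1

def make_sketch(items):
    buckets = [0] * NUM_BUCKETS
    for item in items:
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
        b, w = h % NUM_BUCKETS, h // NUM_BUCKETS   # bucket index, remaining hash bits
        if w:
            buckets[b] |= 1 << _lowest_set_bit(w)  # record this item's rank
    return buckets

def estimate(buckets):
    mean_rank = sum(_lowest_zero_bit(b) for b in buckets) / NUM_BUCKETS
    return NUM_BUCKETS / PHI * 2 ** mean_rank

def merge(a, b):
    """Sketch of the union of two sets: bitwise OR, bucket by bucket."""
    return [x | y for x, y in zip(a, b)]

sa = make_sketch(range(0, 6000))
sb = make_sketch(range(3000, 9000))
union_est = estimate(merge(sa, sb))
intersection_est = estimate(sa) + estimate(sb) - union_est   # inclusion-exclusion
print(f"estimated intersection size: {intersection_est:.0f} (true value: 3000)")
```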
4

Analyzing and Improving Security-Enhanced Communication Protocols

Weicheng Wang (17349748) 08 November 2023 (has links)
Security and privacy are among the top concerns when experts select communication protocols. When a protocol is confirmed to have problems, such as leaking users' private information, its developers must upgrade it to an improved version within a short interval, or the protocol will be discarded or replaced by more secure alternatives.

There are always communication protocols that fail to protect users' privacy or that expose users' accounts to attack. A malicious user or an attacker can exploit vulnerabilities in a protocol to gain private information, or even take control of users' devices. Hence, it is important to expose such protocols and improve them to enhance their security properties. Other protocols protect users' privacy but do so inefficiently; with new cryptographic techniques or modern hardware support, they can be improved to provide stronger security protection with less overhead.

In this dissertation, we focus on analyzing and improving security-enhanced communication protocols in three aspects:

(1) We systematically analyzed an existing and widely used communication protocol: Zigbee. We identified vulnerabilities in the existing Zigbee protocols during the new-device joining process and proposed a security-enhanced Zigbee protocol. The new protocol utilizes public-key primitives with little extra overhead and is able to protect against outsider attackers. It is formally verified and implemented as a prototype.

(2) We explored one type of communication inspection system: keyword-based deep packet inspection (DPI). This space includes several protocols, such as BlindBox, PrivDPI, PE-DPI, and mbTLS. We analyzed these protocols and identified their vulnerabilities or inefficiencies. To address these issues, we proposed three enhanced protocols, MT-DPI, BH-DPI, and CE-DPI, which work readily with deployed AES-based encryption schemes and are well supported by AES-NI. In particular, MT-DPI utilizes multiplicative triples to support multi-party computation (see the sketch following this abstract).

(3) We developed a technique to support distributed confidential computing using trusted execution environments. We found that existing confidential computing solutions cannot handle multi-stakeholder scenarios well and do not give reasonable control over derived data after computation. We analyzed six real use cases and pointed out what is missing in existing solutions. To bridge the gap, we developed SeDS, a policy language built on top of the trusted execution environment. It addresses specific privacy needs during collaboration and protects derived data. We evaluated the language on the use cases and showed the benefits of applying the new policies.
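The multiplicative triples mentioned for MT-DPI refer to a generic multi-party computation building block, commonly known as Beaver triples. The sketch below shows the two-party version over a prime field with a trusted dealer; it is not the MT-DPI protocol itself, and the field size and values are illustrative.

```python
# Self-contained sketch of secure two-party multiplication with a Beaver
# (multiplicative) triple; not the MT-DPI protocol itself.
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing

def share(v):
    """Split v into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (v - r) % P

def beaver_triple():
    """Dealer-generated triple (a, b, c) with c = a*b, handed out as shares."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return share(a), share(b), share(a * b % P)

def multiply(x_shares, y_shares):
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    # Each party masks its input share with the triple; d = x-a and e = y-b are opened.
    d = (x_shares[0] - a0 + x_shares[1] - a1) % P
    e = (y_shares[0] - b0 + y_shares[1] - b1) % P
    # Local shares of the product; only party 0 adds the public d*e term.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

xs, ys = share(1234), share(5678)
z0, z1 = multiply(xs, ys)
print((z0 + z1) % P == 1234 * 5678 % P)  # True: the shares reconstruct x*y
```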
5

DIFFERENTIALLY PRIVATE SUBLINEAR ALGORITHMS

Tamalika Mukherjee (16050815) 07 June 2023 (has links)
Collecting user data is crucial for advancing machine learning, social science, and government policy, but the privacy of the users whose data are collected is a growing concern. Differential Privacy (DP) has emerged as the standard notion for privacy protection with robust mathematical guarantees. Analyzing massive amounts of data in a privacy-preserving manner motivates the study of differentially private algorithms that are also super-efficient.

This thesis initiates a systematic study of differentially private sublinear-time and sublinear-space algorithms. Its contributions are two-fold. First, we design some of the first differentially private sublinear algorithms for many fundamental problems. Second, we develop general DP techniques for designing differentially private sublinear algorithms.

We give the first DP sublinear algorithm for clustering by generalizing a subsampling framework from the non-DP sublinear-time literature. We give the first DP sublinear algorithm for estimating the maximum matching size. Our DP sublinear algorithm for estimating the average degree of a graph achieves a better approximation than previous work. We give the first DP algorithm for releasing L2-heavy hitters in the sliding window model, and a pure-DP L1-heavy hitter algorithm in the same model that improves upon previous work.

We also develop general techniques that address the challenges of designing sublinear DP algorithms. First, we introduce the concept of Coupled Global Sensitivity (CGS). Intuitively, the CGS of a randomized algorithm generalizes the classical notion of global sensitivity of a function by considering a coupling of the random coins of the algorithm when run on neighboring inputs. We show that one can achieve pure DP by adding Laplace noise proportional to the CGS of an algorithm. Second, we give a black-box DP transformation for a specific class of approximation algorithms. We show that such algorithms can be made differentially private without sacrificing accuracy, as long as the function has small global sensitivity. In particular, this transformation yields sublinear DP algorithms for many problems, including triangle counting, the weight of the minimum spanning tree, and norm estimation.
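The release step behind "Laplace noise proportional to the CGS" can be pictured as follows: given a sensitivity bound S and a privacy budget epsilon, add Laplace noise of scale S/epsilon to the algorithm's output. Deriving the CGS bound for a concrete sublinear algorithm is the substantive work and is not shown here; the sampled average-degree estimator and the bound of 1.0 below are illustrative stand-ins.

```python
# Minimal sketch of a pure-DP release via the Laplace mechanism, with a toy
# sublinear-style estimator; the sensitivity bound of 1.0 is an assumption.
import random

def sampled_average_degree(degrees, sample_size=100):
    """Sublinear-style estimator: average degree over a uniform vertex sample."""
    sample = random.sample(range(len(degrees)), min(sample_size, len(degrees)))
    return sum(degrees[v] for v in sample) / len(sample)

def dp_release(value, sensitivity_bound, epsilon):
    """Pure epsilon-DP release: add Laplace noise of scale sensitivity/epsilon."""
    noise = random.expovariate(1.0) - random.expovariate(1.0)  # standard Laplace(0, 1)
    return value + noise * sensitivity_bound / epsilon

degrees = [3, 5, 2, 8, 4, 6, 1, 7] * 200          # toy degree sequence
est = sampled_average_degree(degrees)
print(dp_release(est, sensitivity_bound=1.0, epsilon=0.5))
```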
6

LEVERAGING MULTIMODAL SENSING FOR ENHANCING THE SECURITY AND PRIVACY OF MOBILE SYSTEMS

Habiba Farrukh (13969653) 26 July 2023 (has links)
Mobile systems, such as smartphones, wearables (e.g., smartwatches, AR/VR headsets), and IoT devices, have come a long way from being just a method of communication to sophisticated sensing devices that monitor and control several aspects of our lives. These devices have enabled useful applications in a wide range of domains, from healthcare and finance to the energy and agriculture industries. While this advancement has enabled applications in many aspects of human life, it has also made these devices an attractive target for adversaries.

In this dissertation, I focus on how the various sensors on mobile devices can be exploited by adversaries to violate users' privacy, and I present methods that use these sensors to improve the security of the devices. My thesis posits that multi-modal sensing can be leveraged to enhance the security and privacy of mobile systems.

First, I describe my work demonstrating that human interaction with mobile devices and their accessories (e.g., stylus pencils) generates identifiable patterns in permissionless mobile sensors' data, which reveal sensitive information about users. Specifically, I developed S3 to show how the magnets embedded in stylus pencils affect the mobile magnetometer and can be exploited to infer a user's handwriting, which is highly private. I then designed LocIn to infer a user's indoor semantic location from 3D spatial data collected by mixed reality devices through LiDAR and depth sensors. These works highlight new privacy issues arising from advanced sensors on emerging commodity devices.

Second, I present my work characterizing the threats against smartphone authentication and IoT device pairing and proposing usable and secure methods to protect against these threats. I developed two systems, FaceRevelio and IoTCupid, to enable reliable and secure user and device authentication, respectively, to protect users' private information (e.g., contacts, messages, credit card details) on commodity mobile devices and to allow secure communication between IoT devices. These works enable usable authentication on diverse mobile and IoT devices and eliminate the dependency on sophisticated hardware for user-friendly authentication.
7

NON-INTRUSIVE WIRELESS SENSING WITH MACHINE LEARNING

YUCHENG XIE (16558152) 30 August 2023 (has links)
This dissertation explores non-intrusive wireless sensing for diet and fitness activity monitoring, and additionally assesses security risks in human activity recognition (HAR). It delves into the use of WiFi and millimeter wave (mmWave) signals for monitoring eating behaviors, discerning intricate eating activities, and observing fitness movements. The proposed systems harness variations in wireless signal propagation to record human behavior while providing exhaustive details on dietary and exercise habits. Significant contributions include unsupervised learning methodologies for detecting dietary and fitness activities, soft-decision and deep neural networks for assorted activity recognition, tiny-motion mechanisms for recovering subtle mouth muscle movements, space-time-velocity features for multi-person tracking, and generative adversarial networks and domain adaptation structures that reduce training effort and enable cross-domain deployment. A series of comprehensive tests validates the efficacy and precision of the proposed non-intrusive wireless sensing systems. Additionally, the dissertation probes the security vulnerabilities of mmWave-based HAR systems and puts forth several sophisticated adversarial attacks: targeted, untargeted, universal, and black-box. It designs adversarial perturbations that aim to deceive the HAR models while striving to minimize detectability. The research offers insights into the issues, and efficient solutions, related to non-intrusive sensing tasks and the security challenges linked with wireless sensing technologies.
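For orientation, the simplest instance of such an adversarial perturbation is the untargeted fast gradient sign method (FGSM), sketched below against a generic classifier. The dissertation's targeted, universal, and black-box attacks on mmWave HAR models are considerably more involved; the toy model, input shape, and epsilon here are assumptions.

```python
# Hedged sketch of an untargeted FGSM perturbation against a generic classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_untargeted(model, x, y_true, epsilon=0.03):
    """Return x perturbed so as to increase the loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

# Toy stand-in for a HAR model over flattened sensor frames (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 32, 10))
x = torch.randn(1, 64, 32)            # one fake range-Doppler style frame
y = torch.tensor([3])                 # its (assumed) activity label
x_adv = fgsm_untargeted(model, x, y)
print((x_adv - x).abs().max())        # perturbation magnitude is at most epsilon
```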
8

ENHANCING PRIVACY OF TRAINING DATA OF DEEP NEURAL NETWORKS ON EDGE USING TRUSTED EXECUTION ENVIRONMENTS

Gowri Ramshankar (18398499) 18 April 2024 (has links)
Deep Neural Networks (DNNs) are deployed in many applications, and protecting the privacy of their training data has become a major concern. Membership Inference Attacks (MIAs) occur when an unauthorized party is able to determine whether a piece of data was used to train a DNN. This work investigates the use of Trusted Execution Environments (TEEs) in modern processors to protect the privacy of training data. Running DNNs in a TEE, however, encounters many challenges, including limited computing and storage resources as well as a lack of development frameworks. We propose a new method to partition pre-trained DNNs so that parts of them can fit inside the TEE to protect data privacy. The existing software infrastructure for running DNNs on TEEs requires a significant amount of human effort using C programs, whereas most existing DNNs are implemented in Python. We therefore present a framework that automates most of the process of porting Python-based DNNs to a TEE. The proposed method is deployed in Arm TrustZone-A on a Raspberry Pi 3B+ with OP-TEE OS and evaluated on popular image classification models: AlexNet, ResNet, and VGG. Experimental results show that our method reduces the accuracy of gradient-based MIAs on AlexNet, VGG-16, and ResNet-20 evaluated on the CIFAR-100 dataset by 17.9%, 11%, and 35.3%, respectively. On average, processing an image in the native execution environment takes 4.3 seconds, whereas in the TEE it takes about 10.1 seconds.
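The partitioning idea can be pictured as splitting a pre-trained network into a normal-world front end and a back end destined for the TEE, as in the sketch below. The actual OP-TEE/TrustZone deployment requires a C trusted application and a world switch, which are not shown; the split point and the torchvision AlexNet are illustrative choices.

```python
# Hedged sketch of DNN partitioning: a front end for the normal world and a
# back end intended for the TEE; the split index is an assumption.
import torch
import torch.nn as nn
from torchvision import models

full = models.alexnet(weights=None)          # pre-trained weights would be loaded here
layers = list(full.features) + [full.avgpool, nn.Flatten()] + list(full.classifier)

split = 10                                    # assumed partition point
normal_world = nn.Sequential(*layers[:split])   # runs outside the TEE
tee_part = nn.Sequential(*layers[split:])       # would run inside the TEE

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    activation = normal_world(x)              # only intermediate activations cross
    logits = tee_part(activation)             # the boundary, not the protected layers
print(logits.shape)                           # torch.Size([1, 1000])
```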
9

DIGITAL TRAILS IN VIRTUAL WORLDS: A FORENSIC INVESTIGATION OF VIRTUAL REALITY SOCIAL COMMUNITY APPLICATIONS ON OCULUS PLATFORMS

Samuel Li Feng Ho (17602290) 12 December 2023 (has links)
Virtual Reality (VR) has become a pivotal element of modern society, transforming interactions with digital content and interpersonal communication. As VR integrates into various sectors, understanding its forensic potential is crucial for legal, investigative, and security purposes; this involves examining the digital footprints and artifacts left by immersive technologies. While previous studies in digital forensics have concentrated primarily on traditional computing devices such as smartphones and computers, research on VR, particularly on specific devices like the Oculus Go, Meta Quest, and Meta Quest 2, has been limited. This thesis explores the digital forensics of VR, focusing on the Oculus Go, Meta Quest, and Meta Quest 2, using tools such as Magnet AXIOM and Wireshark. The research uncovers specific forensic and network-based artifacts from eight social community applications, revealing users' personally identifiable information, application usage history, WiFi network details, and multimedia content. These findings have significant implications for legal proceedings and cybercrime investigations, highlighting the role such artifacts can play in influencing the outcome of cases. This research not only deepens our understanding of VR-related digital forensics but also sets the stage for further investigations in this rapidly evolving domain.
10

GENERAL-PURPOSE STATISTICAL INFERENCE WITH DIFFERENTIAL PRIVACY GUARANTEES

Zhanyu Wang (13893375) 06 December 2023 (has links)
Differential privacy (DP) uses a probabilistic framework to measure the level of privacy protection of a mechanism that releases data analysis results to the public. Although DP is widely used by both government and industry, there is still a lack of research on statistical inference under DP guarantees. On the one hand, existing DP mechanisms mainly aim to extract dataset-level information instead of population-level information. On the other hand, DP mechanisms introduce calibrated noise into the released statistics, which often results in sampling distributions that are more complex and intractable than the non-private ones. This dissertation provides general-purpose methods for statistical inference, such as confidence intervals (CIs) and hypothesis tests (HTs), that satisfy DP guarantees.

In the first part of the dissertation, we examine a DP bootstrap procedure that releases multiple private bootstrap estimates to construct DP CIs. We present new DP guarantees for this procedure and propose to use deconvolution with the DP bootstrap estimates to derive CIs for inference tasks such as population mean, logistic regression, and quantile regression. Our method achieves the nominal coverage level in both simulations and real-world experiments and offers the first approach to private inference for quantile regression.

In the second part of the dissertation, we propose to use the simulation-based "repro sample" approach to produce CIs and HTs based on DP statistics. Our methodology has finite-sample guarantees and can be applied to a wide variety of private inference problems. It appropriately accounts for biases introduced by DP mechanisms (such as clamping) and improves over other state-of-the-art inference methods in terms of the coverage and type I error of the private inference.

In the third part of the dissertation, we design a debiased parametric bootstrap framework for DP statistical inference. We propose the adaptive indirect estimator, a novel simulation-based estimator that is consistent and corrects the clamping bias introduced by DP mechanisms. We also prove that our estimator has the optimal asymptotic variance among all well-behaved consistent estimators and that the parametric bootstrap results based on it are consistent. Simulation studies show that our framework produces valid DP CIs and HTs in finite-sample settings and is more efficient than other state-of-the-art methods.
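A stripped-down version of the DP bootstrap idea in the first part might look like the sketch below: release many bootstrap means, each privatized with noise, and form an interval from them. The deconvolution step and the privacy accounting across releases, which are central to the dissertation's method, are omitted; the clipping range, noise scale, and number of releases are assumptions.

```python
# Hedged sketch of a noisy-bootstrap confidence interval for a mean; the
# dissertation's deconvolution and privacy accounting are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=500)

LOW, HIGH = 0.0, 20.0      # assumed clipping range, bounding the mean's sensitivity
B, sigma = 200, 0.5        # number of bootstrap releases and per-release noise scale

dp_estimates = []
for _ in range(B):
    resample = rng.choice(data, size=len(data), replace=True)
    clipped_mean = np.clip(resample, LOW, HIGH).mean()
    dp_estimates.append(clipped_mean + rng.normal(0.0, sigma))   # noisy release

# Naive percentile interval over the noisy bootstrap estimates (deconvolution
# would be used to remove the extra width contributed by the added noise).
lo, hi = np.percentile(dp_estimates, [2.5, 97.5])
print(f"approximate 95% interval for the mean: ({lo:.2f}, {hi:.2f})")
```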
