31. Implementing and Investigating Partial Consent for Privacy Management of Android. Nallamilli, Mohan Krishna Reddy; Jagatha, Satya Venkat Naidu. January 2022.
Background: Data privacy and security have been major concerns in recent years. Data privacy concerns everybody who owns a smartphone or accesses a website, because of the applications installed on the device and the advertising cookies acquired through websites. Advertising cookies within applications or sites that track user content can expose all of a user's sensitive personal data. This study examines the viability of applying conditional consent to increase consumers' trust in sharing their data, and assesses the societal and technological implications of implementing it. This is accomplished by integrating a third option – maybe – into the access control mechanism.
Research Idea: After reviewing the issues concerning user privacy breaches in Android applications, we propose a Maybe option with which the user can grant access to permissions for a specified period of time, after which those permissions are automatically disabled.
Objectives and Research Methods: The primary goal of our work is to determine the feasibility of implementing partial consent in Android applications, as well as how users understand and are willing to use the suggested option. We chose an experiment, a systematic mapping study, and a survey as our research methods.
Results: We built a prototype permissions application that provides a Maybe option with which the user may grant rights for a certain period of time, after which the permissions are automatically deactivated. In the survey, many respondents chose the offered option and fully comprehended the Maybe option.
Conclusions: We examined the usability of the proposed option. The respondents accepted the proposed option and expressed a need for it. This can change how securely data is provided to third-party applications.
Keywords: Partial consent, Access control, Data Privacy, Data Security, Usability Aspect.
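As a rough illustration of the Maybe mechanism this abstract describes, the sketch below grants a permission for a fixed duration and treats it as revoked once that duration elapses. It is only a conceptual Python sketch: the class and method names are invented for illustration and are not taken from the thesis prototype or the Android permission API.

```python
# Minimal sketch of "partial consent": a permission granted for a limited time
# window and automatically treated as revoked afterwards. Hypothetical names.
import time

class PartialConsentManager:
    def __init__(self):
        self._expiry = {}  # permission name -> epoch seconds when the grant lapses

    def grant_maybe(self, permission: str, duration_s: float) -> None:
        """Grant a permission temporarily ("Maybe"): it auto-expires after duration_s."""
        self._expiry[permission] = time.time() + duration_s

    def is_granted(self, permission: str) -> bool:
        """A permission counts as granted only while its expiry lies in the future."""
        return time.time() < self._expiry.get(permission, 0.0)

manager = PartialConsentManager()
manager.grant_maybe("CAMERA", duration_s=2.0)
print(manager.is_granted("CAMERA"))   # True immediately after the grant
time.sleep(2.1)
print(manager.is_granted("CAMERA"))   # False once the consent window has lapsed
```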
32. Secure Data Service Outsourcing with Untrusted Cloud. Xiong, Huijun. 10 June 2013.
Outsourcing data services to the cloud is a natural fit for cloud usage. However, increasing security and privacy concerns from both enterprises and individuals about their outsourced data inhibit this trend. In this dissertation, we introduce service-centric solutions to address two types of security threats in current cloud environments: semi-honest cloud providers and malicious cloud customers. Our solutions aim not only to provide confidentiality and access controllability of outsourced data with strong cryptographic guarantees but, more importantly, to fulfill the specific security requirements of different cloud services in effective, systematic ways.
To provide strong cryptographic guarantees for outsourced data, we study the generic security problem caused by semi-honest cloud providers and introduce a novel proxy-based secure data outsourcing scheme. Specifically, our scheme improves the efficiency of traditional proxy re-encryption by integrating symmetric encryption with the proxy re-encryption algorithm. By lowering the computation cost of applying re-encryption operations directly on encrypted data, our scheme allows flexible and efficient user revocation without revealing the underlying data or incurring heavy computation in the untrusted cloud.
To address the specific requirements of different cloud services, we investigate two of them: cloud-based content delivery and cloud-based data processing. For the former, we focus on preserving the cache property of the content delivery network and propose CloudSeal, a scheme for securely and flexibly sharing and distributing content via the public cloud. By caching the major part of a stored encrypted content object in the delivery network for content distribution and keeping the minor part with the data owner for content authorization, CloudSeal achieves security and efficiency both theoretically and experimentally. For the latter service, we design and realize CloudSafe, a framework that supports secure and efficient data processing with minimal key leakage in the vulnerable cloud virtualization environment. Through a one-time cryptographic key strategy and a centralized key management framework, CloudSafe efficiently avoids cross-VM side-channel attacks from malicious cloud customers. Our experimental results confirm the practicality and scalability of CloudSafe.
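The hybrid idea from the second paragraph above, encrypting bulk data once under a symmetric data key and letting the proxy re-encryption layer handle only the small wrapped key, can be illustrated roughly as follows. This is not the dissertation's actual scheme: the XOR "cipher" and the ProxyReEncryptionStub below are placeholders standing in for real symmetric encryption and a real proxy re-encryption algorithm.

```python
# Sketch of the hybrid symmetric + proxy re-encryption pattern: revocation or
# user changes re-process only the wrapped key, never the bulk ciphertext.
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher used only to keep the example dependency-free.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ProxyReEncryptionStub:
    """Stands in for a real PRE scheme: wraps a data key for one user and lets a
    proxy transform the wrapped key for another user without seeing the key."""
    def wrap_key(self, data_key: bytes, user: str) -> tuple:
        return (user, data_key)        # placeholder: a real scheme would encrypt here

    def re_encrypt(self, wrapped: tuple, new_user: str) -> tuple:
        _, data_key = wrapped
        return (new_user, data_key)    # placeholder for the proxy-side transformation

pre = ProxyReEncryptionStub()
data_key = os.urandom(16)
ciphertext = xor_cipher(b"outsourced record", data_key)   # bulk data encrypted once
wrapped_for_alice = pre.wrap_key(data_key, "alice")

# Revocation/rotation touches only the small wrapped key.
wrapped_for_bob = pre.re_encrypt(wrapped_for_alice, "bob")
print(xor_cipher(ciphertext, wrapped_for_bob[1]))          # b'outsourced record'
```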
33. OODOOLL: Exploring the Potential of Data Physicalisations to Increase Awareness and Control of Personal Data Privacy. Skavron, Sarah. January 2023.
Through online activities, we produce a large amount of personal data traces every day. Many people acknowledge the importance of protecting personal data online, but they might not act accordingly. This thesis project seeks to make these often hidden traces visible, and thus understandable, through a data physicalisation, in order to increase awareness and knowledge and to spark reflection on how and whether certain data should be protected. Six design activities with a focus on the active involvement of participants were conducted to create the concept of "OODOOLL", a reversed voodoo doll intended to protect users from potential harm related to their online activity and to spark reflection on data privacy. While some of the potentials of a data physicalisation, such as initiating conversations or increased self-reflection, could be realised, the concept had several limitations, e.g. technical constraints and the difficulty of breaking down the complexity of aggregated data use. Especially with the increased use of digital devices and the rise of emerging technologies, it is important for general users, but also for interaction designers, to be aware of and understand data collection, data use and data protection.
34. Phishing: A qualitative study of users' e-mail classification process, and how it is influenced by the subjective knowledge. Puke Andersson, Hanna; Stenberg, Sofie. January 2022.
Background. E-mail phishing is a type of social engineering in which the threat actor sends e-mails with the intention of, for example, gaining sensitive information or access to sensitive assets. Anyone can be the target of a phishing attempt, and any user of a digital environment should be aware of which factors in an e-mail to be attentive to.
Objectives. This thesis studies users' practical ability to identify phishing e-mails and which factors they look for when performing the classification. It also investigates whether subjective knowledge impacts practical ability.
Methods. A user study was conducted in which participants classified e-mails from an inbox as either phishing or legitimate. During the observation, the participants thought out loud so that the authors could follow their approach and the factors they noticed. A questionnaire was also used to capture the participants' knowledge, previous experience, and confidence in their classifications.
Results. The results show that the majority of the participants did not know which factors to look for, nor how to inspect them, in order to make a justified classification of an e-mail. Most participants made the classifications based on their gut feeling. The participants who had some theoretical knowledge showed more confidence and identified more phishing attempts.
Conclusions. This thesis concludes that the participants lacked the knowledge required to identify phishing attempts. Further, it concludes that subjective knowledge leads to higher confidence, which helps users make correct classifications. Therefore, this topic needs further attention to raise awareness, and education needs to be conducted.
35. Data Security and Privacy under the Binary Cloak. Ji, Tianxi. 26 August 2022.
No description available.
36. Private and Secure Data Communication: Information Theoretic Approach. Basciftci, Yuksel O. January 2016.
No description available.
37. Achieving Data Privacy and Security in Cloud. Huang, Xueli. January 2016.
The growing concern about the privacy of data stored in the public cloud has restrained the widespread adoption of cloud computing. The traditional method of protecting data privacy is to encrypt data before it is sent to the public cloud, but this approach always introduces heavy computation, especially for image and video data, which are much larger than text data. Another way is to take advantage of a hybrid cloud by separating sensitive data from non-sensitive data and storing them in a trusted private cloud and an untrusted public cloud, respectively. But if we adopt this method directly, all images and videos containing sensitive data have to be stored in the private cloud, which makes the method meaningless.

Moreover, the emergence of the Software-Defined Networking (SDN) paradigm, which decouples the control logic from the closed and proprietary implementations of traditional network devices, enables researchers and practitioners to design new network functions and protocols in a much easier, more flexible, and more powerful way. The data plane asks the control plane to update flow rules when it receives new network packets it does not know how to handle, and the control plane then dynamically deploys and configures flow rules according to the data plane's requests, which allows the whole network to be managed and controlled efficiently. However, this reactive control model can be exploited by attackers launching Distributed Denial-of-Service (DDoS) attacks that send a large number of new requests from the data plane to the control plane.

For image data, we divide the image into pieces of equal size to speed up the encryption process and propose two methods to break the correlations across piece edges. One is to add random noise to each piece; the other is to design a one-to-one mapping function for each piece that maps each pixel value to a different one, which cuts off the relationships between pixels as well as across edges. Our mapping function takes a random parameter as input so that each piece can randomly choose a different mapping. Finally, we shuffle the pieces with another random parameter, which makes the problem of recovering the shuffled image NP-complete.

For video data, we propose two different methods, one for intra frames (I-frames) and one for inter frames (P-frames), based on their different characteristics. For I-frames, we propose a hybrid selective video encryption scheme for H.264/AVC based on the Advanced Encryption Standard (AES) and the video data themselves. For each P-slice of a P-frame, we extract only a small part into the private cloud, based on the characteristics of the intra prediction mode, which efficiently prevents P-frames from being decoded.

For a cloud running SDN, we propose a framework to protect the controller from DDoS attacks. We first periodically predict the number of new requests for each switch based on its previous information; the new requests are sent to the controller if the predicted total number of new requests is below a threshold. Otherwise, these requests are directed to a security gateway that checks whether an attack is among them. The requests that cause a dramatic decrease in entropy are filtered out by our algorithm, and rules for these requests are made and sent to the controller. The controller sends the rules to each switch so that flows matching the rules are directed to a honeypot.
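The image-side construction described above (equal-size pieces, a per-piece one-to-one pixel-value mapping driven by a random parameter, and a final piece shuffle) can be approximated with a short numpy sketch. The block size, seed handling, and function names below are illustrative assumptions, not the thesis's exact design.

```python
# Rough sketch: split an image into equal pieces, remap pixel values inside each
# piece with its own random one-to-one mapping, then shuffle the pieces.
import numpy as np

def scramble(image: np.ndarray, block: int, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)          # the "random parameter" held by the data owner
    h, w = image.shape
    pieces = [image[r:r + block, c:c + block].copy()
              for r in range(0, h, block) for c in range(0, w, block)]
    for i, p in enumerate(pieces):
        mapping = rng.permutation(256).astype(np.uint8)   # one-to-one pixel-value mapping
        pieces[i] = mapping[p]                            # breaks pixel/edge correlations
    order = rng.permutation(len(pieces))                  # shuffle piece positions
    rows, cols = h // block, w // block
    shuffled = [pieces[j] for j in order]
    return np.block([[shuffled[r * cols + c] for c in range(cols)] for r in range(rows)])

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # stand-in 8x8 grayscale image
print(scramble(img, block=4, seed=42).shape)        # (8, 8)
```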
38. Enhancing Privacy of Training Data of Deep Neural Networks on Edge Using Trusted Execution Environments. Gowri Ramshankar. 18 April 2024.
<p dir="ltr">Deep Neural Networks (DNNs) are deployed in many applications and protecting the privacy of training data has become a major concern. Membership Inference Attacks (MIAs) occur when an unauthorized person is able to determine whether a piece of data is used in training the DNNs. This paper investigates using Trusted Execution Environments (TEEs) in modern processors to protect the privacy of training data. Running DNNs on TEE, however, encounters many challenges, including limited computing and storage resources as well as a lack of development frameworks. This paper proposes a new method to partition pre-trained DNNs so that parts of the DNNs can fit into TEE to protect data privacy. The existing software infrastructure for running DNNs on TEE requires a significant amount of human effort using C programs. However, most existing DNNs are implemented using Python. This paper presents a framework that can automate most parts of the process of porting Python-based DNNs to TEE. The proposed method is deployed in Arm TrustZone-A on Raspberry Pi 3B+ with OPTEE-OS and evaluated on popular image classification models - AlexNet, ResNet, and VGG. Experimental results show that our method can reduce the accuracy of gradient-based MIAs on AlexNet, VGG- 16, and ResNet-20 evaluated on the CIFAR-100 dataset by 17.9%, 11%, and 35.3%. On average, processing an image in the native execution environment takes 4.3 seconds, whereas in the Trusted Execution Environment (TEE), it takes about 10.1 seconds per image.<br><br></p>
39. Techniques to Secure and Monitor Client Database Applications. Daren Khaled Fadolalkarim. 23 July 2024.
<p dir="ltr">In this thesis, we aim at securing database applications in different ways. We have designed, implemented and experimentally evaluated two systems, AD-PROM and DCAFixer. AD-PROM has the goal to monitor database application while running to detect changes in applications’ behaviors at run time. DCAFixer, focus on securing database applications at the early development stages, i.e., coding and testing.</p>
40. Adversarial Attacks Against Network Intrusion Detection Systems. Sanidhya Sharma. 26 July 2024.
<p dir="ltr">The explosive growth of computer networks over the past few decades has significantly enhanced communication capabilities. However, this expansion has also attracted malicious attackers seeking to compromise and disable these networks for personal gain. Network Intrusion Detection Systems (NIDS) were developed to detect threats and alert users to potential attacks. As the types and methods of attacks have grown exponentially, NIDS have struggled to keep pace. A paradigm shift occurred when NIDS began using Machine Learning (ML) to differentiate between anomalous and normal traffic, alleviating the challenge of tracking and defending against new attacks. However, the adoption of ML-based anomaly detection in NIDS has unraveled a new avenue of exploitation due to the inherent inadequacy of machine learning models - their susceptibility to adversarial attacks.</p><p dir="ltr">In this work, we explore the application of adversarial attacks from the image domain to bypass Network Intrusion Detection Systems (NIDS). We evaluate both white-box and black-box adversarial attacks against nine popular ML-based NIDS models. Specifically, we investigate Projected Gradient Descent (PGD) attacks on two ML models, transfer attacks using adversarial examples generated by the PGD attack, the score-based Zeroth Order Optimization attack, and two boundary-based attacks, namely the Boundary and HopSkipJump attacks. Through comprehensive experiments using the NSL-KDD dataset, we find that logistic regression and multilayer perceptron models are highly vulnerable to all studied attacks, whereas decision trees, random forests, and XGBoost are moderately vulnerable to transfer attacks or PGD-assisted transfer attacks with approximately 60 to 70% attack success rate (ASR), but highly susceptible to targeted HopSkipJump or Boundary attacks with close to a 100% ASR. Moreover, SVM-linear is highly vulnerable to both transfer attacks and targeted HopSkipJump or Boundary attacks achieving around 100% ASR, whereas SVM-rbf is highly vulnerable to transfer attacks with a 77% ASR but only moderately to targeted HopSkipJump or Boundary attacks with a 52% ASR. Finally, both KNN and Label Spreading models exhibit robustness against transfer-based attacks with less than 30% ASR but are highly vulnerable to targeted HopSkipJump or Boundary attacks with a 100% ASR with a large perturbation. Our findings may provide insights for designing future NIDS that are robust against potential adversarial attacks.</p>