1

Security of Big Data: Focus on Data Leakage Prevention (DLP)

Nyarko, Richard January 2018 (has links)
Data has become an indispensable part of our daily lives in this information age. The amount of data generated is growing exponentially due to technological advances, and this daily flood of data has given rise to the term big data. Security is therefore a major concern when it comes to big data processes. The survival of many organizations depends on preventing such data from falling into the wrong hands, because the consequences can be serious: the credibility of a business is compromised when sensitive data such as trade secrets, project documents, and customer profiles are leaked to competitors (Alneyadi et al., 2016). In addition, traditional security mechanisms such as firewalls, virtual private networks (VPNs), and intrusion detection/prevention systems (IDSs/IPSs) are not enough to prevent the leakage of such sensitive data. To overcome this deficiency, a new paradigm called data leakage prevention systems (DLPSs) has been introduced. Over the past years, many research contributions have been made to address data leakage, but most of them have focused on detecting leakage rather than preventing it. This thesis contributes to research by taking the preventive approach of a DLPS and proposing a hybrid symmetric-asymmetric encryption method to prevent data leakage. The thesis follows the Design Science Research Methodology (DSRM), with CRISP-DM (CRoss Industry Standard Process for Data Mining) as the kernel theory for designing the IT artifact (a method). The proposed encryption method ensures that all confidential or sensitive documents of an organization are encrypted so that only users with access to the decryption keys can read them; this is done after the documents have been classified into confidential and non-confidential ones with a Naïve Bayes Classifier (NBC). Any organization that needs to stop data leakage before it occurs can therefore make use of the proposed hybrid encryption method.
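To make the general idea concrete, the sketch below (not taken from the thesis itself) shows the usual shape of hybrid document protection: a fresh AES key encrypts each document and an RSA public key wraps that AES key, with a trivial placeholder standing in for the Naïve Bayes confidentiality classifier. Function names and the classification rule are illustrative assumptions.

```python
# Minimal sketch of classify-then-hybrid-encrypt, using the `cryptography` package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def is_confidential(document: bytes) -> bool:
    # Placeholder for the Naive Bayes classifier described in the thesis; a real
    # system would score token frequencies against trained class statistics.
    return b"internal only" in document.lower()

def hybrid_encrypt(document: bytes, rsa_public_key):
    aes_key = AESGCM.generate_key(bit_length=256)       # fresh symmetric key per document
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, document, None)
    wrapped_key = rsa_public_key.encrypt(                # only holders of the RSA private key can unwrap
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
doc = b"Project pricing sheet - INTERNAL ONLY"
if is_confidential(doc):
    wrapped_key, nonce, ciphertext = hybrid_encrypt(doc, private_key.public_key())
```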
2

Mining Security Risks from Massive Datasets

Liu, Fang 09 August 2017 (has links)
Cyber security risk has been a problem ever since the advent of telecommunications and electronic computers. In the past 30 years, researchers have developed various tools to protect the confidentiality, integrity, and availability of data and programs. However, new challenges are emerging as the amount of data grows rapidly in the big data era. On one hand, attacks are becoming stealthier by concealing their behaviors in massive datasets. On the other hand, it is becoming more and more difficult for existing tools to handle massive datasets with various data types. This thesis presents attempts to address these challenges and solve different security problems by mining security risks from massive datasets. The attempts are in three aspects: detecting security risks in the enterprise environment, prioritizing security risks of mobile apps, and measuring the impact of security risks between websites and mobile apps. First, the thesis presents a framework to detect data leakage in very large content. The framework can be deployed in the cloud for enterprises and preserves the privacy of sensitive data. Second, the thesis prioritizes the inter-app communication risks in large-scale Android apps by designing a new distributed inter-app communication linking algorithm and performing nearest-neighbor risk analysis. Third, the thesis measures the impact of deep link hijacking risk, which is one type of inter-app communication risk, on 1 million websites and 160 thousand mobile apps. The measurement reveals the failure of Google's attempts to improve the security of deep links. / Ph. D.
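As an illustration of the first contribution's theme, the following sketch shows a generic fingerprint-based leak detector: sensitive documents are reduced to hashed word shingles, and outbound text is scored by how many of its shingles match. This is a common content-based technique, not the thesis's exact framework; all names and data are illustrative.

```python
# Toy shingle-fingerprint leak detection sketch.
import hashlib

def shingles(text: str, k: int = 8):
    words = text.lower().split()
    for i in range(max(len(words) - k + 1, 1)):
        chunk = " ".join(words[i:i + k])
        yield hashlib.sha256(chunk.encode()).hexdigest()

def build_index(sensitive_docs):
    index = set()
    for doc in sensitive_docs:
        index.update(shingles(doc))
    return index

def leak_score(outbound_text: str, index) -> float:
    hashes = list(shingles(outbound_text))
    hits = sum(1 for h in hashes if h in index)
    return hits / len(hashes) if hashes else 0.0

secret = "quarterly revenue projection for the merger is 4.2 million subject to board approval"
index = build_index([secret])
email = "fyi the revenue projection for the merger is 4.2 million subject to board approval thanks"
print(leak_score(email, index))   # a high ratio flags the outbound message for review
```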
3

Improving DLP system security / Förbättring av säkerheten av DLP system

Ghorbanian, Sara, Fryklund, Glenn January 2014 (has links)
Context. Data leakage prevention (DLP) systems are designed to prevent the leakage and loss of secret or sensitive data without affecting employees' workflow. The aim is to have a system covering every possible leakage point that exists. Even if these are covered, there are ways of hiding information, such as obfuscating a zip archive within an image file; detecting this hidden information and preventing it from leaking is a difficult task. Companies pay a great deal for these solutions and yet, as we uncover, the information is not safe. Objectives. In this thesis we evaluate four different types of DLP systems on the market today, disclose their weaknesses, and find ways of improving their security. Methods. The four DLP systems tested in this study cover agentless, agent-based, hybrid and regular-expression DLP tools. The test cases simulate potential leakage points via everyday file transfer applications and media such as USB, Skype and email. Results. We present a hypothetical solution to amend these weaknesses and improve the efficiency of today's DLP systems. In addition to these evaluations and experiments, a complementing proof-of-concept solution has been developed that can be integrated with other DLP solutions. Conclusions. We conclude that the existing DLP systems are still in need of improvement; none of the tested DLP solutions fully covered the possible leakage points that exist in the corporate world. There is a need for continued evaluation of DLP systems, of aspects and leakage points not covered in this thesis, as well as a follow-up on our suggested solution.
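The zip-inside-an-image evasion mentioned above can be reproduced, and naively detected, in a few lines. The sketch below builds such a polyglot file and checks for a ZIP local-file header after the JPEG end-of-image marker; it is a toy illustration under assumed file formats, not the detection logic of any of the evaluated DLP products.

```python
# Hide a zip archive after a JPEG and apply a naive detector a DLP agent could run.
import zipfile, io

def hide_zip_in_jpeg(jpeg_bytes: bytes, payload_name: str, payload: bytes) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(payload_name, payload)
    return jpeg_bytes + buf.getvalue()          # image still renders; zip tools still open it

def looks_like_stego_zip(data: bytes) -> bool:
    eoi = data.rfind(b"\xff\xd9")               # JPEG end-of-image marker
    return eoi != -1 and b"PK\x03\x04" in data[eoi + 2:]

# Minimal JPEG-like stub for demonstration only (a real check would parse the image).
fake_jpeg = b"\xff\xd8" + b"\x00" * 64 + b"\xff\xd9"
polyglot = hide_zip_in_jpeg(fake_jpeg, "secrets.txt", b"customer list ...")
print(looks_like_stego_zip(polyglot))           # True
print(looks_like_stego_zip(fake_jpeg))          # False
```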
4

Combating Data Leakage in the Cloud

Dlamini, Moses Thandokuhle January 2020 (has links)
The growing number of reported data leakage incidents steadily erodes the already low consumer confidence in cloud services. Hence, some organisations are still hesitant to fully trust the cloud with their confidential data. Therefore, this study raises a critical and challenging research question: How can we restore the damaged consumer confidence and improve the uptake and security of cloud services? This study makes a plausible attempt at unpacking and answering the research question in order to holistically address the data leakage problem from three fronts, i.e. conflict-aware virtual machine (VM) placement, strong authentication and digital forensic readiness. Consequently, this study investigates, designs and develops an innovative conceptual architecture that integrates conflict-aware VM placement, cutting-edge authentication and digital forensic readiness to strengthen cloud security and address the data leakage problem in the hope of eventually restoring consumer confidence in cloud services. The study proposes and presents a conflict-aware VM placement model. This model uses varying conflict tolerance levels and the constructs of a sphere of conflict and a sphere of non-conflict. These are used to provide the physical separation of VMs belonging to conflicting tenants that share the same cloud infrastructure. The model assists the cloud service provider to make informed VM placement decisions that factor in their tenants' security profile and balance it against the relevant cost constraints and risk appetite. The study also proposes and presents a strong risk-based multi-factor authentication mechanism that scales up and down, based on the threat levels or risks posed to the system. This ensures that users are authenticated using the right combination of access credentials according to the risk they pose. This also ensures end-to-end security of authentication data, both at rest and in transit, using an innovative cryptography system and steganography. Furthermore, the study proposes and presents a three-tier digital forensic process model that proactively collects and preserves digital evidence in anticipation of a lawsuit or policy-breach investigation. This model aims to reduce the time it takes to conduct an investigation in the cloud. Moreover, the three-tier digital forensic readiness process model collects all user activity in a forensically sound manner and notifies investigators of potential security incidents before they occur. The current study also evaluates the effectiveness and efficiency of the proposed solution in addressing the data leakage problem. The results of the conflict-aware VM placement model are derived from simulated and real cloud environments. In both cases, the results show that the conflict-aware VM placement model is well suited to provide the necessary physical isolation of VM instances that belong to conflicting tenants in order to prevent data leakage threats. However, this comes with a performance cost in the sense that higher conflict tolerance levels on bigger VMs take more time to be placed, compared to smaller VM instances with low conflict tolerance levels. From the risk-based multi-factor authentication point of view, the results reflect that the proposed solution is effective and to a certain extent also efficient in preventing unauthorised users, armed with legitimate credentials, from gaining access to systems that they are not authorised to access.
The results also demonstrate the uniqueness of the approach in that even minor deviations from the norm are correctly classified as anomalies. Lastly, the results reflect that the proposed 3-tier digital forensic readiness process model is effective in the collection and storage of potential digital evidence. This is done in a forensically sound manner and stands to significantly improve the turnaround time of a digital forensic investigation process. Although the classification of incidents may not be perfect, this can be improved with time and is considered part of the future work suggested by the researcher. / Thesis (PhD)--University of Pretoria, 2020. / Computer Science / PhD / Unrestricted
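A highly simplified sketch of the conflict-aware placement idea follows: a VM is placed on a host only if the number of already co-resident, conflicting tenants stays within the requesting tenant's conflict tolerance. The conflict relation, tolerance values and host names are invented for illustration and do not reproduce the thesis's model.

```python
# Greedy conflict-aware VM placement sketch with per-tenant tolerance levels.
from collections import defaultdict

conflicts = {                       # hypothetical conflict-of-interest relation
    "bank_a": {"bank_b"},
    "bank_b": {"bank_a"},
    "retailer": set(),
}
tolerance = {"bank_a": 0, "bank_b": 0, "retailer": 1}   # max conflicting co-residents allowed

hosts = defaultdict(list)           # host name -> tenants of VMs already placed there

def place_vm(tenant: str, host_names):
    for host in host_names:
        clash = sum(1 for other in hosts[host] if other in conflicts.get(tenant, set()))
        if clash <= tolerance[tenant]:
            hosts[host].append(tenant)
            return host
    return None                     # no host satisfies the tenant's isolation requirement

print(place_vm("bank_a", ["h1", "h2"]))   # h1
print(place_vm("bank_b", ["h1", "h2"]))   # h2 (h1 would co-locate it with bank_a)
print(place_vm("retailer", ["h1", "h2"])) # h1 (no conflict with bank_a)
```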
5

Privacy Issues in IoT : Privacy concerns in smart home

Mao, Congcong January 2019 (has links)
In a world of the Internet of Things, the smart home has shown great potential and momentum. A smart home is a convenient home setup where appliances and devices can be controlled automatically and remotely, from any internet-connected place in the world, using a mobile or other networked device. Smart homes have changed the way residents interact with their homes and have brought greater convenience. Although this technology also has a positive impact on saving energy and resources, privacy issues have proven to be one of the biggest obstacles to its adoption. The purpose of this thesis is to study smart home users' perceptions of smart homes and their privacy awareness and concerns. The research was conducted through interviews and followed an interpretive research paradigm and a qualitative research approach. In this study, 5 smart home owners were interviewed to investigate their reasons for purchasing IoT devices, their perceptions of smart home privacy risks, and the actions they take to protect their privacy and to manage their IoT devices and data. The results show that privacy risks exist in smart homes: consumers' private data is collected without their awareness, this collection needs to be controlled, and privacy issues have to be addressed in the near future for the smart home to be fully adopted by society.
6

Gestion du contrôle de la diffusion des données d’entreprises et politiques de contrôles d’accès / Access control policies and companies data transmission management

Bertrand, Yoann 22 March 2017 (has links)
This thesis addresses the problem of accidental data leakage within companies. Such leaks can result from the joint use of Access Control (AC) and Transmission Control (TC) policies. Moreover, using both types of policy creates several problems for the people in charge of defining and maintaining them, including the lack of genericity of existing models, the coherence between AC and TC rules, and issues of density, adaptability, interoperability and reactivity. In this thesis, we first propose a meta-model that accounts for most of the AC models used within companies. We then propose a coherent, semi-automatic generation of TC policies from existing AC policies to address the coherence problem, together with several mechanisms that tackle the density, adaptability and interoperability issues. To validate the relevance of our solution, we conducted a questionnaire-based survey among security experts and administrators; it provides information on the size of the policies they manage, the tedium of defining them, and the usefulness of the proposed functionalities for solving the aforementioned problems. Finally, we tested our proof of concept on randomly generated and real policies, taking performance and reactivity into account, thereby validating that our solution addresses the identified problems.
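The core idea of deriving transmission-control rules from an access-control policy can be illustrated with a toy example: a document may be sent from one user to another only if the receiver is already authorised to read it. The policy representation below is an assumption made for illustration, not the meta-model defined in the thesis.

```python
# Toy derivation of Transmission Control (TC) rules from an Access Control (AC) policy.
ac_policy = {                        # object -> set of subjects allowed to read it
    "salary_report.xlsx": {"alice", "hr_manager"},
    "public_brochure.pdf": {"alice", "bob", "hr_manager"},
}

def generate_tc_rules(ac):
    rules = []
    for obj, readers in ac.items():
        for sender in readers:
            for receiver in readers:
                if sender != receiver:
                    rules.append((sender, receiver, obj, "allow"))
    return rules

def may_transmit(rules, sender, receiver, obj):
    return (sender, receiver, obj, "allow") in rules

tc_rules = generate_tc_rules(ac_policy)
print(may_transmit(tc_rules, "alice", "hr_manager", "salary_report.xlsx"))  # True
print(may_transmit(tc_rules, "alice", "bob", "salary_report.xlsx"))         # False: bob cannot read it
```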
7

Gestion de la collaboration et compétition dans le crowdsourcing : une approche avec prise en compte de fuites de données via les réseaux sociaux / Managing collaboration and competition in crowdsourcing : approach that takes into account data leakage via social networks

Ben Amor, Iheb 27 November 2014 (has links)
Crowdsourcing is a practice that allows companies to call on human intelligence at large scale in order to solve problems they wish to outsource. The outsourced problems are increasingly complex and cannot be solved individually. In this thesis we propose an approach called SocialCrowd that helps improve the quality of crowdsourcing results. It consists in making participants collaborate so as to combine their problem-solving abilities and tackle ever more complex outsourced problems. The collaborative groups are put in competition, through attractive rewards, in order to obtain better solutions. Furthermore, the private data of competing groups must be protected, and we consider social networks as the channel through which such data can leak. We propose an approach based on Dijkstra's algorithm to estimate the probability that a member's private data propagates through the social network. Given the size of social networks, this computation is expensive, so a parallelisation following the MapReduce model is proposed. A classification algorithm based on this propagation computation is then proposed to group participants into collaborative and competing groups while minimising data leakage from one group to another. Since this grouping problem has combinatorial complexity, we propose classification algorithms based on combinatorial optimisation techniques such as simulated annealing and genetic algorithms. Finally, given the large number of feasible solutions, an approach based on the Soft Constraint Satisfaction Problem (SCSP) model is proposed to rank the candidate solutions.
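The Dijkstra-based estimate of propagation probability can be sketched as follows: each edge of the social graph carries a propagation probability, a path's probability is the product of its edge probabilities, and the most probable propagation path is obtained by running Dijkstra's algorithm on -log(p) edge weights. The graph below is illustrative, and the code is a plain single-machine version without the MapReduce parallelisation described in the thesis.

```python
# Most-probable propagation path via Dijkstra on -log(probability) weights.
import heapq, math

graph = {                                       # u -> list of (v, propagation probability)
    "alice": [("bob", 0.8), ("carol", 0.3)],
    "bob":   [("dave", 0.5)],
    "carol": [("dave", 0.9)],
    "dave":  [],
}

def max_propagation_probability(graph, source, target):
    dist = {source: 0.0}                        # accumulated -log probability
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return math.exp(-d)
        if d > dist.get(u, math.inf):
            continue                            # stale heap entry
        for v, p in graph[u]:
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0

print(max_propagation_probability(graph, "alice", "dave"))   # about 0.4, via alice -> bob -> dave
```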
8

A Study of Random Partitions vs. Patient-Based Partitions in Breast Cancer Tumor Detection using Convolutional Neural Networks

Ramos, Joshua N 01 March 2024 (has links) (PDF)
Breast cancer is one of the deadliest cancers for women. In the US, 1 in 8 women will be diagnosed with breast cancer within their lifetimes. Detection and diagnosis play an important role in saving lives. To this end, many classifiers with varying structures have been designed to classify breast cancer histopathological images. However, randomly partitioning data, as many previous works have done, can lead to artificially inflated accuracies and classifiers that do not generalize. Data leakage occurs when researchers assume that every image in a dataset is independent of the others, which is often not the case for medical datasets, where multiple images are taken of each patient. This work focuses on convolutional neural network binary classifiers using the BreakHis dataset. Previous works are reviewed. Classifiers from previous literature are tested with patient partitioning, where individual patients are placed in the training, testing and validation sets so that there is no overlap. A classifier that had consistently achieved 93% accuracy reached only 79% accuracy with the new patient partition. Robust data augmentation, a Sigmoid output layer and a different form of min-max normalization were utilized to achieve an accuracy of 89.38%. These improvements were shown to be effective with the architectures used. Sigmoid Model 1.1 is shown to perform well compared to much deeper architectures found in the literature.
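A minimal sketch of patient-based partitioning is shown below using scikit-learn's GroupShuffleSplit, which keeps all images of a given patient on one side of the split. The filenames and the rule for extracting patient IDs are assumptions about a BreakHis-style layout, not the exact preprocessing used in the thesis.

```python
# Patient-grouped train/test split to avoid leakage across splits.
from sklearn.model_selection import GroupShuffleSplit

image_paths = [
    "SOB_B_TA-14-4659-40-001.png",   # hypothetical BreakHis-style filenames
    "SOB_B_TA-14-4659-40-002.png",
    "SOB_M_DC-14-2773-40-001.png",
    "SOB_M_DC-14-2773-40-002.png",
    "SOB_M_DC-14-5694-40-001.png",
    "SOB_M_DC-14-5694-40-002.png",
]
labels = [0, 0, 1, 1, 1, 1]                               # 0 = benign, 1 = malignant
patient_ids = [p.split("-")[2] for p in image_paths]      # hypothetical patient-ID rule

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(image_paths, labels, groups=patient_ids))

# No patient appears on both sides of the split, so reported accuracy reflects
# generalisation to unseen patients rather than memorised slides.
assert not {patient_ids[i] for i in train_idx} & {patient_ids[i] for i in test_idx}
```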
9

Smart Grid security : protecting users' privacy in smart grid applications

Mustafa, Mustafa Asan January 2015 (has links)
Smart Grid (SG) is an electrical grid enhanced with information and communication technology capabilities, so it can support two-way electricity and communication flows among various entities in the grid. The aim of SG is to make the electricity industry operate more efficiently and to provide electricity in a more secure, reliable and sustainable manner. Automated Meter Reading (AMR) and Smart Electric Vehicle (SEV) charging are two SG applications tipped to play a major role in achieving this aim. The AMR application allows different SG entities to collect users' fine-grained metering data measured by users' Smart Meters (SMs). The SEV charging application allows EVs' charging parameters to be changed depending on the grid's state in return for incentives for the EV owners. However, both applications impose risks on users' privacy. Entities having access to users' fine-grained metering data may use such data to infer individual users' personal habits. In addition, users' private information such as users'/EVs' identities and charging locations could be exposed when EVs are charged. Entities may use such information to learn users' whereabouts, thus breaching their privacy. This thesis proposes secure and user privacy-preserving protocols to support AMR and SEV charging in an efficient, scalable and cost-effective manner. First, it investigates both applications. For AMR, (1) it specifies an extensive set of functional requirements taking into account the way liberalised electricity markets work and the interests of all SG entities, (2) it performs a comprehensive threat analysis, based on which, (3) it specifies security and privacy requirements, and (4) it proposes to divide users' data into two types: operational data (used for grid management) and accountable data (used for billing). For SEV charging, (1) it specifies two modes of charging: price-driven mode and price-control-driven mode, and (2) it analyses two use-cases: price-driven roaming SEV charging at home location and price-control-driven roaming SEV charging at home location, by performing threat analysis and specifying sets of functional, security and privacy requirements for each of the two cases. Second, it proposes a novel Decentralized, Efficient, Privacy-preserving and Selective Aggregation (DEP2SA) protocol to allow SG entities to collect users' fine-grained operational metering data while preserving users' privacy. DEP2SA uses the homomorphic Paillier cryptosystem to ensure the confidentiality of the metering data during their transit and the data aggregation process. To preserve users' privacy with minimum performance penalty, users' metering data are classified and aggregated accordingly by their respective local gateways based on the users' locations and their contracted suppliers. In this way, authorised SG entities can only receive the aggregated data of users they have contracts with. DEP2SA has been analysed in terms of security, computational and communication overheads, and the results show that it is more secure, efficient and scalable as compared with related work. Third, it proposes a novel suite of five protocols to allow (1) suppliers to collect users' accountable metering data, and (2) users (i) to access, manage and control their own metering data and (ii) to switch between electricity tariffs and suppliers, in an efficient and scalable manner.
The main ideas are: (i) each SM has a register, named the accounting register, dedicated solely to storing the user's accountable data, (ii) this register is updated by design at a low frequency, (iii) the user's supplier has unlimited access to this register, and (iv) the user can customise how often this register is updated with new data. The suite has been analysed in terms of security, computational and communication overheads. Fourth, it proposes a novel protocol, known as the Roaming Electric Vehicle Charging and Billing, Anonymous Multi-User (REVCBAMU) protocol, to support price-driven roaming SEV charging at home location. During a charging session, a roaming EV user uses a pseudonym of the EV (known only to the user's contracted supplier) which is anonymously signed by the user's private key. This protocol protects the user's identity privacy from other suppliers as well as the user's privacy of location from its own supplier. Further, it allows the user's contracted supplier to authenticate the EV and the user. Using a two-factor authentication approach, multi-user EV charging is supported and different legitimate EV users (e.g., family members) can be held accountable for their charging sessions. With each charging session, the EV uses a different pseudonym, which prevents adversaries from linking the different charging sessions of the same EV. On an application level, REVCBAMU supports fair user billing, i.e., each user pays only for his/her own energy consumption, and an open EV marketplace in which EV users can safely choose among different remote host suppliers. The protocol has been analysed in terms of security and computational overheads.
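The additive homomorphic property that DEP2SA relies on can be demonstrated with a toy Paillier implementation: the aggregating gateway multiplies ciphertexts, and only the private-key holder learns the sum of the readings, never the individual values. The parameters below are deliberately tiny and insecure; they illustrate the mechanism only and do not reproduce the protocol itself.

```python
# Minimal Paillier-style additive aggregation (toy parameters, NOT secure).
import math, random

p, q = 293, 433                      # toy primes; a real deployment uses ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                            # standard simplified generator choice
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because L(g^lam mod n^2) = lam mod n when g = n + 1

def L(x):
    return (x - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# A gateway aggregates encrypted meter readings by multiplying ciphertexts;
# only the private-key holder learns the sum, never individual readings.
readings = [12, 7, 30, 5]            # hypothetical fine-grained readings
aggregate = 1
for m in readings:
    aggregate = (aggregate * encrypt(m)) % n2
assert decrypt(aggregate) == sum(readings) % n
print(decrypt(aggregate))            # -> 54
```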
10

Monitoring a analýza uživatelů systémem DLP / Monitoring and Analysis of Users Using DLP System

Pandoščák, Michal January 2011 (has links)
The purpose of this master's thesis is to study the monitoring and analysis of users with a DLP (Data Loss Prevention) system, covering the definition of internal and external attacks, a description of the main parts of a DLP system, policy management, the monitoring of user activities, and the classification of data content. The thesis explains the difference between contextual and content analysis and describes their techniques. It presents the fundamentals of network and endpoint monitoring and describes the processes and user activities that may cause data leakage. Lastly, we developed an endpoint protection agent that monitors activities on an endpoint workstation.
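A toy version of such an endpoint agent is sketched below: it periodically rescans a watched folder and raises an alert when a changed file matches simple sensitive-content patterns. The directory, patterns and polling approach are illustrative assumptions; the agent developed in the thesis is not reproduced here.

```python
# Polling endpoint-monitoring sketch: flag changed files matching sensitive patterns.
import os, re, time

WATCHED_DIR = "/home/user/Documents"            # hypothetical monitored location
PATTERNS = [
    re.compile(rb"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),   # card-like numbers
    re.compile(rb"CONFIDENTIAL", re.IGNORECASE),
]

def scan_once(seen_mtimes):
    for entry in os.scandir(WATCHED_DIR):
        if not entry.is_file():
            continue
        mtime = entry.stat().st_mtime
        if seen_mtimes.get(entry.path) == mtime:
            continue                             # unchanged since last scan
        seen_mtimes[entry.path] = mtime
        with open(entry.path, "rb") as f:
            data = f.read()
        if any(p.search(data) for p in PATTERNS):
            print(f"[ALERT] possible sensitive content in {entry.path}")

if __name__ == "__main__":
    seen = {}
    while True:
        scan_once(seen)
        time.sleep(5)                            # polling interval in seconds
```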
