51 |
Toward privacy-preserving component certification for metal additive manufacturing. Bappy, Mahathir Mohammad, 13 August 2024.
Metal-based additive manufacturing (AM) has emerged as a cutting-edge technology for fabricating complex geometries with high precision. However, quality issues induced by process uncertainty remain the major obstacle to the wider adoption of metal AM technologies. Consequently, there is an urgent need for fast and reliable certification techniques for AM components, which can be achieved by leveraging Artificial Intelligence (AI)-enabled modeling. Developing a robust AI-enabled model is challenging because acquiring diverse, high-volume datasets is costly and time-intensive. In this context, the data-sharing attributes of Manufacturing-as-a-Service (MaaS) platforms can facilitate the collaborative development of AI-enabled certification techniques. However, sharing process data raises critical concerns about protecting users’ intellectual property and privacy, since such data contains confidential product design information. To address these challenges, the overarching goal of this research is to investigate how process data and process physics can be leveraged to develop in-situ component certification techniques for metal AM systems with a focus on data privacy. This dissertation addresses the need for novel quality-monitoring methodologies by utilizing diverse data sources derived from a range of printed samples. Specifically, the research effort focuses on 1) the use of in-situ thermal history data and ex-situ X-ray computed tomography data to develop a real-time, layer-wise anomaly detection method based on the morphological dynamics of melt pool images; 2) the development of a framework to evaluate the design-information disclosure of various thermal-history-based feature extraction methods for anomaly detection; and 3) the development of a privacy-preserving, utility-aware adaptive AM data de-identification method that takes thermal history data as input.
52 |
Web services oriented approach for privacy-preserving data sharing / Une approche orientée service pour la préservation des données confidentielles dans les compositions de services Web. Tbahriti, Salah Eddine, 03 December 2012.
While Web service composition technologies have been beneficial to the integration of a wealth of information sources and the realization of complex and personalized operations, the issue of privacy is considered by many as a major concern in services computing. Central to the development of the composition process is the exchange of sensitive and private data between all parties: Web services collecting and providing data, individuals whose data may be provided and managed by Web services, systems composing Web services to answer complex queries, and requesters. As a consequence, managing privacy between all parties of the system is far from an easy task. Our goal in this thesis is to build the foundations of an integrated framework to enhance Web service composition with privacy protection capabilities. To this aim, we first propose a formal privacy model that allows Web services to describe their privacy specifications. Our privacy model goes beyond traditional data-oriented models by dealing with privacy not only at the data level but also at the service level. Secondly, we develop a compatibility-matching algorithm to check privacy compatibility between privacy requirements and policies within a composition. Thirdly, in the case where some services in the composition are incompatible with respect to their privacy specifications, we introduce a novel approach based on a negotiation model to reach compatibility of the concerned services (i.e., the incompatible services that participate in a composition). Finally, we conduct an extensive performance study of the proposed algorithms. The techniques presented in this dissertation are implemented in the PAIRSE prototype.
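The compatibility check described above can be pictured with a small sketch. The model below is a simplification invented for illustration (the thesis's formal model covers operation-level as well as data-level constraints, which this toy omits): each provider's policy states, per datum, the set of usage purposes it allows, and the composition states the purposes it requires; a provider is compatible when every requirement is covered.

```python
# Minimal sketch of a privacy compatibility check between the
# requirements of a composition and the policies of candidate
# providers. The purpose-set model is illustrative, not PAIRSE's.

def compatible(requirements, policy):
    """Each argument maps a datum to a set of usage purposes.
    The requirements are compatible if, for every datum requested,
    the policy allows at least the requested purposes."""
    return all(
        purposes <= policy.get(datum, set())
        for datum, purposes in requirements.items()
    )

# Policies of two hypothetical provider services.
policy_a = {"email": {"contact"}, "dob": {"verification"}}
policy_b = {"email": {"contact", "marketing"}}

# What the composition needs from a provider.
needs = {"email": {"contact"}, "dob": {"verification"}}

print(compatible(needs, policy_a))  # True  -> service A can join
print(compatible(needs, policy_b))  # False -> would trigger negotiation
```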
53 |
Préservation de la confidentialité des données externalisées dans le traitement des requêtes top-k / Privacy preserving top-k query processing over outsourced data. Mahboubi, Sakina, 21 November 2018.
Outsourcing corporate or individual data at a cloud provider, e.g. using Database-as-a-Service, is practical and cost-effective. But it introduces a major problem: how to preserve the privacy of the outsourced data while supporting powerful user queries. A simple solution is to encrypt the data before it is outsourced. Then, to answer a query, the user client can retrieve the encrypted data from the cloud, decrypt it, and evaluate the query over the plaintext (non-encrypted) data. This solution is not practical, as it does not take advantage of the computing power provided by the cloud for evaluating queries. In this thesis, we consider an important kind of queries, top-k queries, and address the problem of privacy-preserving top-k query processing over encrypted data in the cloud. A top-k query allows the user to specify a number k, and the system returns the k tuples which are most relevant to the query. The relevance degree of tuples to the query is determined by a scoring function. We first propose a complete system, called BuckTop, that is able to efficiently evaluate top-k queries over encrypted data, without having to decrypt it in the cloud. BuckTop includes a top-k query processing algorithm that works on the encrypted data, stored at one cloud node, and returns a set that is proved to contain the encrypted data corresponding to the top-k results. It also comes with an efficient filtering algorithm that is executed in the cloud on encrypted data and removes most of the false positives included in the returned set. When the outsourced data is big, it is typically partitioned over multiple nodes in a distributed system. For this case, we propose two new systems, called SDB-TOPK and SD-TOPK, that can evaluate top-k queries over encrypted distributed data without having to decrypt it at the nodes where it is stored. In addition, SDB-TOPK and SD-TOPK have a powerful filtering algorithm that filters out false positives as much as possible at the nodes and returns a small set of encrypted data to be decrypted on the user side. We analyze the security of our systems and propose efficient strategies to enforce it. We validated our solutions through implementations of BuckTop, SDB-TOPK, and SD-TOPK, and compared them to baseline approaches over synthetic and real databases. The results show excellent response time compared to the baseline approaches. They also show the efficiency of our filtering algorithms, which eliminate almost all false positives. Furthermore, our systems yield a significant reduction in communication cost between the distributed system nodes when computing the query result.
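As a concrete picture of the query model itself (not of BuckTop's encrypted-domain algorithm), the sketch below evaluates a plaintext top-k query with a linear scoring function; the relation and the attribute weights are invented for illustration.

```python
import heapq

def top_k(tuples, score, k):
    """Return the k tuples with the highest scores."""
    return heapq.nlargest(k, tuples, key=score)

# Hypothetical relation: (hotel, price, rating).
hotels = [("H1", 120, 4.2), ("H2", 80, 3.9),
          ("H3", 200, 4.8), ("H4", 95, 4.5)]

# Linear scoring function: prefer high rating, penalize price.
def score(t):
    _, price, rating = t
    return 2.0 * rating - 0.01 * price

print(top_k(hotels, score, k=2))  # the 2 most relevant tuples
```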
54 |
Efficient Packet-Drop Thwarting and User-Privacy Preserving Protocols for Multi-hop Wireless Networks. Mahmoud, Mohamed Mohamed Elsalih Abdelsalam, 08 April 2011.
In multi-hop wireless networks (MWNs), mobile nodes relay one another’s packets, enabling new applications and enhancing network deployment and performance. However, selfish nodes drop packets because relaying consumes their resources without benefit, and malicious nodes drop packets to launch Denial-of-Service attacks. Packet-drop attacks degrade network fairness and performance in terms of throughput, delay, and packet delivery ratio. Moreover, due to the nature of wireless transmission and multi-hop packet relay, attackers can analyze the network traffic in an undetectable way to learn the users’ locations (in number of hops) and their communication activities, posing a serious threat to the users’ privacy. In this thesis, we propose efficient security protocols for thwarting packet-drop attacks and preserving users’ privacy in multi-hop wireless networks.
First, we design a fair and efficient cooperation incentive protocol to stimulate selfish nodes to relay others’ packets. The source and destination nodes pay credits (micropayments) to the intermediate nodes for relaying their packets. In addition to stimulating cooperation, the incentive protocol enforces fairness by rewarding credits that compensate nodes for the resources consumed in relaying others’ packets. The protocol also discourages Resource-Exhaustion attacks, in which bogus packets are sent to exhaust the intermediate nodes’ resources, because senders pay for the relay of their packets.
For a fair charging policy, both the source and destination nodes are charged when both benefit from the communication. Since micropayment protocols were originally proposed for web-based applications, we propose a practical payment model specifically designed for MWNs that accounts for the significant differences between web-based applications and cooperation stimulation. Although the non-repudiation property of public-key cryptography is essential for securing the incentive protocol, public-key cryptography requires complex computations and produces long signature tags. For an efficient implementation, we use public-key cryptography only for the first packet in a series and efficient hashing operations for the subsequent packets, so that the overhead of a packet series converges to that of the hashing operations. Since a trusted party is not involved in the communication sessions, nodes would usually submit undeniable digital receipts (proofs of packet relay) to a centralized trusted party to update their credit accounts. Instead of submitting large payment receipts, our nodes submit brief reports containing the alleged charges and rewards, and store undeniable security evidence. The payment of fair reports can be cleared with almost no processing overhead; for cheating reports, the evidence is requested in order to identify and evict the cheating nodes. Since cheating is exceptional, the proposed protocol significantly reduces the bandwidth and energy required for submitting payment data and clears the payment with almost no processing overhead, while achieving the same security strength as receipt-based protocols.
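The "one signature, then hashes" idea can be pictured as a standard hash chain; the sketch below is a generic illustration, not the thesis's exact packet or receipt format. The source signs only the chain anchor with its public key, then releases preimages one per packet, and any node can verify a packet by hashing its value back to the signed anchor.

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """chain[i] = H^i(seed); chain[n] is the anchor that gets the
    single public-key signature. Preimages are released in reverse
    order, one per packet."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

chain = make_chain(b"secret-seed", n=5)
anchor = chain[-1]  # only this value is signed with the public key

def verify(value: bytes, i: int, anchor: bytes) -> bool:
    """Hashing the released value i more times must reproduce the
    signed anchor; this costs only hashing, not signature checks."""
    for _ in range(i):
        value = H(value)
    return value == anchor

print(verify(chain[2], 3, anchor))  # True: H^3(chain[2]) == chain[5]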
Second, the payment reports are processed to extract financial information, used to reward the cooperative nodes, and contextual information, such as broken links, used to build a trust system that measures the nodes’ packet-relay success ratios as trust values. A node’s trust value is degraded whenever it fails to relay a packet and improved whenever it relays one; a node is identified as malicious and excluded from the network once its trust value falls below a threshold. A trust system is necessary to keep track of the nodes’ long-term behavior, because packets may be dropped legitimately, e.g., due to mobility, or temporarily, e.g., due to network congestion, whereas a high frequency of packet drops is clear misbehavior. We then propose a trust-based, energy-aware routing protocol that routes traffic through highly trusted nodes with sufficient residual energy, in order to establish stable routes and thus minimize the probability of route breakage. A node’s trust value is a live measurement of its failure probability and mobility level; low-mobility nodes with large hardware resources can relay packets more reliably. In this way, the proposed protocol stimulates the nodes not only to cooperate but also to improve their packet-relay success ratio and to report their residual energy truthfully, so as to improve their trust values and thus raise their chances of participating in future routes.
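A minimal sketch of the trust bookkeeping just described, with invented update weights and eviction threshold (the thesis derives trust from processed payment reports; here relay outcomes are fed in directly): trust rises on each successful relay, falls on each drop, and a node is evicted once it crosses the threshold.

```python
class TrustSystem:
    """Toy trust table: values in [0, 1], EWMA-style updates.
    The weight alpha and the threshold are illustrative."""

    def __init__(self, alpha=0.1, evict_below=0.3):
        self.alpha = alpha
        self.evict_below = evict_below
        self.trust = {}      # node id -> trust value
        self.evicted = set()

    def report(self, node, relayed: bool):
        t = self.trust.get(node, 0.5)          # neutral initial trust
        outcome = 1.0 if relayed else 0.0
        t = (1 - self.alpha) * t + self.alpha * outcome
        self.trust[node] = t
        if t < self.evict_below:
            self.evicted.add(node)             # identified as malicious

ts = TrustSystem()
for _ in range(20):
    ts.report("n1", relayed=True)    # cooperative node
    ts.report("n2", relayed=False)   # persistent dropper
print(round(ts.trust["n1"], 2), round(ts.trust["n2"], 2), ts.evicted)
```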
Finally, we propose a privacy-preserving routing and incentive protocol for hybrid ad hoc wireless networks. Micropayment is used to stimulate the nodes’ cooperation without submitting payment receipts, and we use only lightweight hashing and symmetric-key cryptography operations to preserve the users’ privacy. The nodes’ pseudonyms are computed efficiently using hashing operations, and only trusted parties can link these pseudonyms to real identities for charging and rewarding operations. Moreover, our protocol protects the location privacy of the anonymous source and destination nodes.
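The hash-based pseudonyms can be sketched as keyed hashes over a per-node secret and a changing epoch; only a party holding the secret (here, the trusted party) can recompute and link them. The construction below is a generic illustration, not the thesis's exact scheme.

```python
import hmac, hashlib

def pseudonym(node_secret: bytes, epoch: int) -> str:
    """Per-epoch identifier, unlinkable to outsiders; recomputable
    only by a party that knows node_secret."""
    return hmac.new(node_secret, str(epoch).encode(),
                    hashlib.sha256).hexdigest()[:16]

secret = b"node-42-shared-secret"
print(pseudonym(secret, 1))  # identifier used in epoch 1
print(pseudonym(secret, 2))  # looks unrelated to the epoch-1 one
```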
Extensive analysis and simulations demonstrate that our protocols secure the payment and trust calculation, preserve the users’ privacy with acceptable overhead, and precisely identify malicious and cheating nodes. The simulation and measurement results also demonstrate that our routing protocols significantly improve route stability, and thus the packet delivery ratio, by stimulating the selfish nodes’ cooperation, evicting the malicious nodes, and making informed route-selection decisions. In addition, the overhead of processing and submitting payment reports is negligible compared with that of the receipts in receipt-based incentive protocols, and our protocol’s overhead is likewise far lower than that of signature-based protocols, because lightweight hashing operations dominate the nodes’ computations.
55 |
Geometric Methods for Mining Large and Possibly Private Datasets. Chen, Keke, 07 July 2006.
With the wide deployment of data-intensive Internet applications and continued advances in sensing technology and biotechnology, large multidimensional datasets, possibly containing privacy-sensitive information, have been emerging. Mining such datasets has become increasingly common in business integration, large-scale scientific data analysis, and national security. The proposed research explores the geometric properties of the multidimensional datasets utilized in statistical learning and data mining, and provides novel techniques and frameworks for mining very large datasets while protecting the desired data privacy.
The first main contribution of this research is the development of iVIBRATE, an interactive visualization-based approach for clustering very large datasets. The iVIBRATE framework uniquely addresses the challenges of handling irregularly shaped clusters, domain-specific cluster definitions, and cluster-labeling of data on disk. It consists of the VISTA visual cluster rendering subsystem and the Adaptive ClusterMap Labeling subsystem.
The second main contribution is the development of the ‘Best K Plot’ (BKPlot) method for determining the critical clustering structures in multidimensional categorical data. The BKPlot method uniquely addresses two challenges in clustering categorical data: how to determine the number of clusters (the best K) and how to identify the existence of significant clustering structures. The method consists of the basic theory, the sample BKPlot theory for large datasets, and a testing method for identifying no-cluster datasets.
The third main contribution of this research is the development of the theory of geometric data perturbation and its application to privacy-preserving data classification involving a single party or multiparty collaboration. The key to geometric data perturbation is finding a well-chosen randomly generated rotation matrix and an appropriate noise component that together provide a satisfactory balance between privacy guarantee and data quality, taking possible inference attacks into account. When geometric perturbation is applied to collaborative multiparty data classification, it is challenging to unify the different geometric perturbations used by the different parties. We study three protocols under a data-mining-service-oriented framework for unifying the perturbations: 1) the threshold-satisfied voting protocol, 2) the space adaptation protocol, and 3) the space adaptation protocol with a trusted party. The tradeoffs between privacy guarantee, model accuracy, and cost are studied for these protocols.
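A minimal sketch of the core transformation (a random rotation followed by additive noise) is given below; choosing the rotation and noise level against inference attacks, and unifying perturbations across parties, are the thesis's actual subject and are not addressed here. The noise scale is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d):
    """Random d x d orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(d, d)))
    return q * np.sign(np.diag(r))   # sign fix for a uniform distribution

def perturb(X, sigma=0.05):
    """Rotate the d-dimensional rows of X, then add Gaussian noise."""
    R = random_rotation(X.shape[1])
    return X @ R.T + rng.normal(scale=sigma, size=X.shape)

X = rng.normal(size=(100, 3))        # toy 3-D dataset
Xp = perturb(X)

# Rotations preserve pairwise Euclidean distances (up to the noise),
# which is why many classifiers retain their accuracy on Xp.
print(np.allclose(np.linalg.norm(X[0] - X[1]),
                  np.linalg.norm(Xp[0] - Xp[1]), atol=0.5))
```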
56 |
Novel frequent itemset hiding techniques and their evaluation / Σύγχρονες μέθοδοι τεχνικών απόκρυψης συχνών στοιχειοσυνόλων και αξιολόγησή τους. Καγκλής, Βασίλειος, 20 May 2015.
Advances in data collection and storage technologies have paved the way for the establishment of transactional databases in companies and organizations, as they allow enormous volumes of data to be stored efficiently. Most of the time, these vast amounts of data cannot be used as they are; the data must first be processed to extract useful knowledge. Once mined, this knowledge can be used in several ways, depending on the nature of the data.
Quite often, companies and organizations are willing to share data for mutual benefit. However, these benefits come with several risks, as privacy problems may arise as a result of the sharing. Sensitive data, along with sensitive knowledge inferred from such data, must be protected from unintentional exposure to unauthorized parties. One form of inferred knowledge is frequent patterns, discovered while mining frequent itemsets from transactional databases. The problem of protecting such patterns is known as the frequent itemset hiding problem.
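To make the hiding goal concrete, the sketch below shows support counting and a deliberately naive sanitization step: it lowers a sensitive itemset's support below the mining threshold by deleting one of its items from supporting transactions. The distortion this causes to non-sensitive itemsets is exactly what the heuristic and linear-programming methods studied in this thesis try to minimize; the victim-selection rule here is arbitrary.

```python
def support(db, itemset):
    """Number of transactions containing every item of itemset."""
    return sum(1 for t in db if itemset <= t)

def hide(db, sensitive, min_support):
    """Naive hiding: delete one item of the sensitive itemset from
    supporting transactions until its support drops below threshold.
    Real techniques choose *which* transactions and items to alter
    so as to minimize side effects on non-sensitive itemsets."""
    victim = next(iter(sensitive))
    for t in db:
        if support(db, sensitive) < min_support:
            break
        if sensitive <= t:
            t.discard(victim)
    return db

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "d"}, {"b", "c"}]
print(support(db, {"a", "b"}))       # 3: frequent if the threshold is 2
hide(db, {"a", "b"}, min_support=2)
print(support(db, {"a", "b"}))       # now below the threshold
```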
In this thesis, we review several techniques for protecting sensitive frequent patterns in the form of frequent itemsets. After presenting a wide variety of techniques in detail, we propose a novel approach to solving this problem: a method that combines heuristics with linear programming. We evaluate the proposed method on real datasets using a number of performance metrics, and compare its results with those of other state-of-the-art approaches.
57 |
CONTEXT AWARE PRIVACY PRESERVING CLUSTERING AND CLASSIFICATION. Thapa, Nirmal, 01 January 2013.
Data are valuable assets to any organization or individual, and a source of useful information that plays a large part in decision making. All sectors stand to benefit from such information; commerce, health, and research are some of the fields that have benefited from data. On the other hand, the availability of data makes it easy for anyone to exploit it, and in many cases the data are private and confidential, so their confidentiality must be preserved. We study two categories of privacy: data value hiding and data pattern hiding. Privacy is a major concern, but so is data utility: data should avoid privacy breaches yet remain usable. Although these two objectives are contradictory and achieving both at the same time is challenging, knowing the purpose for which the data will be used, and the manner in which they will be utilized, helps. In this research, we focus on particular situations in clustering and classification problems and strive to balance the utility and privacy of the data.
In the first part of this dissertation, we propose Nonnegative Matrix Factorization (NMF)-based techniques that incorporate explicitly defined constraints into the update rules. These constraints determine how the factorization takes place, steering it toward the desired results; the methods are designed to alter the matrices so that user-specified cluster properties are introduced, and they can be used to preserve data values as well as data patterns. Since NMF and K-means have been proven equivalent, NMF is an ideal choice for pattern hiding in clustering problems. In addition to the NMF-based methods, we propose methods that take into account the data structure and attribute properties for classification problems. We separate this work into two parts, linear classifiers and nonlinear classifiers, and propose a different solution for each. We also study the effect of distortion on the utility of data.
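For reference, here is a bare version of the multiplicative NMF updates that such techniques extend; the constraint terms themselves are the dissertation's contribution and are not reproduced, so this is only the unconstrained baseline.

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """Plain Lee-Seung multiplicative updates for X ~ W @ H, with
    X, W, H all nonnegative. Constraint-augmented variants modify
    these update rules to steer the factors (and hence the implied
    clustering) toward user-specified properties."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, H fixed
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(20, 10)))
W, H = nmf(X, k=3)
print(np.linalg.norm(X - W @ H))  # reconstruction error
```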
We propose three distortion measurement metrics that demonstrate better characteristics than the traditional metrics. The effectiveness of the measures is examined on different benchmark datasets; the results show that they have desirable properties such as invariance to translation, rotation, and scaling.
59 |
Kryptografické protokoly pro ochranu soukromí / Cryptographic protocols for privacy protection. Hanzlíček, Martin, January 2018.
This work focuses on cryptographic protocols for privacy protection, addressing the use of elliptic curves in cryptography in conjunction with authentication protocols. Its outputs are two applications: the first serves as the user's credential, replacing an ID card, while the second serves as a terminal that authenticates users. Both applications are designed for the Android operating system and are used to select user attributes, confirm registration, verify the user, and display the result of the verification.
60 |
Interactive mapping specification and repairing in the presence of policy views / Spécification et réparation interactive de mappings en présence de polices de sécurité. Comignani, Ugo, 19 September 2019.
Data exchange between sources over heterogeneous schemas is an ever-growing field of study, with the increased availability of data, often in open access, and the pooling of such data for data mining or learning purposes. However, describing the data exchange process from a source instance to a target instance defined over a different schema is a cumbersome task, even for users acquainted with data exchange. In this thesis, we address the problem of allowing a non-expert user to specify a source-to-target mapping, and the problem of ensuring that the specified mapping does not leak information forbidden by the security policies defined over the source. To do so, we first provide an interactive process in which users provide small examples of their data and answer simple boolean questions in order to specify their intended mapping. We then provide a second process to rewrite this mapping so as to ensure its safety with respect to the source policy views. The first main contribution of this thesis is thus a formal definition of the problem of interactive mapping specification, together with a formal resolution process for which desirable properties are proved. Based on this formal resolution process, practical algorithms are provided. The approach behind these algorithms aims to reduce the number of boolean questions users have to answer by using quasi-lattice structures to order the set of possible mappings, allowing efficient pruning of the space of explored mappings. To improve this pruning, an extension of the approach to the use of integrity constraints is also provided. The second main contribution is a repairing process that ensures a mapping is "safe" with respect to a set of policy views defined on its source schema, i.e., that it does not leak sensitive information. A privacy-preservation protocol is provided to visualize the information leaks of a mapping, together with a process to rewrite an input mapping into one that is safe with respect to a set of policy views. As with the first contribution, this process comes with proofs of desirable properties. To reduce the number of interactions needed with the user, the interactive part of the repairing process is also enriched with the ability to learn which rewritings users prefer, in order to obtain a completely automatic process. Finally, all of the algorithms described in this thesis have been prototyped, and the experiments conducted on these prototypes are presented.
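The question-driven pruning can be illustrated with a deliberately small model (the thesis works over full tuple-generating mappings ordered in quasi-lattices; here candidates are just sets of exported attributes, and the "user" is simulated): each boolean answer eliminates every candidate mapping that disagrees with it, so only a few questions are needed.

```python
# Toy illustration of interactive pruning: candidate "mappings" are
# sets of source attributes to export; each yes/no answer removes all
# candidates that disagree with it.

from itertools import combinations

attrs = ["name", "email", "dob"]
candidates = [set(c) for r in range(len(attrs) + 1)
              for c in combinations(attrs, r)]   # 8 candidates

def ask(attribute):
    """Stand-in for the user's boolean answer; here the intended
    mapping exports name and email but not dob."""
    return attribute in {"name", "email"}

for a in attrs:
    if len(candidates) == 1:
        break
    wanted = ask(a)
    candidates = [c for c in candidates if (a in c) == wanted]

print(candidates)  # [{'name', 'email'}] after at most 3 questions
```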