21 |
Conditional Privacy-Preserving Authentication Protocols for Vehicular Ad Hoc Networks
Li, Jiliang, 17 May 2019 (has links)
No description available.
|
22 |
Extending Complex Event Processing for Advanced Applications
Wang, Di, 30 April 2013 (has links)
Recently, numerous emerging applications, ranging from on-line financial transactions, RFID-based supply chain management and traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the need to process event data streams in real time, Complex Event Processing (CEP) technology has been developed, with a focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications which require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services.

First, existing CEP technology is limited in its capability of reacting to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries during high-volume stream execution. We hence introduce a novel stream-oriented transactional model along with a family of stream transaction scheduling algorithms that ensure the correctness of concurrent stream execution. We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream transaction scheduling algorithms extensively using real-world workloads.

Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns, while ensuring that nonsensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation. We then design a suite of real-time solutions that eliminate private pattern matches while maximizing the overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern-match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet still efficient enough to offer near-real-time system responsiveness.

Third, we observe that in many real-world object monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event.
Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we elaborate how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events. More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
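As a rough illustration of the inference step described above, the sketch below implements the standard forward-backward algorithm for a simple hidden-state model: given a prior over objects, a transition matrix, and per-event emission likelihoods, it returns a posterior identity distribution for each non-ID-ed event. The fixed transition matrix and the numbers are assumptions for illustration; the thesis's time-varying graphical model and its optimization strategies are not reproduced here.

```python
def forward_backward(init, trans, emissions):
    """init[i]: prior of object i; trans[i][j]: P(object j at t+1 | object i at t);
    emissions[t][i]: likelihood that event t was produced by object i."""
    n, T = len(init), len(emissions)
    # Forward pass: alpha[t][i] is proportional to P(events 0..t, object i at t).
    alpha = [[init[i] * emissions[0][i] for i in range(n)]]
    for t in range(1, T):
        alpha.append([emissions[t][j] * sum(alpha[t - 1][i] * trans[i][j] for i in range(n))
                      for j in range(n)])
    # Backward pass: beta[t][i] is proportional to P(events t+1..T-1 | object i at t).
    beta = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(trans[i][j] * emissions[t + 1][j] * beta[t + 1][j] for j in range(n))
                   for i in range(n)]
    # Combine both passes and normalize into a per-event posterior over identities.
    posteriors = []
    for t in range(T):
        w = [alpha[t][i] * beta[t][i] for i in range(n)]
        z = sum(w)
        posteriors.append([x / z for x in w])
    return posteriors

# Two candidate objects, three non-ID-ed events with illustrative likelihoods.
init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emissions = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
for t, p in enumerate(forward_backward(init, trans, emissions)):
    print(t, [round(x, 3) for x in p])
```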
|
23 |
A Study on Improving Efficiency of Privacy-Preserving Utility Mining
Wong, Jia-Wei, 11 September 2012 (has links)
Utility mining algorithms have recently been proposed to discover high utility itemsets from a quantitative database. Factors such as profits or prices are considered when measuring the utility values of purchased items, revealing more useful knowledge to managers. Nearly all existing algorithms extract high utility itemsets in a batch manner. In real-world applications, however, transactions may be inserted, deleted or modified in a database. The batch mining procedure then requires considerable computational time to rescan the whole updated database in order to keep the discovered knowledge up to date. In the first part of this thesis, two algorithms are proposed, for data insertion and data deletion respectively, to efficiently update the discovered high utility itemsets based on the pre-large concept. The proposed algorithms first partition itemsets into three parts, with nine cases in total, according to whether their transaction-weighted utilization is large (high), pre-large or small in the original database. Each part is then handled by its own procedure to maintain and update the discovered high utility itemsets. Based on the pre-large concept, the original database needs to be rescanned for far fewer itemsets during the maintenance of high utility itemsets, as the sketch below illustrates.
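A minimal sketch of the two-threshold bookkeeping behind the general pre-large concept, assuming illustrative thresholds and function names (none of this is the thesis's actual code): an itemset is classified as large, pre-large or small in the original database and again in the changed transactions, yielding one of the nine cases, each of which is handled by its own maintenance procedure.

```python
# Hedged sketch: classify an itemset's transaction-weighted utilization ratio
# against an upper and a lower threshold, as in the pre-large framework.
def category(ratio, lower, upper):
    if ratio >= upper:
        return "large"
    if ratio >= lower:
        return "pre-large"
    return "small"

def maintenance_case(orig_ratio, new_ratio, lower=0.3, upper=0.5):
    """Return the case number (1-9) for an itemset, given its utilization
    ratio in the original database and in the inserted/deleted transactions.
    Thresholds here are illustrative assumptions."""
    levels = ["large", "pre-large", "small"]
    row = levels.index(category(orig_ratio, lower, upper))
    col = levels.index(category(new_ratio, lower, upper))
    return 3 * row + col + 1

# Example: large in the original database but small in the new transactions
# (case 3) -- typically handled arithmetically, without a full database rescan.
print(maintenance_case(0.6, 0.1))
```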
In addition, the risk of privacy threats usually arises during data collection and data dissemination. Sensitive or personal information must be kept private before the data are shared or published. Privacy-preserving utility mining (PPUM) has thus become an important issue in recent years. In the second part of this thesis, two evolutionary privacy-preserving utility mining algorithms are proposed to hide sensitive high utility itemsets through data sanitization, by inserting dummy transactions and by deleting transactions, respectively. The two evolutionary algorithms find appropriate transactions for insertion and deletion in the data-sanitization process. They adopt a flexible evaluation function with three factors, to which different weights are assigned depending on users' preferences. The maintenance algorithms proposed in the first part of this thesis are also used in the GA-based approach to reduce the cost of rescanning databases, thus speeding up the evaluation of chromosomes. Experiments are conducted as well to evaluate the performance of the proposed algorithms.
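As a hedged sketch of such a weighted evaluation function, the snippet below scores a candidate sanitization by three factors. The concrete factors chosen here (hiding failures, missing itemsets, artificial itemsets) are an assumption drawn from the broader PPUM literature, not from the thesis text; lower scores are better.

```python
# Hedged sketch of a three-factor, weighted fitness function for ranking
# candidate sanitizations in a GA. Factor names are illustrative assumptions.
def fitness(weights, sensitive, high_utility_before, high_utility_after):
    w1, w2, w3 = weights                        # user-chosen preference weights
    hf = len(sensitive & high_utility_after)    # hiding failures: still exposed
    mi = len((high_utility_before - sensitive) - high_utility_after)  # lost itemsets
    ai = len(high_utility_after - high_utility_before)  # spurious new itemsets
    return w1 * hf + w2 * mi + w3 * ai          # lower is better

# Example: one non-sensitive itemset is lost and one artificial one appears.
before = {frozenset("ab"), frozenset("bc"), frozenset("cd")}
after = {frozenset("bc"), frozenset("de")}
print(fitness((0.6, 0.3, 0.1), {frozenset("ab")}, before, after))  # 0.4
```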
|
24 |
A distributed approach for privacy preservation in the publication of trajectory data
Felipe Timbó Brito, 17 December 2015 (has links)
Advancements in mobile computing techniques, along with the pervasiveness of location-based services, have generated a great amount of trajectory data. These data can be used for various data analysis purposes such as traffic flow analysis, infrastructure planning, understanding of human behavior, etc. However, publishing this amount of trajectory data may lead to serious risks of privacy breach. Quasi-identifiers are trajectory points that can be linked to external information and be used to identify individuals associated with trajectories. Therefore, by analyzing quasi-identifiers, a malicious user may be able to trace anonymous trajectories back to individuals with the aid of location-aware social networking applications, for example. Most existing trajectory data anonymization approaches were proposed for centralized computing environments, so they usually present poor performance when anonymizing large trajectory data sets. In this work we propose a distributed and efficient strategy that adopts the k^m-anonymity privacy model and uses the scalable MapReduce paradigm, which allows finding quasi-identifiers in larger amounts of data. We also present a technique to minimize the loss of information by selecting key locations from the quasi-identifiers to be suppressed. Experimental evaluation results demonstrate that our proposed approach for trajectory data anonymization is more scalable and efficient than existing works in the literature.
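The single-process sketch below illustrates the kind of counting such MapReduce jobs perform under k^m-anonymity: the map phase emits every combination of up to m points per trajectory, the reduce phase sums support counts, and combinations supported by fewer than k trajectories are flagged as quasi-identifiers. All names and parameters are illustrative assumptions, not the work's actual implementation.

```python
# Hedged, single-process emulation of a MapReduce quasi-identifier search.
from itertools import combinations
from collections import Counter

def map_phase(trajectory, m):
    # One trajectory -> all its point subsets of size <= m (candidate QIDs).
    for size in range(1, m + 1):
        for combo in combinations(sorted(set(trajectory)), size):
            yield combo, 1

def find_quasi_identifiers(trajectories, k, m):
    counts = Counter()
    for traj in trajectories:            # distributed over mappers in MapReduce
        for key, one in map_phase(traj, m):
            counts[key] += one           # reducers sum counts per combination
    # Combinations seen in fewer than k trajectories violate k^m-anonymity.
    return [combo for combo, c in counts.items() if c < k]

# Example: with k=2 and m=2, ('clinic',) and ('clinic', 'home') are among the
# flagged quasi-identifiers, since only one trajectory supports them.
trajs = [['home', 'mall', 'clinic'], ['home', 'mall'], ['mall', 'park']]
print(find_quasi_identifiers(trajs, k=2, m=2))
```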
|
25 |
Privacy Preserving Implementation in the E-health System
Chen, Zongzhe, January 2013 (has links)
E-health systems are widely used in today's world, and have a still brighter future with the rapid development of smart phones. A few years ago, e-health systems could only be used on computers. Recently, however, people have begun using them as phone applications, so that they can get information at any time and anywhere. In addition, some smart phones can already measure heart rate and blood pressure, for example through apps such as "Instant Heart Rate" and "Blood Pressure Monitor". By using these kinds of applications, users can easily measure their health data and store them on their mobile phones.

However, the problem of privacy has been attracting people's attention. After uploading their data to the database, users have the right to protect their privacy. For instance, even if the doctor has the authority to obtain the health record, the user's name can be hidden so that the doctor does not know who the owner of the data is. This problem also involves anonymization, pseudonymity, unlinkability, unobservability and many other aspects.

In this thesis work, an Android application is proposed to solve this problem. Users can set their own rules, and all data requests are handled by evaluating these rules. In addition, a module in the server is developed to carry out the whole privacy-preserving process, and the users' data are stored in the database.

A standard for users to set rules is defined, which is both dynamic and flexible. The application also implements rule-checking functions to determine whether users have set a valid rule. Privacy rules can be created, deleted, or uploaded. In addition, users can update their health records and upload them to the database. The server calls different protocols to deal with different requests, and data requests are answered according to the users' own privacy rules.
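A minimal sketch of what such a user-defined privacy rule might look like; the rule schema, field names, and checking logic below are illustrative assumptions, not the application's actual rule format.

```python
# Hedged sketch: a rule states which requester role may see which fields and
# whether the owner's identity must be hidden before release.
rule = {
    "requester_role": "doctor",          # who this rule applies to
    "allowed_fields": ["heart_rate", "blood_pressure"],
    "hide_identity": True,               # strip the owner's name before release
}

def apply_rule(record, rule, requester_role):
    if requester_role != rule["requester_role"]:
        return None                      # no matching rule: deny the request
    released = {f: record[f] for f in rule["allowed_fields"] if f in record}
    if not rule["hide_identity"]:
        released["name"] = record.get("name")
    return released

record = {"name": "Alice", "heart_rate": 72, "blood_pressure": "120/80"}
print(apply_rule(record, rule, "doctor"))  # vitals released, identity withheld
```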
|
26 |
Towards a Privacy Preserving Framework for Publishing Longitudinal Data
Sehatkar, Morvarid, January 2014 (has links)
Recent advances in information technology have enabled public organizations and corporations to collect and store huge amounts of individuals' data in data repositories. Such data are powerful sources of information about an individual's life, such as interests, activities, and finances. Corporations can employ data mining and knowledge discovery techniques to extract useful knowledge and interesting patterns from large repositories of individuals' data. The extracted knowledge can be exploited to improve strategic decision making, enhance business performance, and improve services. However, person-specific data often contain sensitive information about individuals, and publishing such data poses potential privacy risks. To deal with these privacy issues, data must be anonymized so that no sensitive information about individuals can be disclosed from the published data, while distortion is minimized to ensure the usefulness of the data in practice. In this thesis, we address privacy concerns in publishing longitudinal data. A data set is longitudinal if it contains information about the same observation or event for individuals, collected at several points in time. For instance, a data set of patients' multiple visits to a hospital over a period of time is longitudinal. Due to temporal correlations among the events of each record, the potential background knowledge of adversaries about an individual has specific characteristics in the context of longitudinal data. None of the previous anonymization techniques can effectively protect longitudinal data against an adversary with such knowledge. In this thesis we identify the potential privacy threats to longitudinal data and propose a novel framework of anonymization algorithms that protects individuals' privacy against both identity disclosure and attribute disclosure while preserving data utility. In particular, we propose two privacy models, (K,C)^P-privacy and (K,C)-privacy, and for each of these models we propose efficient algorithms for anonymizing longitudinal data. An extensive experimental study demonstrates that our proposed framework can effectively and efficiently anonymize longitudinal data.
|
27 |
A Framework for anonymous background data delivery and feedback
Timchenko, Maxim, 28 October 2015 (has links)
The industry's current methods of collecting background data reflecting diagnostic and usage information are often opaque and require users to place considerable trust in the entity receiving the data. For vendors, having a centralized database of potentially sensitive data is a privacy-protection headache and a potential liability should a breach of that database occur. Unfortunately, high-profile privacy failures are not uncommon, so many individuals and companies are understandably skeptical and choose not to contribute any information. This is a shame, since the data could be used to improve reliability, achieve stronger security, or support valuable academic research into real-world usage patterns.
We propose, implement and evaluate a framework for non-realtime anonymous data collection, aggregation for analysis, and feedback. Departing from the usual “trusted core” approach, we aim to maintain reporters’ anonymity even if the centralized part of the system is compromised. We design a peer-to-peer mix network and its protocol that are tuned to the properties of background diagnostic traffic. Our system delivers data to a centralized repository while maintaining (i) source anonymity, (ii) privacy in transit, and (iii) the ability to provide analysis feedback back to the source. By removing the core’s ability to identify the source of data and to track users over time, we drastically reduce its attractiveness as a potential attack target and allow vendors to make concrete and verifiable privacy and anonymity claims.
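The sketch below shows the onion-style layering that mix networks of this kind typically use: the sender wraps a report in one encryption layer per relay, and each relay peels exactly one layer, so no single node can link source to content. Symmetric Fernet keys stand in for the real key setup purely for illustration; the framework's actual protocol is not reproduced here.

```python
# Hedged sketch of onion-style layering over a three-hop mix path.
from cryptography.fernet import Fernet

hop_keys = [Fernet.generate_key() for _ in range(3)]   # one key per relay

def wrap(report: bytes, keys) -> bytes:
    # Encrypt innermost-first, so the first relay removes the outermost layer.
    for key in reversed(keys):
        report = Fernet(key).encrypt(report)
    return report

def relay_peel(blob: bytes, key: bytes) -> bytes:
    # Each relay can remove only its own layer, learning neither source nor content.
    return Fernet(key).decrypt(blob)

blob = wrap(b"diagnostic report, no user ID", hop_keys)
for key in hop_keys:          # each relay in turn peels one layer
    blob = relay_peel(blob, key)
print(blob)                   # the repository receives only the payload
```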
|
28 |
Security and privacy aspects of mobile applications for post-surgical care
Meng, Xianrui, 22 January 2016 (has links)
Mobile technologies have the potential to improve patient monitoring, medical decision making and, in general, the efficiency and quality of health delivery. They also pose new security and privacy challenges. The objectives of this work are to (i) explore and define security and privacy requirements using the example of a post-surgical care application, and (ii) develop and test a pilot implementation for post-surgical care. Studies of surgical outcomes indicate that timely treatment of the most common complications, in compliance with established post-surgical regimens, greatly improves success rates. The goal of our pilot application is to enable physicians to optimally synthesize and apply patient-directed best medical practices to prevent post-operative complications in an individualized, patient- and procedure-specific fashion. We propose a framework for a secure protocol that enables doctors to check for the most common complications in their patients during in-hospital post-surgical care. We also implemented our construction and cryptographic protocols as an iPhone application on iOS using existing cryptographic services and libraries.
|
29 |
Adaptable Privacy-preserving Model
Brown, Emily Elizabeth, 01 January 2019 (links)
Current data privacy-preservation models lack the ability to aid data decision makers in processing datasets for publication. The proposed algorithm allows data processors to simply provide a dataset and state their criteria, and it then recommends an xk-anonymity approach. Additionally, the algorithm can be tailored to a preference, and it reports the precision range and maximum data loss associated with the recommended approach. This dissertation report outlines the research's goal, the barriers that were overcome, and the limitations of the work's scope. It highlights the results from each experiment conducted and how they influenced the creation of the final adaptable algorithm. The xk-anonymity model builds upon two foundational privacy models, the k-anonymity and l-diversity models, which the sketch below illustrates. Overall, this study offers many takeaways about data and its power within a dataset.
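For reference, a minimal sketch of the two foundational checks named above, k-anonymity (every quasi-identifier group contains at least k records) and l-diversity (every group contains at least l distinct sensitive values). The dissertation's own xk-anonymity recommendation logic is not shown, and the field names are illustrative.

```python
# Hedged sketch: check k-anonymity and l-diversity over generalized records.
from collections import defaultdict

def satisfies_k_anonymity_l_diversity(records, qi_fields, sensitive_field, k, l):
    groups = defaultdict(list)
    for rec in records:
        key = tuple(rec[f] for f in qi_fields)   # equivalence-class key
        groups[key].append(rec[sensitive_field])
    # Every class must have >= k records and >= l distinct sensitive values.
    return all(len(vals) >= k and len(set(vals)) >= l
               for vals in groups.values())

rows = [
    {"zip": "021**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "021**", "age": "30-39", "diagnosis": "asthma"},
]
print(satisfies_k_anonymity_l_diversity(rows, ["zip", "age"], "diagnosis",
                                        k=2, l=2))  # True
```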
|
30 |
Optimization of the Mainzelliste software for fast privacy-preserving record linkage
Rohde, Florens; Franke, Martin; Sehili, Ziad; Lablans, Martin; Rahm, Erhard, 11 February 2022 (has links)
Background: Data analysis for biomedical research often requires a record linkage step to identify records from multiple data sources referring to the same person. Due to the lack of unique personal identifiers across these sources, record linkage relies on the similarity of personal data such as first and last names or birth dates. However, the exchange of such identifying data with a third party, as is the case in record linkage, is generally subject to strict privacy requirements. This problem is addressed by privacy-preserving record linkage (PPRL) and pseudonymization services. Mainzelliste is an open-source record linkage and pseudonymization service used to carry out PPRL processes in real-world use cases.
Methods: We evaluate the linkage quality and performance of the linkage process using several real and near-real datasets with different properties with respect to the size and error rate of matching records. We conduct a comparison between (plaintext) record linkage and PPRL based on encoded records (Bloom filters). Furthermore, since the Mainzelliste software offers no blocking mechanism, we extend it with phonetic blocking as well as novel blocking schemes based on locality-sensitive hashing (LSH) to improve runtime for both standard and privacy-preserving record linkage.
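For context, the sketch below shows field-level Bloom-filter encoding as commonly used in PPRL: each name is split into bigrams, each bigram is hashed with several keyed hash functions into a set of bit positions, and encodings are compared via the Dice coefficient. The parameters (1000 bits, 10 hash functions, the secret key) are illustrative assumptions, not Mainzelliste's actual configuration.

```python
# Hedged sketch of field-level Bloom-filter encoding for PPRL.
import hashlib
import hmac

def bloom_encode(value: str, key: bytes, m: int = 1000, h: int = 10) -> set:
    # Split the field into character bigrams and hash each one h times.
    bigrams = [value[i:i + 2] for i in range(len(value) - 1)]
    bits = set()
    for gram in bigrams:
        for i in range(h):
            digest = hmac.new(key, f"{i}:{gram}".encode(), hashlib.sha256).digest()
            bits.add(int.from_bytes(digest, "big") % m)   # set one bit position
    return bits

def dice(a: set, b: set) -> float:
    # Dice coefficient over set bit positions approximates name similarity.
    return 2 * len(a & b) / (len(a) + len(b))

key = b"shared-secret"
print(dice(bloom_encode("mayer", key), bloom_encode("meyer", key)))  # similar names
print(dice(bloom_encode("mayer", key), bloom_encode("smith", key)))  # dissimilar names
```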
Results: The Mainzelliste achieves high linkage quality for PPRL using field-level Bloom filters, due to its error-tolerant matching algorithm that can handle variations in names, in particular missing or transposed name compounds. However, due to the absence of blocking, runtimes are unacceptable for real use cases with larger datasets. The newly implemented blocking approaches improve runtimes by orders of magnitude while retaining high linkage quality.
Conclusion: We conduct the first comprehensive evaluation of the record linkage facilities of the Mainzelliste software and extend it with blocking methods to improve its runtime. We observed very high linkage quality for both plaintext and encoded data, even in the presence of errors. The added blocking methods provide order-of-magnitude improvements in runtime performance, thus facilitating use in research projects with large datasets and many participants.
|