1

Privacy-Preserving Multi-Quality Charging in V2G network

He, Miao 05 September 2014
A vehicle-to-grid (V2G) network, which provides electricity charging services to electric vehicles (EVs), is an essential part of the smart grid (SG). It can not only effectively reduce greenhouse gas emissions but also significantly enhance the efficiency of the power grid. Because local electricity resources are limited, the quality of charging service can hardly be guaranteed for every EV in a V2G network. To address this, multi-quality charging is introduced to provide quality-guaranteed service (QGS) to qualified EVs and best-effort service (BES) to the other EVs. To perform multi-quality charging, the EV's attributes must be evaluated to determine which level of charging service it can be offered. However, the EV owner's privacy, such as real identity, lifestyle, location, and sensitive information contained in the attributes, may be violated during the evaluation and authentication. In this thesis, a privacy-preserving multi-quality charging (PMQC) scheme for V2G networks is proposed to evaluate the EV's attributes, authenticate its service eligibility, and generate its bill without revealing the EV's private information. Specifically, by adopting ciphertext-policy attribute-based encryption (CP-ABE), the EV can be evaluated for the proper charging service without disclosing its attributes. By utilizing group signatures, the EV's real identity is kept confidential during authentication and bill generation. By hiding the EV's real identity, the EV owner's lifestyle privacy and location privacy are also preserved. Security analysis demonstrates that PMQC achieves privacy preservation for the EV, fine-grained access control on EVs for QGS, traceability of the EV's real identity, and secure revocation of the EV's service eligibility. Performance evaluation shows that PMQC achieves higher efficiency in authentication and verification than other schemes in terms of computation overhead. Building on PMQC, the EV's computation and storage overhead can be further reduced in the extended privacy-preserving multi-quality charging (ePMQC) scheme.
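The decision at the heart of such a scheme is whether an EV's attributes satisfy the access policy for QGS; CP-ABE enforces this check cryptographically, so the evaluator never sees the attributes themselves. The sketch below is only an illustration of the policy-evaluation logic that such a ciphertext policy might encode: the attribute names, thresholds, and policy are hypothetical, and no cryptography is involved.

```python
# Illustrative sketch (not the thesis's PMQC scheme): how an attribute-based
# access policy decides between quality-guaranteed service (QGS) and
# best-effort service (BES). In CP-ABE this check is enforced cryptographically
# by whether the EV's attribute keys satisfy the policy embedded in the
# ciphertext; here it is shown in the clear for clarity.

QGS_POLICY = {  # hypothetical policy: every clause must hold
    "battery_capacity_kwh": lambda v: v >= 40,
    "membership_level":     lambda v: v in {"gold", "platinum"},
    "payment_verified":     lambda v: v is True,
}

def service_level(attributes: dict) -> str:
    """Return 'QGS' if the attributes satisfy the policy, else 'BES'."""
    satisfied = all(
        name in attributes and check(attributes[name])
        for name, check in QGS_POLICY.items()
    )
    return "QGS" if satisfied else "BES"

if __name__ == "__main__":
    ev = {"battery_capacity_kwh": 60, "membership_level": "gold",
          "payment_verified": True}
    print(service_level(ev))  # -> QGS
```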
2

Towards Privacy Preserving of Forensic DNA Databases

Liu, Sanmin December 2011
Protecting the privacy of individuals is critical in forensic genetics. In kinship/identity testing, DNA profiles in the database that are related to the user's query need to be extracted; however, unrelated profiles must not be revealed to either party. The challenge is that today's DNA databases usually contain millions of profiles, too many to query in a privacy-preserving way with current cryptosystems directly. In this thesis, we propose a scalable system to support privacy-preserving queries over DNA databases. A two-phase strategy is designed. The first phase is a Short Tandem Repeat (STR) index tree for quickly fetching candidate profiles from disk; it groups the loci of DNA profiles by matching probability so as to reduce the I/O cost required to find a particular profile. The second phase is an Elliptic Curve Cryptosystem (ECC) based privacy-preserving matching engine, which matches the candidates against the user's sample. In particular, a privacy-preserving DNA profile matching algorithm is designed that achieves O(n) computing time and communication cost. Experimental results show that our system performs well in terms of query latency, query hit rate, and communication cost. For a database of one billion profiles, it takes 80 seconds to return results to the user.
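As a rough illustration of the first phase, the sketch below indexes each profile under the allele pair at its statistically rarest locus, so a query only touches a small candidate set. The locus names and population frequencies are invented, the comparison is done in the clear, and the thesis's STR index tree and ECC-based matching engine are considerably more sophisticated.

```python
from collections import defaultdict

# Minimal sketch of locus-based candidate indexing (an assumption-laden
# stand-in, not the thesis's implementation). Profiles are dictionaries
# mapping locus name -> allele pair.

ALLELE_FREQ = {  # hypothetical population frequencies per (locus, allele pair)
    ("D3S1358", (15, 16)): 0.08,
    ("TH01",    (6, 9)):   0.03,
    ("FGA",     (21, 22)): 0.05,
}

def rarest_locus(profile: dict):
    """Pick the locus whose allele pair has the lowest known frequency."""
    return min(profile, key=lambda locus: ALLELE_FREQ.get((locus, profile[locus]), 1.0))

def build_index(profiles: dict) -> dict:
    """Group profile IDs under (locus, allele pair) keys for fast candidate lookup."""
    index = defaultdict(list)
    for pid, profile in profiles.items():
        locus = rarest_locus(profile)
        index[(locus, profile[locus])].append(pid)
    return index

def candidates(index: dict, query: dict) -> list:
    locus = rarest_locus(query)
    return index.get((locus, query[locus]), [])

if __name__ == "__main__":
    db = {
        "P1": {"D3S1358": (15, 16), "TH01": (6, 9), "FGA": (21, 22)},
        "P2": {"D3S1358": (15, 16), "TH01": (7, 8), "FGA": (21, 22)},
    }
    idx = build_index(db)
    print(candidates(idx, db["P1"]))  # -> ['P1']; P2 is indexed under a different key
```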
3

Privacy Preserving Billing Protocol for Smart Grid

Artan, William 13 July 2012
The smart grid is an advanced electrical grid equipped with communication capabilities that are used to improve the efficiency, reliability, and sustainability of electricity services. Countries in Europe, North America, and East Asia are transforming their antiquated infrastructure into the smart grid. However, problems arise from the smart grid's security and privacy issues. Since smart meters and the grid operator interact over a communication channel, a hacker could break into the system to steal information or even cut off the electricity service. Moreover, people are protesting and refusing to use smart meters, since frequent meter readings reveal customers' private energy usage information, which could be abused. To cope with the privacy issue, we propose an enhanced version of the Garcia-Jacobs aggregation protocol that protects not only individual customers' energy consumption information but also the consumption information of a neighborhood. Furthermore, we propose a novel privacy-preserving billing protocol based on Priced Oblivious Transfer (POT) that guarantees the grid operator receives the correct amount of money without learning the individual energy consumption of the customers. We also implement the proposed protocols.
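For intuition, the toy below shows one common building block of privacy-preserving metering aggregation: additive masking, where per-meter masks cancel so the operator can recover only the neighborhood total, never an individual reading. It is a simplified stand-in for illustration, not the enhanced Garcia-Jacobs protocol or the POT-based billing protocol proposed in the thesis.

```python
import secrets

# Toy additive-masking aggregation. Each meter adds a random mask chosen so
# that all masks sum to zero modulo a large prime; the operator's sum of
# masked readings therefore equals the true neighborhood total.

MODULUS = 2**61 - 1  # arithmetic is done modulo a large prime

def mask_readings(readings):
    """Mask each reading; the masks cancel when all meters are summed."""
    n = len(readings)
    masks = [secrets.randbelow(MODULUS) for _ in range(n - 1)]
    masks.append((-sum(masks)) % MODULUS)          # last mask cancels the rest
    return [(x + r) % MODULUS for x, r in zip(readings, masks)]

def aggregate(masked):
    """Operator sums masked values; masks cancel, leaving the total."""
    return sum(masked) % MODULUS

if __name__ == "__main__":
    readings = [3, 7, 2, 5]                        # kWh per household
    print(aggregate(mask_readings(readings)))      # -> 17
```

In practice the masks would be established pairwise or via homomorphic encryption so that no single party learns them all; the toy above generates them centrally only to keep the cancellation property visible.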
4

CUDIA: a probabilistic cross-level imputation framework using individual auxiliary information

Park, Yubin 17 February 2012
In healthcare-related studies, individual patient or hospital data are often not publicly available due to privacy restrictions, legal issues, or reporting norms. However, such measures may be provided at a higher, more aggregated level, such as state-level or county-level summaries, or averages over health zones such as Hospital Referral Regions (HRR) or Hospital Service Areas (HSA). Such levels constitute partitions over the underlying individual-level data, which may not match the groupings that would have been obtained by clustering the data on individual-level attributes. Moreover, treating aggregated values as representatives of the individuals can result in the ecological fallacy. How can one run data mining procedures on such data, where different variables are available at different levels of aggregation or granularity? In this thesis, we seek a better utilization of variably aggregated datasets, possibly assembled from different sources. We propose a novel "cross-level" imputation technique that models the generative process of such datasets using a Bayesian directed graphical model. The imputation is based on the underlying data distribution and is shown to be unbiased. The imputed values can be further utilized in subsequent predictive modeling, yielding improved accuracies. Experimental results using a simulated dataset and the Behavioral Risk Factor Surveillance System (BRFSS) dataset illustrate the generality and capabilities of the proposed framework.
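To make the cross-level setting concrete, the sketch below imputes an individual-level outcome that is only published as group averages by fitting a regression at the aggregate level and applying it to individual auxiliary covariates. This is a naive, assumption-laden stand-in on synthetic data with a known linear relationship, not the CUDIA model, which instead uses a Bayesian directed graphical model and comes with an unbiasedness argument.

```python
import numpy as np

# Naive cross-level imputation: the outcome y is observed only as county
# averages, while the covariate x is observed per individual. Fit the
# relationship on aggregates, then impute y for each individual.

rng = np.random.default_rng(0)
counties, per_county = 20, 50

x = rng.normal(size=(counties, per_county))            # individual covariate
y = 2.0 * x + rng.normal(scale=0.5, size=x.shape)      # latent individual outcome

x_bar = x.mean(axis=1)                                  # aggregate covariate
y_bar = y.mean(axis=1)                                  # published aggregate outcome

# Fit y_bar ~ a + b * x_bar on the aggregated data only.
A = np.column_stack([np.ones(counties), x_bar])
a, b = np.linalg.lstsq(A, y_bar, rcond=None)[0]

y_imputed = a + b * x                                   # cross-level imputation
print("imputation RMSE:", float(np.sqrt(((y_imputed - y) ** 2).mean())))
```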
5

Secure Cloud Computing for Solving Large-Scale Linear Systems of Equations

Chen, Xuhui 11 December 2015
Solving large-scale linear systems of equations (LSEs) is one of the most common and fundamental problems in big data, but such problems are often too expensive for resource-limited users to solve themselves. Cloud computing has been proposed as an efficient and cost-effective way of handling such tasks. Nevertheless, one critical concern in cloud computing is data privacy. Many previous works on secure outsourcing of LSEs have high computational complexity and share a common serious problem: a huge number of external-memory I/O operations, which may render those outsourcing schemes impractical. We develop a practical secure outsourcing algorithm for solving large-scale LSEs that has both low computational complexity and low memory I/O complexity and protects clients' privacy well. We implement our algorithm on a real-world cloud server and a laptop and find that the proposed algorithm offers significant time savings for the client (up to 65%) compared to previous algorithms.
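The classic idea behind secure LSE outsourcing is to disguise the system with random invertible transformations before handing it to the cloud, then undo the transformation locally. The numpy sketch below illustrates that general idea only; it is not the thesis's low-I/O algorithm, and a real scheme would use sparse or structured masks so the client's own work stays far cheaper than solving the system itself.

```python
import numpy as np

# Transformation-based outsourcing sketch: the client hides A and b behind
# random invertible matrices M and N, the cloud solves the disguised system,
# and the client recovers the true solution.

rng = np.random.default_rng(42)
n = 500
A = rng.normal(size=(n, n))
b = rng.normal(size=n)

# Client: pick random masks (invertible with probability 1) and disguise.
M = rng.normal(size=(n, n))
N = rng.normal(size=(n, n))
A_masked = M @ A @ N
b_masked = M @ b

# Cloud: solves the disguised system A_masked @ y = b_masked.
y = np.linalg.solve(A_masked, b_masked)

# Client: recovers x = N @ y and checks the residual of the original system.
x = N @ y
print("residual:", float(np.linalg.norm(A @ x - b)))
```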
6

Oblivious Handshakes and Sharing of Secrets of Privacy-Preserving Matching and Authentication Protocols

Duan, Pu May 2011
This research focuses on two of the most important privacy-preserving techniques: privacy-preserving element matching protocols and privacy-preserving credential authentication protocols, where an element represents information generated by users themselves and a credential represents a group membership assigned by an independent central authority (CA). The former is also known as a private set intersection (PSI) protocol and the latter as a secret handshake (SH) protocol. In this dissertation, I present a general framework for designing efficient and secure PSI and SH protocols based on similar message exchange and computing procedures that confirm the "commonality" of the exchanged information while protecting it from the other party when the commonality test fails. I propose using a homomorphic randomization function (HRF) to meet the privacy-preserving requirements: common elements/credentials can be computed efficiently thanks to the homomorphism of the function, and uncommon elements/credentials are difficult to derive because of the randomization of the same function. Based on the general framework, two new PSI protocols with linear computation and communication cost are proposed. The first uses a fully homomorphic randomization function as its cryptographic basis and the second a partially homomorphic randomization function; both achieve element confidentiality and private set intersection. A new SH protocol is also designed based on the framework, which achieves unlinkability with a reusable credential/pseudonym pair and the smallest number of bilinear mapping operations. I also propose interlocking the proposed PSI and SH protocols to design new protocols with new security properties. When a PSI protocol is executed first and the matched elements are associated with the credentials in a subsequent SH protocol, authenticity is guaranteed for the matched elements. When an SH protocol is executed first and the verified credentials are used in a subsequent PSI protocol, detection resistance and impersonation-attack resistance are guaranteed for the matched elements. The proposed PSI and SH protocols are implemented to provide a privacy-preserving inquiry matching service (PPIM) for social networking applications and a privacy-preserving correlation service (PAC) for network security alerts. PPIM allows online social consumers to find partners with matched inquiries and verified group memberships without exposing any information to unmatched parties. PAC allows independent network alert sources to find common alerts without unveiling their local network information to each other.
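For intuition about how randomization over a homomorphic structure lets two parties test commonality, the toy below implements a classic Diffie-Hellman-style PSI: each party blinds hashed elements with a secret exponent, and because exponentiation commutes, doubly blinded values collide exactly when the underlying elements match, while singly blinded values reveal nothing useful. This is an illustrative stand-in with toy parameters far too small for real use, not one of the dissertation's protocols.

```python
import hashlib
import secrets

# Toy DH-style PSI simulated in one process. Party A learns which of its own
# elements also appear in B's set, and nothing about B's other elements.

P = 2**127 - 1  # a Mersenne prime; fine for a toy, far too small in practice

def h(element: str) -> int:
    """Hash an element into the group."""
    return int.from_bytes(hashlib.sha256(element.encode()).digest(), "big") % P

def psi(set_a, set_b):
    a = secrets.randbelow(P - 2) + 2                      # A's secret exponent
    b = secrets.randbelow(P - 2) + 2                      # B's secret exponent
    # A -> B: A's elements blinded once (A remembers which is which).
    a_once = {x: pow(h(x), a, P) for x in set_a}
    # B -> A: A's values blinded a second time, plus B's values blinded once.
    a_twice = {x: pow(v, b, P) for x, v in a_once.items()}
    b_once = [pow(h(y), b, P) for y in set_b]
    # A: finishes blinding B's values and keeps its own elements that collide.
    b_twice = {pow(v, a, P) for v in b_once}
    return {x for x, v in a_twice.items() if v in b_twice}

if __name__ == "__main__":
    print(psi({"alice", "bob", "carol"}, {"bob", "dave", "carol"}))
```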
7

Achieving privacy-preserving distributed statistical computation

Liu, Meng-Chang January 2012
The growth of the Internet has opened up tremendous opportunities for cooperative computations whose results depend on the private data inputs of distributed participating parties. In most cases, such computations are performed by multiple mutually untrusting parties, which has led the research community to study methods for performing computation across the Internet securely and efficiently. This thesis investigates security methods in the search for an optimal solution to privacy-preserving distributed statistical computation problems. For this purpose, the nonparametric sign test algorithm is chosen as a case study to demonstrate our research methodology. Two privacy-preserving protocol suites using data perturbation techniques and cryptographic primitives are designed. The first protocol suite, P22NSTP, is based on five novel data perturbation building blocks: the random probability density function generation protocol (RpdfGP), the data obscuring protocol (DOP), the secure two-party comparison protocol (STCP), the data extraction protocol (DEP), and the permutation reverse protocol (PRP). This suite enables two parties to perform the sign test computation efficiently and securely without the use of a third party. The second protocol suite, P22NSTC, uses an additively homomorphic encryption scheme and two novel building blocks: the data separation protocol (DSP) and the data randomization protocol (DRP). With some assistance from an on-line STTP, it provides an alternative solution for two parties to achieve a secure, privacy-preserving nonparametric sign test computation. Both protocol suites have been implemented in MATLAB, and their implementations are evaluated and compared against the sign test computation algorithm on an ideal trusted-third-party model (TTP-NST) in terms of security, computation and communication overheads, and protocol execution times. By managing the amount of noise data added, P22NSTP can achieve specific levels of privacy protection to fit particular computation scenarios. Alternatively, P22NSTC provides a more secure solution than P22NSTP by employing an on-line STTP; its level of privacy protection relies on the additively homomorphic encryption scheme, DSP, and DRP. A four-phase privacy-preserving transformation methodology is also demonstrated, consisting of data privacy definition, statistical algorithm decomposition, solution design, and solution implementation.
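As background for what the two parties jointly compute, the plain (non-private) nonparametric sign test is sketched below: count the positive paired differences and compare that count against a Binomial(n, 1/2) reference. The P22NSTP and P22NSTC suites compute this same statistic, but over perturbed or encrypted inputs; the example data here are invented.

```python
from math import comb

# Plain two-sided exact sign test on paired samples x[i], y[i].
def sign_test(x, y):
    diffs = [a - b for a, b in zip(x, y) if a != b]    # ties are discarded
    n = len(diffs)
    k = sum(d > 0 for d in diffs)                      # number of positive signs
    tail = min(k, n - k)
    # Exact binomial tail probability under H0: P(sign is +) = 1/2.
    p = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return k, min(1.0, 2 * p)                          # statistic and p-value

if __name__ == "__main__":
    x = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.2, 6.1]
    y = [4.9, 4.9, 5.4, 5.0, 5.6, 4.5, 5.0, 5.8]
    print(sign_test(x, y))   # 7 positive differences out of 8
```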
8

Implementing Differential Privacy for Privacy Preserving Trajectory Data Publication in Large-Scale Wireless Networks

Stroud, Caleb Zachary 14 August 2018
Wireless networks collect vast amounts of log data concerning usage of the network. These data inform operational needs related to performance, maintenance, and so on, but they are also useful to outside researchers analyzing network operation and user trends. Releasing such information to outside researchers, however, poses a threat to user privacy, so the competing needs for utility and privacy must be addressed. This thesis studies differential privacy as a way to release high-utility data to researchers while maintaining user privacy. The focus is specifically on physical user trajectories in authentication manager log data, since this is a rich type of data that is useful for trend analysis. Authentication manager log data are produced when devices connect to physical access points (APs), and trajectories are sequences of these spatiotemporal connections from one AP to another for the same device. We pursue this goal with a variable-length n-gram model that creates a synthetic database which researchers can easily ingest. We find that the chosen algorithm has shortcomings when applied to our data, but differential privacy itself can still be used to release sanitized datasets while maintaining utility if the data have low sparsity. / Master of Science / Wireless internet networks store historical logs of user device interactions. For example, when a phone or other wireless device connects, the Internet Service Provider (ISP) stores data about the device, username, time, and location of the connection. A database of this type of data can help researchers analyze user trends in the network, but the data contain personally identifiable information about the users. We propose and analyze an algorithm that can release these data with high utility for researchers yet maintain user privacy, based on a verifiable approach to privacy called differential privacy. This algorithm is found to provide utility and privacy protection for datasets with many users relative to the size of the network.
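The core differentially private step in n-gram based publication is releasing noisy n-gram counts. The sketch below is a simplification: it uses a fixed n rather than the variable-length model, and a hypothetical per-user contribution bound, adding Laplace noise calibrated to that bound and the privacy budget.

```python
import numpy as np
from collections import Counter

def ngrams(trajectory, n):
    """All length-n subsequences of consecutive AP connections."""
    return [tuple(trajectory[i:i + n]) for i in range(len(trajectory) - n + 1)]

def noisy_ngram_counts(trajectories, n=2, epsilon=1.0, max_grams_per_user=5):
    counts = Counter()
    for traj in trajectories:
        # Bound each user's contribution so the L1 sensitivity is known.
        counts.update(ngrams(traj, n)[:max_grams_per_user])
    scale = max_grams_per_user / epsilon   # Laplace scale = sensitivity / epsilon
    rng = np.random.default_rng()
    return {g: c + rng.laplace(0.0, scale) for g, c in counts.items()}

if __name__ == "__main__":
    trajs = [["AP1", "AP2", "AP3"], ["AP1", "AP2", "AP4"], ["AP2", "AP3", "AP1"]]
    for gram, count in noisy_ngram_counts(trajs).items():
        print(gram, round(count, 2))
```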
9

Privacy Preserving Service Discovery and Ranking For Multiple User QoS Requirements in Service-Based Software Systems

January 2011
Service-based software (SBS) systems are software systems consisting of services based on the service-oriented architecture (SOA). Each service in an SBS system provides partial functionality and collaborates with other services in workflows to provide the functionality required by the system. These services may be developed and/or owned by different entities and physically distributed across the Internet. Compared with traditional software components, which are usually designed specifically for their target systems and tightly bound to them, service interfaces and communication protocols are standardized, which allows SBS systems to support late binding and to provide better interoperability, greater flexibility in dynamic business logic, and higher fault tolerance. The development process of SBS systems can be divided into three major phases: 1) SBS specification, 2) service discovery and matching, and 3) service composition and workflow execution. This dissertation focuses on the second phase and presents a privacy-preserving service discovery and ranking approach for multiple user QoS requirements. The approach helps service providers register services, and service users search for services, through public but untrusted service directories while protecting their privacy against the directories: the directories can match registered services with service requests but learn nothing about either. The approach also enforces access control on services during the matching process, which prevents unauthorized users from discovering services. After the directories match a set of services that satisfy the user's functionality requirements, the approach further considers the user's QoS requirements in two steps. First, it optimizes the services' QoS by making tradeoffs among various QoS aspects according to the user's QoS requirements and preferences. Second, it ranks the services by how well they satisfy the user's QoS requirements, helping the user select the most suitable service for their SBS system. / Ph.D., Computer Science, 2011
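To make the ranking step concrete, the sketch below scores candidate services with a normalized weighted sum over QoS aspects; the service names, aspect values, and weights are invented, and the dissertation performs this ranking while keeping requirements and offers private, which this plain-text sketch does not attempt.

```python
# Hypothetical candidates that already satisfy the functional match.
SERVICES = {
    "svcA": {"latency_ms": 120, "availability": 0.999, "cost": 0.05},
    "svcB": {"latency_ms": 60,  "availability": 0.990, "cost": 0.09},
    "svcC": {"latency_ms": 200, "availability": 0.995, "cost": 0.02},
}
WEIGHTS = {"latency_ms": 0.5, "availability": 0.3, "cost": 0.2}  # user preferences
LOWER_IS_BETTER = {"latency_ms", "cost"}

def normalize(aspect, value, column):
    """Min-max normalize so that higher scores are always better."""
    lo, hi = min(column), max(column)
    score = 0.5 if hi == lo else (value - lo) / (hi - lo)
    return 1.0 - score if aspect in LOWER_IS_BETTER else score

def rank(services, weights):
    columns = {a: [qos[a] for qos in services.values()] for a in weights}
    scored = {
        name: sum(w * normalize(a, qos[a], columns[a]) for a, w in weights.items())
        for name, qos in services.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    print(rank(SERVICES, WEIGHTS))
```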
10

Uma abordagem distribuída para preservação de privacidade na publicação de dados de trajetória / A distributed approach for privacy preservation in the publication of trajectory data

Brito, Felipe Timbó January 2016
BRITO, Felipe Timbó. Uma abordagem distribuída para preservação de privacidade na publicação de dados de trajetória. Master's dissertation (Computer Science), 66 pp., Universidade Federal do Ceará, Fortaleza, 2016. / Advances in mobile computing, together with the pervasiveness of location-based services, have generated a great amount of trajectory data. These data can be used for various analysis purposes, such as traffic flow analysis, infrastructure planning, and understanding human behavior. However, publishing this amount of trajectory data may lead to serious risks of privacy breaches. Quasi-identifiers are trajectory points that can be linked to external information and used to identify the individuals associated with trajectories. By analyzing quasi-identifiers, a malicious user may therefore be able to trace anonymized trajectories back to individuals, for example with the aid of location-aware social networking applications. Most existing trajectory data anonymization approaches were proposed for centralized computing environments, so they usually perform poorly when anonymizing large trajectory data sets. In this work we propose a distributed and efficient strategy that adopts the k^m-anonymity privacy model and uses the scalable MapReduce paradigm, which allows quasi-identifiers to be found in larger amounts of data. We also present a technique that minimizes information loss by selecting key locations from the quasi-identifiers to be suppressed. Experimental evaluation results demonstrate that our proposed approach to trajectory data anonymization is more scalable and efficient than existing works in the literature.
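As a rough, single-machine imitation of the MapReduce step, the sketch below emits every ordered combination of at most m points per trajectory in a map phase, counts them in a reduce phase, and reports combinations supported by fewer than k trajectories as quasi-identifiers that violate k^m-anonymity. The location names and the values of k and m are invented, and the actual distributed implementation and the key-location suppression technique are beyond this sketch.

```python
from collections import defaultdict
from itertools import combinations

def map_phase(trajectory, m):
    """Emit each distinct ordered subsequence of at most m points once per trajectory."""
    emitted = set()
    for size in range(1, m + 1):
        emitted.update(combinations(trajectory, size))
    for combo in emitted:
        yield combo, 1

def reduce_phase(pairs):
    """Sum the 1s emitted for each subsequence across all trajectories."""
    counts = defaultdict(int)
    for combo, one in pairs:
        counts[combo] += one
    return counts

def violating_quasi_identifiers(trajectories, k=2, m=2):
    pairs = (pair for traj in trajectories for pair in map_phase(traj, m))
    return {combo for combo, count in reduce_phase(pairs).items() if count < k}

if __name__ == "__main__":
    trajs = [("mall", "park", "home"), ("mall", "park", "work"), ("gym", "park")]
    print(violating_quasi_identifiers(trajs, k=2, m=2))
```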
