21

Conditional Privacy-Preserving Authentication Protocols for Vehicular Ad Hoc Networks

Li, Jiliang 17 May 2019 (has links)
No description available.
22

Extending Complex Event Processing for Advanced Applications

Wang, Di 30 April 2013 (has links)
Recently, numerous emerging applications, ranging from online financial transactions and RFID-based supply chain management to traffic monitoring and real-time object monitoring, generate high-volume event streams. To meet the need of processing event data streams in real time, Complex Event Processing (CEP) technology has been developed with a focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications which require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services. First, existing CEP technology is limited in its capability of reacting to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries in high-volume stream execution. We hence introduce a novel stream-oriented transactional model along with a family of stream transaction scheduling algorithms that ensure the correctness of concurrent stream execution. We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream transaction scheduling algorithms extensively using real-world workloads. Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns, while ensuring that nonsensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation. We then design a suite of real-time solutions that eliminate private pattern matches while maximizing the overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet still efficient enough to offer near-real-time system responsiveness. Third, we observe that in many real-world object monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event.
Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we elaborate how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events. More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
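As a rough illustration of the event-type-level suppression problem described above, the sketch below (not taken from the dissertation; the function name, the exhaustive search, and the toy patterns are illustrative assumptions) picks a set of event types to drop so that every sensitive pattern is broken while the total utility of the surviving nonsensitive patterns is maximized:

```python
from itertools import combinations

def choose_suppressed_types(private_patterns, public_patterns, utilities):
    """Pick a set of event types to suppress so that every private pattern loses
    at least one of its constituent types, while the total utility of the public
    patterns that stay fully observable is maximized. Exhaustive search; only
    suitable for small numbers of event types."""
    # Only types that occur in a private pattern can help break one.
    types = sorted({t for p in private_patterns for t in p})
    best_set, best_utility = None, -1.0
    for r in range(len(types) + 1):
        for cand in combinations(types, r):
            cand = set(cand)
            # Every private pattern must be broken by the suppression set.
            if not all(cand & set(p) for p in private_patterns):
                continue
            # A public pattern keeps its utility only if none of its types is suppressed.
            utility = sum(utilities[p] for p in public_patterns if not cand & set(p))
            if utility > best_utility:
                best_set, best_utility = cand, utility
    return best_set, best_utility

private = [("Admit", "HIVTest"), ("Admit", "Psych")]   # sensitive sequences
public = [("Admit", "Discharge"), ("Admit", "XRay")]   # useful sequences
util = {public[0]: 5.0, public[1]: 2.0}
print(choose_suppressed_types(private, public, util))
# ({'HIVTest', 'Psych'}, 7.0): suppressing only the sensitive types keeps all utility
```

The dissertation's event-instance-level solution refines this idea by using runtime event distributions and pattern-match cardinality estimates rather than suppressing whole event types.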
23

A Study on Improving Efficiency of Privacy-Preserving Utility Mining

Wong, Jia-Wei 11 September 2012 (has links)
Utility mining algorithms have recently been proposed to discover high utility itemsets from a quantitative database. Factors such as profit and price are considered when measuring the utility values of purchased items, revealing more useful knowledge to managers. Nearly all existing algorithms extract high utility itemsets in a batch manner. In real-world applications, however, transactions may be inserted, deleted or modified in a database. The batch mining procedure then requires considerable computational time to rescan the whole updated database in order to maintain up-to-date knowledge. In the first part of this thesis, two algorithms, one for data insertion and one for data deletion, are proposed for efficiently updating the discovered high utility itemsets based on pre-large concepts. The proposed algorithms first partition itemsets into three parts with nine cases according to whether they have large (high), pre-large or small transaction-weighted utilization in the original database. Each part is then handled by its own procedure to maintain and update the discovered high utility itemsets. Based on the pre-large concepts, the original database only needs to be rescanned for far fewer itemsets in the maintenance process of high utility itemsets. In addition, the risk of privacy threats usually exists in the process of data collection and data dissemination. Sensitive or personal information must be kept private before data are shared or published. Privacy-preserving utility mining (PPUM) has thus become an important issue in recent years. In the second part of this thesis, two evolutionary privacy-preserving utility mining algorithms are proposed to hide sensitive high utility itemsets during data sanitization, one by inserting dummy transactions and one by deleting transactions. The two evolutionary privacy-preserving utility mining algorithms find appropriate transactions for insertion and deletion in the data-sanitization process. They adopt a flexible evaluation function with three factors, and different weights are assigned to the three factors depending on users' preferences. The maintenance algorithms proposed in the first part of this thesis are also used in the GA-based approach to reduce the cost of rescanning databases, thus speeding up the evaluation process of chromosomes. Experiments are also conducted to evaluate the performance of the proposed algorithms.
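As a hedged sketch of the pre-large concept mentioned above (the data layout, threshold handling and function names are illustrative assumptions, not the thesis's actual algorithms), an itemset's transaction-weighted utilization can be compared against an upper and a lower threshold to decide whether it is large, pre-large, or small:

```python
def transaction_weighted_utilization(db, itemset):
    """Sum of transaction utilities over transactions containing the itemset.
    db: list of transactions; each transaction maps item -> utility (price * quantity)."""
    return sum(sum(tx.values()) for tx in db if itemset <= set(tx))

def classify(db, itemset, upper_ratio, lower_ratio):
    """Pre-large partitioning: 'large' if the TWU meets the upper threshold,
    'pre-large' if it falls between the lower and upper thresholds, else 'small'."""
    total = sum(sum(tx.values()) for tx in db)
    twu = transaction_weighted_utilization(db, itemset)
    if twu >= upper_ratio * total:
        return "large"
    if twu >= lower_ratio * total:
        return "pre-large"
    return "small"

db = [{"A": 5, "B": 10}, {"A": 3, "C": 12}, {"B": 4, "C": 1}]
print(classify(db, {"A"}, upper_ratio=0.6, lower_ratio=0.4))  # 'large'
```

Itemsets that stay pre-large after an insertion or deletion are exactly the ones whose status can change without a full rescan, which is why the maintenance algorithms only need to revisit the original database for a small number of borderline itemsets.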
24

Uma abordagem distribuída para preservação de privacidade na publicação de dados de trajetória / A distributed approach for privacy preservation in the publication of trajectory data

Felipe Timbó Brito 17 December 2015 (has links)
Advancements in mobile computing techniques along with the pervasiveness of location-based services have generated a great amount of trajectory data. These data can be used for various data analysis purposes such as traffic flow analysis, infrastructure planning, understanding of human behavior, etc. However, publishing this amount of trajectory data may lead to serious risks of privacy breach. Quasi-identifiers are trajectory points that can be linked to external information and be used to identify individuals associated with trajectories. Therefore, by analyzing quasi-identifiers, a malicious user may be able to trace anonymous trajectories back to individuals with the aid of location-aware social networking applications, for example. Most existing trajectory data anonymization approaches were proposed for centralized computing environments, so they usually present poor performance when anonymizing large trajectory data sets. In this work we propose a distributed and efficient strategy that adopts the $k^m$-anonymity privacy model and uses the scalable MapReduce paradigm, which allows finding quasi-identifiers in large amounts of data. We also present a technique to minimize the loss of information by selecting key locations from the quasi-identifiers to be suppressed. Experimental evaluation results demonstrate that our proposed approach for trajectory data anonymization is more scalable and efficient than existing works in the literature.
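The following is a minimal single-machine sketch of how quasi-identifiers might be located under a $k^m$-anonymity model: every combination of at most m locations that appears in fewer than k trajectories is flagged. Function names are illustrative, and a real deployment would distribute the map and reduce steps with MapReduce rather than a local Counter:

```python
from collections import Counter
from itertools import combinations

def map_phase(trajectory, m):
    """Emit every combination of up to m distinct locations visited by one trajectory."""
    points = sorted(set(trajectory))
    for size in range(1, m + 1):
        for combo in combinations(points, size):
            yield combo

def find_quasi_identifiers(trajectories, k, m):
    """k^m-anonymity check: any combination of at most m locations shared by
    fewer than k trajectories is a quasi-identifier that must be handled."""
    counts = Counter()
    for traj in trajectories:                 # "map" + "shuffle"
        counts.update(map_phase(traj, m))
    return [combo for combo, c in counts.items() if c < k]   # "reduce"

trajs = [["mall", "gym", "home"], ["mall", "home"], ["gym", "home"]]
print(find_quasi_identifiers(trajs, k=2, m=2))
# [('gym', 'mall')]: only one trajectory contains both locations, so the pair can re-identify it
```

The information-loss technique described above would then choose which key locations to suppress from these flagged combinations, rather than removing whole trajectories.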
25

Privacy Preserving Implementation in the E-health System

Chen, Zongzhe January 2013 (has links)
E-health systems are widely used in today's world, and have a still brighter future with the rapid development of smart phones. A few years ago, e-health systems could only run on computers, but recently people use them as phone applications, so that they can get information at any time and anywhere. In addition, some smart phones can already measure heart rate and blood pressure, for example through "Instant Heart Rate" and "Blood Pressure Monitor". By using these kinds of applications, users can easily measure their health data and store them in their mobile phones. However, the problem of privacy has been attracting people's attention. After uploading their data to the database, users have the right to protect their privacy. For instance, even if the doctor has the authority to obtain the health record, the user's name can be hidden, so that the doctor does not know who the owner of the data is. This problem also involves anonymization, pseudonymity, unlinkability, unobservability and many other aspects. In this thesis work, an Android application is proposed to solve this problem. Users can set their own rules, and all data requests are handled by consulting these rules. In addition, a module in the server is developed to carry out the whole privacy-preserving process, and the users' data are stored in the database. A standard for users to set rules is defined, which is both dynamic and flexible. The application also realizes rule-checking functions to determine whether users have set a valid rule. Privacy rules can be created, deleted, or uploaded. In addition, users can update their health record and upload it to the database. The server calls different protocols to deal with different requests, and the requested data are returned according to the users' own privacy rules.
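A minimal sketch of the rule idea described above, assuming a simple (requester role, data field) to action format; the thesis's actual rule schema, validation logic, and server protocols are more elaborate:

```python
# Illustrative privacy rules set by the data owner; the rule format is an assumption.
RULES = {
    ("doctor", "heart_rate"):     "allow",
    ("doctor", "name"):           "hide",       # doctor sees the record but not the owner's name
    ("researcher", "heart_rate"): "anonymize",
    ("researcher", "name"):       "deny",
}

def is_valid_rule(role, field, action):
    """Basic rule checking before a rule is accepted and uploaded."""
    return action in {"allow", "hide", "anonymize", "deny"} and bool(role) and bool(field)

def answer_request(record, role, field):
    """Answer a data request by consulting the owner's rules; unspecified requests are denied."""
    action = RULES.get((role, field), "deny")
    if action == "allow":
        return record[field]
    if action == "anonymize":
        return {"value": record[field], "owner": None}   # value without linking it to the owner
    return None                                          # "hide" and "deny" reveal nothing here

record = {"name": "Alice", "heart_rate": 72}
assert is_valid_rule("doctor", "heart_rate", "allow")
print(answer_request(record, "doctor", "heart_rate"))    # 72
print(answer_request(record, "researcher", "name"))      # None
```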
26

Towards a Privacy Preserving Framework for Publishing Longitudinal Data

Sehatkar, Morvarid January 2014 (has links)
Recent advances in information technology have enabled public organizations and corporations to collect and store huge amounts of individuals' data in data repositories. Such data are powerful sources of information about an individual's life, such as interests, activities, and finances. Corporations can employ data mining and knowledge discovery techniques to extract useful knowledge and interesting patterns from large repositories of individuals' data. The extracted knowledge can be exploited to improve strategic decision making, enhance business performance, and improve services. However, person-specific data often contain sensitive information about individuals, and publishing such data poses potential privacy risks. To deal with these privacy issues, data must be anonymized so that no sensitive information about individuals can be disclosed from the published data, while distortion is minimized to ensure the usefulness of the data in practice. In this thesis, we address privacy concerns in publishing longitudinal data. A data set is longitudinal if it contains information about the same observations or events for individuals collected at several points in time. For instance, the data set of multiple visits of patients to a hospital over a period of time is longitudinal. Due to temporal correlations among the events of each record, the potential background knowledge of adversaries about an individual has specific characteristics in the context of longitudinal data. None of the previous anonymization techniques can effectively protect longitudinal data against an adversary with such knowledge. In this thesis we identify the potential privacy threats on longitudinal data and propose a novel framework of anonymization algorithms that protects individuals' privacy against both identity disclosure and attribute disclosure, and preserves data utility. In particular, we propose two privacy models, (K,C)^P-privacy and (K,C)-privacy, and for each of these models we propose efficient algorithms for anonymizing longitudinal data. An extensive experimental study demonstrates that our proposed framework can effectively and efficiently anonymize longitudinal data.
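As a hedged illustration only (the thesis's (K,C)-privacy and (K,C)^P-privacy definitions model adversary background knowledge over longitudinal visits in more detail than this), a simplified check in this spirit could group records by their quasi-identifier events and bound both group size and attribute-disclosure confidence:

```python
from collections import Counter, defaultdict

def satisfies_kc(records, K, C):
    """Illustrative (K,C)-style check: group records by their quasi-identifier
    event sequence; each group must contain at least K individuals (identity
    disclosure) and no sensitive value may dominate a group with relative
    frequency above C (attribute disclosure)."""
    groups = defaultdict(list)
    for qi_events, sensitive in records:
        groups[tuple(qi_events)].append(sensitive)
    for values in groups.values():
        if len(values) < K:
            return False
        top = Counter(values).most_common(1)[0][1]
        if top / len(values) > C:
            return False
    return True

# Each record: (sequence of visit events, sensitive diagnosis)
data = [(("visit:ER", "visit:Cardio"), "flu"),
        (("visit:ER", "visit:Cardio"), "hiv"),
        (("visit:ER",), "flu"),
        (("visit:ER",), "flu")]
print(satisfies_kc(data, K=2, C=0.6))   # False: the second group is 100% 'flu'
```

An anonymization algorithm would then generalize or suppress visit events until a check of this kind passes while keeping distortion low.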
27

A Framework for anonymous background data delivery and feedback

Timchenko, Maxim 28 October 2015 (has links)
The industry's current methods of collecting background data that reflect diagnostic and usage information are often opaque and require users to place a lot of trust in the entity receiving the data. For vendors, having a centralized database of potentially sensitive data is a privacy protection headache and a potential liability should a breach of that database occur. Unfortunately, high-profile privacy failures are not uncommon, so many individuals and companies are understandably skeptical and choose not to contribute any information. This is unfortunate, since the data could be used to improve reliability, strengthen security, or support valuable academic research into real-world usage patterns. We propose, implement and evaluate a framework for non-realtime anonymous data collection, aggregation for analysis, and feedback. Departing from the usual “trusted core” approach, we aim to maintain reporters’ anonymity even if the centralized part of the system is compromised. We design a peer-to-peer mix network and its protocol that are tuned to the properties of background diagnostic traffic. Our system delivers data to a centralized repository while maintaining (i) source anonymity, (ii) privacy in transit, and (iii) the ability to provide analysis feedback back to the source. By removing the core’s ability to identify the source of data and to track users over time, we drastically reduce its attractiveness as a potential attack target and allow vendors to make concrete and verifiable privacy and anonymity claims.
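A toy sketch of the layered encryption a mix path relies on, using symmetric Fernet keys from the `cryptography` package as a stand-in for the hybrid public-key encryption, batching and padding a real mix network (and this framework) would use:

```python
from cryptography.fernet import Fernet  # symmetric crypto stands in for real hybrid encryption

def wrap(report: bytes, hop_keys):
    """Onion-wrap a diagnostic report: encrypt for the last hop first, so each mix
    node can peel off only its own layer and never learns the original source."""
    payload = report
    for key in reversed(hop_keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def relay(payload: bytes, hop_keys):
    """Simulate the path: every node strips one layer; only the repository sees the report."""
    for key in hop_keys:
        payload = Fernet(key).decrypt(payload)
    return payload

keys = [Fernet.generate_key() for _ in range(3)]           # three mix nodes on the path
onion = wrap(b'{"crash_count": 2, "uptime_h": 311}', keys)
print(relay(onion, keys))                                   # original report, unlinkable to its source
```

Feedback to the source would travel the reverse direction of such a path, which is what lets the repository answer reporters without ever learning who they are.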
28

Security and privacy aspects of mobile applications for post-surgical care

Meng, Xianrui 22 January 2016 (has links)
Mobile technologies have the potential to improve patient monitoring, medical decision making and, in general, the efficiency and quality of health delivery. They also pose new security and privacy challenges. The objectives of this work are to (i) explore and define security and privacy requirements using a post-surgical care application as an example, and (ii) develop and test a pilot implementation for post-surgical care. Studies of surgical outcomes indicate that timely treatment of the most common complications, in compliance with established post-surgical regimens, greatly improves success rates. The goal of our pilot application is to enable physicians to optimally synthesize and apply patient-directed best medical practices to prevent post-operative complications in an individualized, patient/procedure-specific fashion. We propose a framework for a secure protocol that enables doctors to check the most common complications for their patients during in-hospital post-surgical care. We also implemented our construction and cryptographic protocols as an iPhone application on iOS using existing cryptographic services and libraries.
29

Adaptable Privacy-preserving Model

Brown, Emily Elizabeth 01 January 2019 (has links)
Current data privacy-preservation models lack the ability to aid data decision makers in processing datasets for publication. The proposed algorithm allows data processors to simply provide a dataset and state their criteria, and it recommends an xk-anonymity approach. Additionally, the algorithm can be tailored to a preference and reports the precision range and maximum data loss associated with the recommended approach. This dissertation report outlines the research's goal, the barriers that were overcome, and the limitations of the work's scope. It highlights the results from each experiment conducted and how they influenced the creation of the final adaptable algorithm. The xk-anonymity model builds upon two foundational privacy models, the k-anonymity and l-diversity models. Overall, this study offers many takeaways about data and its power within a dataset.
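As an illustrative sketch only (plain k-anonymity suppression stands in here for the xk-anonymity model, and the function names and loss measure are invented), a recommender in this spirit can score candidate parameters by the data loss they cause and return the strongest setting that still meets the user's stated criterion:

```python
from collections import Counter

def suppression_loss(records, qi_cols, k):
    """Fraction of records that must be suppressed so that every remaining
    quasi-identifier combination appears at least k times."""
    counts = Counter(tuple(r[c] for c in qi_cols) for r in records)
    suppressed = sum(c for c in counts.values() if c < k)
    return suppressed / len(records)

def recommend_k(records, qi_cols, candidate_ks, max_loss):
    """Return the largest k whose data loss stays within the user's criterion."""
    best = None
    for k in sorted(candidate_ks):
        loss = suppression_loss(records, qi_cols, k)
        if loss <= max_loss:
            best = (k, loss)
    return best

data = [{"zip": "021", "age": "30s"}, {"zip": "021", "age": "30s"},
        {"zip": "021", "age": "40s"}, {"zip": "022", "age": "30s"}]
print(recommend_k(data, ["zip", "age"], candidate_ks=[2, 3], max_loss=0.5))
# (2, 0.5): k=2 suppresses half the records; k=3 would exceed the allowed loss
```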
30

Cooperative planning in multi-agent systems

Torreño Lerma, Alejandro 14 June 2016 (has links)
Thesis by compendium / Automated planning is a centralized process in which a single planning entity, or agent, synthesizes a course of action, or plan, that satisfies a desired set of goals from an initial situation. A Multi-Agent System (MAS) is a distributed system where a group of autonomous agents pursue their own goals in a reactive, proactive and social way. Multi-Agent Planning (MAP) is a novel research field that emerges as the integration of automated planning in MAS. Agents are endowed with planning capabilities and their mission is to find a course of action that attains the goals of the MAP task. MAP generalizes the problem of automated planning in domains where several agents plan and act together by combining their knowledge, information and capabilities. In cooperative MAP, agents are assumed to be collaborative and work together towards the joint construction of a competent plan that solves a set of common goals. There exist different methods to address this objective, which vary according to the typology and coordination needs of the MAP task to solve; that is, to what extent agents are able to make their own local plans without affecting the activities of the other agents. The present PhD thesis focuses on the design, development and experimental evaluation of a general-purpose and domain-independent resolution framework that solves cooperative MAP tasks of different typology and complexity. More precisely, our model performs a multi-agent multi-heuristic search over a plan space. Agents make use of an embedded search engine based on forward-chaining Partial Order Planning to successively build refinement plans starting from an initial empty plan while they jointly explore a multi-agent search tree. All the reasoning processes, algorithms and coordination protocols are fully distributed among the planning agents and guarantee the preservation of the agents' private information. The multi-agent search is guided through the alternation of two state-based heuristic functions. These heuristic estimators use the global information on the MAP task instead of the local projections of the task of each agent. The experimental evaluation shows the effectiveness of our multi-heuristic search scheme, obtaining significant results in a wide variety of cooperative MAP tasks adapted from the benchmarks of the International Planning Competition. / Torreño Lerma, A. (2016). Cooperative planning in multi-agent systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/65815 / Extraordinary doctoral thesis award / Compendium
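A minimal sketch of alternating two heuristic estimators during search, simplified to a single-agent state space rather than the distributed plan-space search the thesis describes; the toy problem, heuristics and function names are illustrative assumptions:

```python
import heapq

def alternating_search(start, is_goal, successors, heuristics):
    """Best-first search that alternates between two heuristic estimators on
    successive expansions, in the spirit of the multi-heuristic scheme above
    (one open list per heuristic; newly generated nodes enter every list)."""
    open_lists = [[(h(start), 0, start)] for h in heuristics]
    counter, seen = 1, {start}
    while any(open_lists):
        for i in range(len(heuristics)):            # round-robin over the heuristics
            if not open_lists[i]:
                continue
            _, _, node = heapq.heappop(open_lists[i])
            if is_goal(node):
                return node
            for child in successors(node):
                if child not in seen:
                    seen.add(child)
                    for j, hj in enumerate(heuristics):
                        heapq.heappush(open_lists[j], (hj(child), counter, child))
                        counter += 1
    return None

# Toy example: reach 10 from 0 with +1 / +3 steps, guided by two different estimates.
goal = 10
h1 = lambda n: abs(goal - n)                 # distance-style estimate
h2 = lambda n: (abs(goal - n) + 2) // 3      # step-count-style estimate (ceil of distance / 3)
print(alternating_search(0, lambda n: n == goal,
                         lambda n: [n + 1, n + 3], [h1, h2]))   # 10
```

In the thesis the nodes are refinement plans built by forward-chaining partial-order planning and the expansion is coordinated among several agents, but the alternation idea is the same: each round consults a different global estimator so the search is not misled by the weaknesses of either one alone.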
