51

Der Schutz der Privatsphäre bei der Anfragebearbeitung in Datenbanksystemen / Protecting privacy during query processing in database systems

Dölle, Lukas 13 June 2016 (has links)
In den letzten Jahren wurden viele Methoden entwickelt, um den Schutz der Privatsphäre bei der Veröffentlichung von Daten zu gewährleisten. Die meisten Verfahren anonymisieren eine gesamte Datentabelle, sodass sensible Werte einzelnen Individuen nicht mehr eindeutig zugeordnet werden können. Deren Privatsphäre gilt als ausreichend geschützt, wenn eine Menge von mindestens k sensiblen Werten existiert, aus der potentielle Angreifer den tatsächlichen Wert nicht herausfinden können. Ausgangspunkt für die vorliegende Arbeit ist eine Sequenz von Anfragen auf personenbezogene Daten, die durch ein Datenbankmanagementsystem mit der Rückgabe einer Menge von Tupeln beantwortet werden. Das Ziel besteht darin herauszufinden, ob Angreifer durch die Kenntnis aller Ergebnisse in der Lage sind, Individuen eindeutig ihre sensiblen Werte zuzuordnen, selbst wenn alle Ergebnismengen anonymisiert sind. Bisher sind Verfahren nur für aggregierte Anfragen wie Summen- oder Durchschnittsbildung bekannt. Daher werden in dieser Arbeit Ansätze entwickelt, die den Schutz auch für beliebige Anfragen gewährleisten. Es wird gezeigt, dass die Lösungsansätze auf Matchingprobleme in speziellen Graphen zurückgeführt werden können. Allerdings ist das Bestimmen größter Matchings in diesen Graphen NP-vollständig. Aus diesem Grund werden Approximationsalgorithmen vorgestellt, die in Polynomialzeit eine Teilmenge aller Matchings konstruieren, ohne die Privatsphäre zu kompromittieren. / Over the last ten years many techniques for privacy-preserving data publishing have been proposed. Most of them anonymize a complete data table such that sensitive values cannot clearly be assigned to individuals. Their privacy is considered to be adequately protected, if an adversary cannot discover the actual value from a given set of at least k values. For this thesis we assume that users interact with a data base by issuing a sequence of queries against one table. 
The system returns a sequence of results that contain sensitive values. The goal of this thesis is to check whether adversaries are able to uniquely link sensitive values to individuals despite anonymized result sets. So far, algorithms exist only to prevent deanonymization for aggregate queries. Our novel approach prevents deanonymization for arbitrary queries. We show that the problem can be reduced to matching problems in special graphs. However, finding maximum matchings in these graphs is NP-complete. Therefore, we develop several approximation algorithms that compute specific matchings in polynomial time while still maintaining privacy.
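The reduction to matchings can be made concrete with a toy sketch (not the thesis's own algorithm): each anonymized query result constrains which sensitive values an individual may have, yielding a bipartite graph, and privacy holds when every individual retains at least k candidate values across all perfect matchings. The names and edges below are invented for illustration, and the brute-force enumeration is feasible only for tiny instances — which is precisely why the thesis resorts to polynomial-time approximation algorithms.

```python
from itertools import permutations

def candidate_values(individuals, values, edges):
    """For each individual, collect every sensitive value assigned to it
    in at least one perfect matching of the bipartite graph (brute force,
    feasible only for toy instances)."""
    possible = {p: set() for p in individuals}
    for perm in permutations(values):
        assignment = list(zip(individuals, perm))
        if all((p, v) in edges for p, v in assignment):  # a perfect matching
            for p, v in assignment:
                possible[p].add(v)
    return possible

# Invented toy instance: the anonymized results of a query sequence leave
# these (person, value) possibilities as edges.
individuals = ["Alice", "Bob", "Carol"]
values = ["flu", "cancer", "asthma"]
edges = {("Alice", "flu"), ("Alice", "cancer"),
         ("Bob", "flu"), ("Bob", "cancer"), ("Bob", "asthma"),
         ("Carol", "asthma"), ("Carol", "cancer")}

possible = candidate_values(individuals, values, edges)
# Privacy in the k-candidate sense: every individual keeps at least
# k = 2 indistinguishable sensitive values.
print(all(len(v) >= 2 for v in possible.values()))  # → True
```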
52

Towards better privacy preservation by detecting personal events in photos shared within online social networks / Vers une meilleure protection de la vie privée par la détection d'événements dans les photos partagées sur les réseaux sociaux

Raad, Eliana 04 December 2015 (has links)
De nos jours, les réseaux sociaux ont considérablement changé la façon dont les personnes prennent des photos qu’importe le lieu, le moment, le contexte. Plus que 500 millions de photos sont partagées chaque jour sur les réseaux sociaux, auxquelles on peut ajouter les 200 millions de vidéos échangées en ligne chaque minute. Plus particulièrement, avec la démocratisation des smartphones, les utilisateurs de réseaux sociaux partagent instantanément les photos qu’ils prennent lors des divers événements de leur vie, leurs voyages, leurs aventures, etc. Partager ce type de données présente un danger pour la vie privée des utilisateurs et les expose ensuite à une surveillance grandissante. Ajouté à cela, aujourd’hui de nouvelles techniques permettent de combiner les données provenant de plusieurs sources entre elles de façon jamais possible auparavant. Cependant, la plupart des utilisateurs des réseaux sociaux ne se rendent même pas compte de la quantité incroyable de données très personnelles que les photos peuvent renfermer sur eux et sur leurs activités (par exemple, le cas du cyberharcèlement). Cela peut encore rendre plus difficile la possibilité de garder l’anonymat sur Internet dans de nombreuses situations où une certaine discrétion est essentielle (politique, lutte contre la fraude, critiques diverses, etc.).Ainsi, le but de ce travail est de fournir une mesure de protection de la vie privée, visant à identifier la quantité d’information qui permettrait de ré-identifier une personne en utilisant ses informations personnelles accessibles en ligne. Premièrement, nous fournissons un framework capable de mesurer le risque éventuel de ré-identification des personnes et d’assainir les documents multimédias destinés à être publiés et partagés. Deuxièmement, nous proposons une nouvelle approche pour enrichir le profil de l’utilisateur dont on souhaite préserver l’anonymat. 
Pour cela, nous exploitons les évènements personnels à partir des publications des utilisateurs et celles partagées par leurs contacts sur leur réseau social. Plus précisément, notre approche permet de détecter et lier les évènements élémentaires des personnes en utilisant les photos (et leurs métadonnées) partagées au sein de leur réseau social. Nous décrivons les expérimentations que nous avons menées sur des jeux de données réelles et synthétiques. Les résultats montrent l’efficacité de nos différentes contributions. / Today, social networking has considerably changed the way people take pictures, anytime and anywhere. More than 500 million photos are uploaded and shared every day, along with more than 200 hours of video every minute. More particularly, with the ubiquity of smartphones, social network users now take photos of events in their lives, travels, experiences, etc. and instantly upload them online. Such public data sharing puts the users’ privacy at risk and exposes them to surveillance that is growing at a very rapid rate. Furthermore, new techniques are used today to extract publicly shared data and combine it with other data in ways never before thought possible. However, social network users do not realize the wealth of information that can be gathered from image data and used to track all their activities at every moment (e.g., the case of cyberstalking). Therefore, in many situations (such as politics, fraud fighting, cultural criticism, etc.), it becomes extremely hard to maintain individuals’ anonymity when the authors of the published data need to remain anonymous. Thus, the aim of this work is to provide a privacy-preserving constraint (de-linkability) to bound the amount of information that can be used to re-identify individuals using online profile information. Firstly, we provide a framework able to quantify the re-identification threat and to sanitize multimedia documents intended to be published and shared.
Secondly, we propose a new approach to enrich the profile information of the individuals to be protected. To this end, we exploit personal events in the individuals’ own posts as well as in those shared by their friends and contacts. Specifically, our approach is able to detect and link users’ elementary events using photos (and related metadata) shared within their online social networks. A prototype has been implemented, and several experiments were conducted to validate our different contributions.
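As an illustration of what "detecting and linking elementary events" from photo metadata might involve, the sketch below clusters photos into events by time and GPS proximity. The thresholds, field names, and coordinates are invented for this example; the thesis's actual event model is richer than this.

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
    return 2 * 6371 * asin(sqrt(a))

def detect_events(photos, max_gap=timedelta(hours=2), max_km=5.0):
    """Cluster time-sorted photos into elementary events: a new event
    starts whenever the time gap or the spatial jump from the previous
    photo exceeds the thresholds."""
    events = []
    for photo in sorted(photos, key=lambda p: p["time"]):
        if events:
            last = events[-1][-1]
            near = (photo["time"] - last["time"] <= max_gap and
                    haversine_km(last["lat"], last["lon"],
                                 photo["lat"], photo["lon"]) <= max_km)
            if near:
                events[-1].append(photo)
                continue
        events.append([photo])
    return events

# Invented metadata: two photos in Dijon, then one in Paris the next day.
photos = [
    {"time": datetime(2015, 7, 14, 10, 0), "lat": 47.32, "lon": 5.04},
    {"time": datetime(2015, 7, 14, 10, 30), "lat": 47.33, "lon": 5.05},
    {"time": datetime(2015, 7, 15, 9, 0), "lat": 48.86, "lon": 2.35},
]
print(len(detect_events(photos)))  # → 2
```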
53

Webová vizualizace a demonstrátor anonymních pověření / Web visualization and demonstrator of anonymous credentials

Chwastková, Šárka January 2021 (has links)
This thesis deals with attribute-based credentials with revocable anonymous credentials. The main focus of this work is the implementation of this scheme as a web application. The web application serves primarily as a visualization that shows the functionality of the scheme through animations, and also as a practical demonstrator. Data and cryptographic calculations for the individual system protocols are provided by a cryptographic application written in C that communicates with the created web application. The web application can also communicate with a connected smart card reader and a MultOS smart card, enabling the exchange of APDU commands and responses between the smart card and the C application.
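A minimal sketch of the APDU plumbing such a demonstrator relies on (short command/response APDUs in the ISO 7816-4 style); the AID and response bytes below are made up for illustration and are not from the thesis's applet:

```python
def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    """Serialize a short ISO 7816-4 command APDU: CLA INS P1 P2 [Lc data] [Le]."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data  # Lc followed by the data field
    if le is not None:
        apdu += bytes([le])                # expected response length
    return apdu

def parse_response(resp):
    """Split a response APDU into its payload and the status word SW1 SW2."""
    if len(resp) < 2:
        raise ValueError("response APDU must contain at least SW1 SW2")
    return resp[:-2], (resp[-2] << 8) | resp[-1]

# SELECT (INS 0xA4) a hypothetical applet AID, then check the status word;
# 0x9000 is the standard "success" status.
cmd = build_apdu(0x00, 0xA4, 0x04, 0x00, data=bytes.fromhex("A0000000010101"))
payload, sw = parse_response(bytes.fromhex("6F07840512345678909000"))
print(hex(sw))  # → 0x9000
```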
54

Korelace dat na vstupu a výstupu sítě Tor / Correlation of Inbound and Outbound Traffic of Tor Network

Coufal, Zdeněk January 2014 (has links)
Communication in public networks based on the IP protocol is not truly anonymous, because the source and destination IP address of each packet can be determined. Users who want to stay anonymous are forced to use anonymization networks such as Tor. If such a user is the target of lawful interception, this presents a problem for the interception systems, because they only see that the user communicated with an anonymization network and can merely suspect that a data stream at the output of the anonymization network belongs to the same user. The aim of this master's thesis was to design a correlation method to determine the dependence between data streams at the input and the output of the Tor network. The proposed method analyzes network traffic and compares characteristics of data streams extracted from metadata, such as the time of occurrence and the sizes of packets. The method specializes in correlating data flows of the HTTP protocol, specifically web server responses. It was tested on real data from the Tor network and successfully recognized dependencies between data flows.
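The core idea of correlating metadata on both sides of the network can be sketched as follows: bin each flow's packets into per-window byte counts and compute the Pearson correlation of the two series. The flows, window size, and threshold are invented; the thesis's method is tuned specifically to HTTP response patterns rather than this generic comparison.

```python
from math import sqrt

def bin_traffic(packets, window=1.0):
    """Aggregate (timestamp, size) packets into per-window byte counts."""
    if not packets:
        return []
    start = min(t for t, _ in packets)
    end = max(t for t, _ in packets)
    bins = [0.0] * (int((end - start) / window) + 1)
    for t, size in packets:
        bins[int((t - start) / window)] += size
    return bins

def pearson(xs, ys):
    """Pearson correlation of two series, truncated to equal length."""
    n = min(len(xs), len(ys))
    xs, ys = xs[:n], ys[:n]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Invented entry-side flow and a delayed, slightly padded exit-side copy.
inbound = [(0.1, 600), (0.4, 1500), (2.2, 1500), (2.3, 900), (5.0, 400)]
outbound = [(t + 0.35, s + 50) for t, s in inbound]  # constant network delay
score = pearson(bin_traffic(inbound), bin_traffic(outbound))
print(score > 0.9)  # → True: the flows correlate strongly
```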
55

Synthetic Graph Generation at Scale : A novel framework for generating large graphs using clustering, generative models and node embeddings / Storskalig generering av syntetiska grafer : En ny arkitektur för att tillverka stora grafer med hjälp av klustring, generativa modeller och nodinbäddningar

Hammarstedt, Johan January 2022 (has links)
The field of generative graph models has seen increased popularity during recent years as it allows us to model the underlying distribution of a network and thus recreate it. From allowing anonymization of sensitive information in social networks to data augmentation of rare diseases in the brain, the ability to generate synthetic data has multiple applications in various domains. However, most current methods face the bottleneck of trying to generate the entire adjacency matrix and are thus limited to graphs with less than tens of thousands of nodes. In contrast, large real-world graphs like social networks or transaction graphs can extend significantly beyond these boundaries. Furthermore, the current scalable approaches are predominantly based on stochasticity and do not capture local structures and communities. In this paper, we propose Graphwave Edge-Linking CELL or GELCELL, a novel three-step architecture for generating graphs at scale. First, instead of constructing the entire network, GELCELL partitions the data and generates each cluster separately, allowing for efficient and parallelizable training. Then, by encoding the nodes, it trains a classifier to predict the edges between the partitions to patch them together, creating a synthetic version of the original large graph. Although it does suffer from some limitations due to necessary constraints on the cluster sizes, the results showed that GELCELL, given optimized parameters, can produce graphs with reasonable accuracy on all data tested, with the largest having 400 000 nodes and 1 000 000 edges. / Generativa grafmodeller har sett ökad popularitet under de senaste åren eftersom det möjliggör modellering av grafens underliggande distribution, och vi kan på så sätt återskapa liknande kopior. 
Förmågan att generera syntetisk data har ett flertal applikationsområden i en mängd av områden, allt från att möjliggöra anonymisering av känslig data i sociala nätverk till att utöka mängden tillgänglig data av ovanliga hjärnsjukdomar. Dagens metoder har länge varit begränsade till grafer med under tiotusental noder, då dessa inte är tillräckligt skalbara, men grafer som sociala nätverk eller transaktionsgrafer kan sträcka sig långt utöver dessa gränser. Dessutom är de nuvarande skalbara tillvägagångssätten till största delen baserade på stokasticitet och fångar inte lokala strukturer och kluster. I denna rapport föreslår vi ”Graphwave Edge-Linking CELL” eller GELCELL, en trestegsarkitektur för att generera grafer i större skala. Istället för att återskapa hela grafen direkt så partitionerar GELCELL all data och genererar varje kluster separat, vilket möjliggör både effektiv och parallelliserbar träning. Vi kan sedan koppla samman grafen genom att koda noderna och träna en modell för att prediktera länkarna mellan kluster och återskapa en syntetisk version av originalet. Metoden kräver vissa antaganden gällande max-storleken på dess kluster men är flexibel och kan rymma domänkännedom om en specifik graf i form av informerad parameterinställning. Trots detta visar resultaten på varierade träningsdata att GELCELL, givet optimerade parametrar, är kapabel att generera grafer med godtycklig precision upp till den största beprövade grafen med 400 000 noder och 1 000 000 länkar.
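The three GELCELL steps (partition, per-cluster generation, edge-linking) can be caricatured in a few lines. Both the per-cluster generator and the link predictor below are crude stand-ins — an Erdős–Rényi sampler and a dot-product threshold on random embeddings — for the trained generative models and the edge classifier the thesis actually uses; node counts and parameters are invented.

```python
import random

random.seed(7)

def generate_cluster(nodes, p=0.3):
    """Stand-in for a learned generative model: an Erdos-Renyi sample over
    one partition's nodes (the real pipeline trains a model per cluster)."""
    return {(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
            if random.random() < p}

def link_clusters(emb, pairs, threshold=0.0):
    """Stand-in for the edge classifier: connect cross-cluster node pairs
    whose embedding dot product exceeds a threshold."""
    def dot(u, v):
        return sum(a * b for a, b in zip(emb[u], emb[v]))
    return {(u, v) for u, v in pairs if dot(u, v) > threshold}

# Step 1: a fixed partition of a small node set into two clusters.
clusters = [[0, 1, 2, 3], [4, 5, 6]]
# Step 2: generate each cluster independently (parallelizable in principle).
edges = set()
for c in clusters:
    edges |= generate_cluster(c)
# Step 3: patch the partitions together with predicted inter-cluster edges.
emb = {n: [random.gauss(0, 1) for _ in range(4)] for n in range(7)}
cross = [(u, v) for u in clusters[0] for v in clusters[1]]
edges |= link_clusters(emb, cross)
print(len(edges))
```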
56

Autonomous Priority Based Routing for Online Social Networks

Othman, Salem 14 June 2018 (has links)
No description available.
57

GARBLED COMPUTATION: HIDING SOFTWARE, DATAAND COMPUTED VALUES

Shoaib Amjad Khan (19199497) 27 July 2024 (has links)
This thesis presents an in-depth study and evaluation of a class of secure multiparty protocols that enable execution of a confidential software program $\mathcal{P}$ owned by Alice, on confidential data $\mathcal{D}$ owned by Bob, without revealing anything about $\mathcal{P}$ or $\mathcal{D}$ in the process. Our initial adversarial model is an honest-but-curious adversary, which we later extend to a malicious adversarial setting. Depending on the requirements, our protocols can be set up such that the output $\mathcal{P(D)}$ may only be learned by Alice, Bob, both, or neither (in which case an agreed-upon third party would learn it). Most of our protocols are run by only two online parties, which can be Alice and Bob, or alternatively two commodity cloud servers (in which case neither Alice nor Bob participates in the protocols' execution; they merely initialize the two cloud servers, then go offline). We implemented and evaluated some of these protocols as prototypes that we made available to the open-source community via GitHub. We report experimental findings that compare and contrast the viability of our various approaches and those that already exist. All our protocols achieve these goals without revealing anything other than upper bounds on the sizes of the program and data.
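The abstract does not spell out the constructions, but the flavor of computing on hidden values can be illustrated with a textbook Yao-style garbled AND gate, using a hash as the encryption primitive. This is a teaching sketch, not one of the protocols evaluated in the thesis: real schemes use point-and-permute bits instead of trial decryption, among many other refinements.

```python
import hashlib
import os
import random

def prf(k1, k2):
    """Hash of two input labels, used as the garbling 'encryption' pad."""
    return hashlib.sha256(k1 + k2).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate():
    """Garble one AND gate: two random 32-byte labels per wire (one per bit
    value) and a shuffled four-row table, each row the correct output label
    encrypted under one pair of input labels."""
    labels = {w: (os.urandom(32), os.urandom(32)) for w in ("a", "b", "c")}
    table = [xor(prf(labels["a"][x], labels["b"][y]), labels["c"][x & y])
             for x in (0, 1) for y in (0, 1)]
    random.shuffle(table)  # hide which row corresponds to which inputs
    return labels, table

def evaluate(table, ka, kb, out_labels):
    """Holding exactly one label per input wire, try each row; only the
    matching row decrypts to a valid output label."""
    pad = prf(ka, kb)
    for row in table:
        candidate = xor(row, pad)
        if candidate in out_labels:
            return candidate
    raise ValueError("no table row decrypted to a valid label")

labels, table = garble_and_gate()
out = evaluate(table, labels["a"][1], labels["b"][1], set(labels["c"]))
print(out == labels["c"][1])  # → True: evaluator learned AND(1,1)=1, not the inputs
```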
58

Porovnání přístupů ke generování umělých dat / Comparison of Approaches to Synthetic Data Generation

Šejvlová, Ludmila January 2017 (has links)
The diploma thesis deals with synthetic data and selected approaches to its generation, together with a practical data generation task. The goal of the thesis is to describe the selected approaches to data generation, capture their key advantages and disadvantages, and compare the individual approaches with each other. The practical part of the thesis describes the generation of synthetic data for teaching knowledge discovery in databases. The thesis includes a basic description of synthetic data and thoroughly explains the process of their generation. The approaches selected for further examination are random data generation, the statistical approach, data generation languages, and the ReverseMiner tool. The thesis also describes the practical usage of synthetic data and the suitability of each approach for certain purposes. Within this thesis, the educational data set Hotel SD was created using the ReverseMiner tool. The data contain relations discoverable with SD (set-difference) GUHA procedures.
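As an illustration of the "statistical approach" named above (and of its limits), the sketch below fits per-column empirical distributions and samples new rows from them. Column names and values are invented; the independent per-column sampling deliberately ignores inter-column relations, which tools such as ReverseMiner construct on purpose.

```python
import random
from collections import Counter

def fit_marginals(rows):
    """Statistical approach, simplest form: estimate each column's
    empirical value distribution from the real data."""
    cols = list(zip(*rows))
    return [Counter(col) for col in cols]

def sample_rows(marginals, n, rng):
    """Generate synthetic rows by sampling each column independently
    from its fitted marginal (inter-column correlations are lost)."""
    return [tuple(rng.choices(list(c.keys()), weights=list(c.values()))[0]
                  for c in marginals)
            for _ in range(n)]

# Invented miniature source table: (marital status, area).
real = [("single", "city"), ("single", "city"), ("married", "rural"),
        ("married", "city"), ("single", "rural")]
rng = random.Random(0)
synthetic = sample_rows(fit_marginals(real), 1000, rng)
share_single = sum(r[0] == "single" for r in synthetic) / len(synthetic)
print(abs(share_single - 0.6) < 0.1)  # marginal distribution roughly preserved
```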
59

Competition and Data Protection Law in Conflict : Data Protection as a Justification for Anti-Competitive Conduct and a Consideration in Designing Competition Law Remedies

Bornudd, David January 2022 (has links)
Competition and data protection law are two powerful regimes simultaneously shaping the use of digital information, which has given rise to new interactions between these areas of law. While most views on this intersection emphasize that competition and data protection law must work together, nascent developments indicate that these legal regimes may sometimes conflict. In the first place, firms faced with antitrust allegations are to an increasing extent invoking the need to protect the privacy of their users to justify their impugned conduct. Here, the conduct could either be prohibited by competition law despite data protection or justified under competition law because of data protection. In the EU, no such justification attempt has reached the court stage, and it remains unclear how an enforcer ought to deal with such a claim. In the second place, competition law can mandate a firm to provide access to commercially valuable personal data to its rivals under a competition law remedy. Where that is the case, the question arising in this connection is whether an enforcer can and should design the remedy in a way that aligns with data protection law. If so, the issue remains of how that ought to be done. The task of the thesis has been to explore these issues legally, economically, and coherently. The thesis has rendered four main conclusions. First, data protection has a justified role in EU competition law in two ways. On the one hand, enhanced data protection can increase the quality of a service and may thus be factored into the competitive analysis as a dimension of quality. On the other, data protection as a human right must be guaranteed in the application of competition law. Second, these perspectives can be squared with the criteria for justifying competition breaches, in that data protection can be invoked to exculpate a firm from antitrust allegations.
Third, in that context, the human rights dimension of data protection may entail that the enforcer must consider data protection even if it is not invoked. However, allowing data protection interests to override competition law in this manner is relatively inefficient as it may lead to less innovation, higher costs, and lower revenues. Fourth, the profound importance of data protection in the EU necessarily means that enforcers should accommodate data protection interests in designing competition law remedies which mandate access to personal data. This may be done in several ways, including requirements to anonymize data before providing access, or to oblige the firm to be compliant with data protection law in the process of providing access. The analysis largely confirms that anonymization is the preferable option.
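The remedy-design option discussed above — anonymizing personal data before granting rivals access — can be sketched as a k-anonymity check plus one generalization step. The attributes, records, and k value are invented for illustration; an actual remedy would require a full anonymization pipeline and a legal assessment.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """Check k-anonymity: every combination of quasi-identifier values
    must be shared by at least k rows."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

def generalize_age(rows, width=10):
    """One generalization step: replace exact ages by decade bands."""
    out = []
    for row in rows:
        r = dict(row)
        lo = (r["age"] // width) * width
        r["age"] = f"{lo}-{lo + width - 1}"
        out.append(r)
    return out

# Invented customer records a firm might be ordered to share with rivals.
users = [{"age": 34, "zip": "752", "spend": 120},
         {"age": 37, "zip": "752", "spend": 80},
         {"age": 36, "zip": "752", "spend": 95},
         {"age": 52, "zip": "113", "spend": 300},
         {"age": 58, "zip": "113", "spend": 210}]

print(is_k_anonymous(users, ["age", "zip"], k=2))                  # → False
print(is_k_anonymous(generalize_age(users), ["age", "zip"], k=2))  # → True
```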
60

Examining the Privacy Aspects and Cost-Quality Balance of a Public Sector Conversational Interface

Meier Ström, Theo, Vesterlund, Marcus January 2024 (has links)
This thesis explores the implementation of a conversational user interface for Uppsala Municipality, aimed at optimising the balance between cost of usage and quality when using large language models for public services. The central issue addressed is the effective integration of large language models, such as OpenAI's GPT-4, to enhance municipal services without compromising user privacy and data security. The solution developed involves a prototype that utilises a model chooser and prompt tuner, allowing the interface to adapt the complexity of responses based on user input. This adaptive approach reduces costs while maintaining high response quality. The results indicate that the prototype not only manages costs effectively, but also adheres to standards of data privacy and security. Clear information on data use and transparency improved user trust and understanding. In addition, strategies were effectively implemented to handle sensitive and unexpected input, improving overall data security. Overall, the findings suggest that this approach to implementing conversational user interfaces in public services is viable, offering valuable insights into the cost-effective and secure integration of language models in the public sector. The success of the prototype highlights its potential to improve future municipal services, underscoring the importance of transparency and user engagement in public digital interfaces. / Den här masteruppsatsen undersöker implementeringen av ett konversationsgränssnitt för Uppsala kommun, med målet att optimera balansen mellan kostnad och kvalitet vid användning av stora språkmodeller för den offentliga sektorn. Den centrala frågan som besvaras är hur stora språkmodeller, såsom OpenAI:s GPT-4, kan integreras för att förbättra kommunala tjänster utan att kompromissa med användarnas integritet och datasäkerhet. 
Den utvecklade lösningen innefattar en prototyp som använder en modellväljare och promptjusterare, vilket gör det möjligt för gränssnittet att anpassa svarens komplexitet baserat på användarens meddelande. Detta tillvägagångssätt reducerar kostnaderna samtidigt som en hög svarskvalitet bibehålls. Resultaten visar att prototypen inte bara hanterar kostnaderna effektivt, utan också upprätthåller standarder för datasekretess och säkerhet. Tydlig information om dataanvändning och transparens förbättrade avsevärt användarnas förtroende och förståelse. Dessutom implementerades strategier effektivt för att hantera känslig och oväntad data, vilket förbättrade den övergripande datasäkerheten. Sammanfattningsvis tyder resultaten på att detta tillvägagångssätt för implementering av konversationsgränssnitt i offentliga tjänster är möjligt och erbjuder lärdomar om kostnadseffektiv och säker integration av språkmodeller i offentlig sektor. Prototypens framgång påvisar dess potential att förbättra framtida kommunala tjänster, men lyfter också vikten av transparens och användarengagemang i offentliga digitala gränssnitt.
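A model chooser of the kind the thesis describes can be sketched as a routing function over a heuristic complexity score. The scoring rule, threshold, and model names below are placeholders of our own, not the prototype's actual configuration; a deployed system might instead use a learned classifier or the cheap model itself to triage messages.

```python
def complexity_score(message):
    """Crude heuristic for how demanding a user message is: word count,
    plus bonuses for questions and for invented domain keywords."""
    keywords = ("regulation", "policy", "appeal", "explain", "compare")
    score = len(message.split())
    score += 10 * message.count("?")
    score += 15 * sum(kw in message.lower() for kw in keywords)
    return score

def choose_model(message, threshold=40):
    """Route cheap queries to a small model and hard ones to a large one;
    the model names are placeholder identifiers."""
    if complexity_score(message) >= threshold:
        return "large-model"
    return "small-model"

print(choose_model("What are the library's opening hours?"))  # → small-model
print(choose_model("Can you compare the appeal process for building permits "
                   "with the policy for school placement decisions?"))  # → large-model
```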
