61

Research in methods for achieving secure voice anonymization : Evaluation and improvement of voice anonymization techniques for whistleblowing / Forskning i metoder för säker röstanonymisering : Utvärdering och förbättring av röstanonymiseringstekniker för visselblåsning

Hellman, Erik, Nordstrand, Mattias January 2022
Safe whistleblowing within companies could contribute to a more transparent and open society, and keeping the whistleblower safe is key. This need has led to a new EU Whistleblowing Directive, which requires every organization with more than 249 employees to provide an internal channel for whistleblowing by 17 July 2022. A whistleblowing service within an organization should provide secure communication for the organization and its employees. One way to make whistleblowing more accessible is to support verbal reporting, for example by recording and sending voice messages. However, ensuring that the speaker is secure and can feel anonymous is difficult, since speech varies between individuals: accent, pitch, and speaking rate are examples of factors by which a speaker can be identified. Common forms of voice anonymization, such as those heard on the news, can often be backtracked or otherwise deanonymized so that the speaker's identity is revealed, especially by people who know the speaker. Many developing technologies, such as machine learning, could be used to greatly improve either anonymization or deanonymization. However, greater anonymity often comes at the cost of intelligibility and sometimes the naturalness of the voice. We therefore studied and evaluated a number of anonymization methods with respect to anonymity, intelligibility, and overall user-friendliness. The aim was to map which anonymization methods are suitable for whistleblowing and to implement proofs of concept of such anonymizers. The results show differences between anonymization methods: some perform better than others, but in different ways, so the method should be selected according to the perceived threat. We designed working proofs of concept that can be used in a whistleblowing service and describe when each solution could be used. Our work shows ways toward more secure whistleblowing and will serve as a basis for future work and implementation at the host company Nebulr.
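As a concrete illustration of one of the simpler techniques such a study evaluates, the following Python sketch applies a fixed pitch shift to a recording. The library choice (librosa and soundfile), the file names, and the shift amount are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of naive pitch-shift voice anonymization, one of the
# classic techniques this kind of study evaluates. File names and the
# shift amount are illustrative; the thesis does not prescribe them.
import librosa
import soundfile as sf

def anonymize_pitch(in_path: str, out_path: str, semitones: float = 4.0) -> None:
    """Shift the pitch of a recording by a fixed number of semitones.

    Note: a constant pitch shift preserves speaking rate and accent,
    so it can often be reversed or defeated by people who know the
    speaker, exactly the weakness pointed out above.
    """
    signal, sample_rate = librosa.load(in_path, sr=None)  # keep native rate
    shifted = librosa.effects.pitch_shift(signal, sr=sample_rate, n_steps=semitones)
    sf.write(out_path, shifted, sample_rate)

anonymize_pitch("report.wav", "report_anonymized.wav", semitones=4.0)
```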
62

Decentralized Large-Scale Natural Language Processing Using Gossip Learning / Decentraliserad Storskalig Naturlig Språkbehandling med Hjälp av Skvallerinlärning

Alkathiri, Abdul Aziz January 2020
The field of Natural Language Processing (NLP) in machine learning has seen rising popularity and use in recent years. The nature of NLP, which deals with natural human language and computers, has led to the research and development of many algorithms that produce word embeddings, one of the most widely used of which is Word2Vec. Given the abundance of data generated by users and organizations and the complexity of machine learning and deep learning models, training on a single machine becomes unfeasible. Advances in distributed machine learning offer a solution to this problem. Unfortunately, for reasons of data privacy and regulation, in some real-life scenarios the data must not leave its local machine. This limitation has led to the development of techniques and protocols that are massively parallel and data-private, the most popular of which is federated learning. Due to its centralized nature, however, federated learning still poses some security and robustness risks, which has led to the development of massively parallel, data-private, decentralized approaches such as gossip learning. In the gossip learning protocol, each node in the network periodically chooses a random peer for information exchange, which eliminates the need for a central node. This research tests the viability of gossip learning for large-scale, real-world applications. In particular, it focuses on the implementation and evaluation of a Natural Language Processing application using gossip learning. The results show that applying Word2Vec in a gossip learning framework is viable and yields results comparable to its non-distributed, centralized counterpart across various scenarios, with an average loss of quality of 6.904%.
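The gossip step described above, where each node periodically picks a random peer and merges models, can be sketched in a few lines of Python. The network size, vector dimension, and plain parameter averaging are illustrative assumptions; the thesis applies the idea to Word2Vec training rather than to raw vectors.

```python
# Minimal numpy simulation of a gossip averaging round: each node picks
# a random peer and the pair averages their model parameters, so the
# network converges without a central server. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 10, 8
models = rng.normal(size=(num_nodes, dim))  # each row: one node's parameters

for _ in range(100):                        # gossip rounds
    for node in range(num_nodes):
        peer = rng.integers(num_nodes)
        while peer == node:                 # never gossip with yourself
            peer = rng.integers(num_nodes)
        merged = (models[node] + models[peer]) / 2.0
        models[node] = models[peer] = merged  # both sides keep the merge

# All nodes end up close to the global mean; no central aggregator needed.
print(np.std(models, axis=0).max())
```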
63

Antecedents and Outcomes of Perceived Creepiness in Online Personalized Communications

Stevens, Arlonda M. 01 June 2016
No description available.
64

Exploring User Trust in Natural Language Processing Systems : A Survey Study on ChatGPT Users

Aronsson Bünger, Morgan January 2024
ChatGPT has become a popular technology and has gained a considerable user base because of its power to generate effective responses to users' requests. However, as ChatGPT's popularity has grown and as other natural language processing systems (NLPs) are developed and adopted, several concerns have been raised about the technology that could have implications for user trust. Because trust plays a central role in users' willingness to adopt artificial intelligence (AI) systems, and because there is no consensus in research on what facilitates trust, it is important to conduct more research identifying the factors that affect user trust in AI systems, especially modern technologies such as NLPs. The aim of this study was therefore to identify the factors that affect user trust in NLPs. Findings from the literature on trust and artificial intelligence indicated that there may be a relationship between trust and transparency, explainability, accuracy, reliability, automation, augmentation, anthropomorphism, and data privacy. These factors were studied quantitatively together in order to uncover what affects user trust in NLPs. The results indicated that transparency, accuracy, reliability, automation, augmentation, anthropomorphism, and data privacy all have a positive impact on user trust in NLPs, which both supports and contradicts previous findings in the literature.
65

Our Humanity Exposed : Predictive Modelling in a Legal Context

Greenstein, Stanley January 2017
This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour. Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly, mainly because individuals spend an increasing portion of their lives within the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits. Predictive modelling is a powerful tool, which private companies are increasingly using to identify business risks and opportunities. Predictive models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content they see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy. This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU). Considering the shortcomings of traditional legal instruments, a strategy entitled 'empowerment' is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?
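To make the technology concrete, the sketch below trains a toy classifier of the general kind the abstract describes. Every feature name, label, and number is invented for illustration; no real scoring system is reproduced here.

```python
# Entirely hypothetical toy illustration of a predictive model trained on
# behavioural data and used to score a new individual, e.g. for a credit
# decision. Features and "ground truth" are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Columns: [hours online per day, purchases per month, social posts per week]
behaviour = rng.uniform(0, 10, size=(200, 3))
granted_credit = (behaviour[:, 1] > 5).astype(int)  # invented label

model = LogisticRegression().fit(behaviour, granted_credit)

new_person = np.array([[2.5, 7.0, 1.0]])
print(model.predict_proba(new_person)[0, 1])  # the score driving the decision
```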
66

Data Protection in Transit and at Rest with Leakage Detection

Ulybyshev, Denis A. 15 May 2019
In service-oriented architecture, services can communicate and share data among themselves. This thesis presents a solution that allows detecting several types of data leakages made by authorized insiders to unauthorized services. My solution provides role-based and attribute-based access control for data, so that each service can access only those data subsets for which it is authorized, considering a context and service attributes such as the security level of the web browser and the trust level of the service. My approach provides data protection in transit and at rest for both centralized and peer-to-peer service architectures. The methodology ensures confidentiality and integrity of data, including data stored in an untrusted cloud. In addition to protecting data against malicious or curious cloud or database administrators, it supports running searches through encrypted data using SQL queries and building analytics over encrypted data. My solution is implemented in the "WAXEDPRUNE" (Web-based Access to Encrypted Data Processing in Untrusted Environments) project, funded by the Northrop Grumman Cybersecurity Research Consortium. The WAXEDPRUNE methodology is illustrated in this thesis for two use cases: a Hospital Information System with secure storage and exchange of Electronic Health Records, and a Vehicle-to-Everything communication system with secure exchange of vehicle and driver data, as well as data on road events and road hazards.

To help investigate data leakage incidents in service-oriented architecture, the integrity of provenance data needs to be guaranteed. For that purpose, I integrate WAXEDPRUNE with an IBM Hyperledger Fabric blockchain network, so that every data access, transfer, or update is recorded in a public blockchain ledger, is non-repudiable, and can be verified at any time in the future. The work on this project, called "Blockhub," is in progress.
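A minimal sketch of the role- and attribute-based filtering idea follows. The roles, field names, and trust thresholds are hypothetical; the actual WAXEDPRUNE mechanism additionally keeps the data subsets encrypted.

```python
# Hedged sketch of role- and attribute-based data filtering: a service
# receives only the data subsets it is authorized for, given its role
# and attributes (e.g. trust level). All names here are illustrative.
from dataclasses import dataclass

@dataclass
class ServiceContext:
    role: str            # e.g. "doctor", "insurance"
    trust_level: int     # e.g. 0 (untrusted) to 3 (fully trusted)

RECORD = {
    "patient_id": "p-001",
    "diagnosis": "...",      # sensitive field
    "billing_code": "...",   # less sensitive field
}

# Which fields each role may see, and the minimum trust level required.
POLICY = {
    "doctor":    {"fields": {"patient_id", "diagnosis", "billing_code"}, "min_trust": 2},
    "insurance": {"fields": {"patient_id", "billing_code"}, "min_trust": 1},
}

def visible_subset(record: dict, ctx: ServiceContext) -> dict:
    rule = POLICY.get(ctx.role)
    if rule is None or ctx.trust_level < rule["min_trust"]:
        return {}  # unauthorized services see nothing
    return {k: v for k, v in record.items() if k in rule["fields"]}

print(visible_subset(RECORD, ServiceContext(role="insurance", trust_level=1)))
```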
67

Bezpečnost jako významný faktor rozvoje cestovního ruchu v České republice / Safety as an important factor for tourism development in the Czech Republic

Šteflová, Lucie January 2011
The main objective of this diploma thesis is to analyse the tourism safety situation and the CzechTourism hypothesis that the Czech Republic is a safe destination to visit. It focuses on questions of tourism safety and security and their main forms, and on various international safety analyses and reports, with emphasis on the situation in the Czech Republic. The CzechTourism hypothesis is tested by means of a direct survey among foreigners, and the results point to the potential development of the tourism safety situation in the Czech Republic. Finally, the evolution of the situation in the Czech Republic is tracked according to different tourism and peace indicators, and the direct dependence between "safe country" and "number of arrivals" is investigated.
68

Real-time forecasting of dietary habits and user health using Federated Learning with privacy guarantees

Horchidan, Sonia-Florina January 2020
Modern health self-monitoring devices and applications, such as Fitbit and MyFitnessPal, empower users to take concrete actions and set fitness and lifestyle goals based on their recorded trends and statistics. Predicting such trends helps in reaching long-term targets, as individuals can adjust their diets and habits at any point to guarantee success. The design and implementation of such a system, one that also respects user privacy, is the main objective of our work. This application is modelled as a time-series forecasting problem: given the historical data of users, we aim to predict their eating and lifestyle habits in real time. We apply the federated learning paradigm to our use case because of the highly distributed nature of our data and the privacy concerns around such sensitive recorded information. However, federated learning from heterogeneous sequences of data can be challenging, as even state-of-the-art machine learning techniques for time-series forecasting can encounter difficulties when learning from very irregular data sequences. Specifically, in the proposed healthcare scenario, the machine learning algorithms might fail to cater to users with unique dietary patterns. In this work, we implement a two-step streaming clustering mechanism and group clients that exhibit similar eating and fitness behaviours. The conducted experiments show that learning federatively in this context can achieve very high prediction accuracy, as our predictions are no more than 0.025% away from the ground-truth value with respect to the range of each feature. Training separate models for each group of users is shown to be beneficial, especially in terms of training time, but it is highly dependent on the parameters used for the models and the training process. Our experiments conclude that the configuration used for the general federated model cannot be applied to the clusters of data; however, a decrease in prediction error of more than 45% can be achieved, given that the parameters are optimized for each case. Lastly, this work tackles the problem of data privacy by applying state-of-the-art differential privacy techniques. Our empirical study shows that noising the gradients sent to the server is unsuitable for small datasets and cancels out the benefits obtained by the prior clustering of users. On the other hand, noising the training data achieves remarkable results, obtaining a differential privacy level corresponding to an epsilon value of 0.1 with an increase in the observed mean absolute error by a factor of only 0.21.
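A minimal sketch of noising the training data, the variant the abstract reports working best (epsilon = 0.1), follows. The use of the Laplace mechanism and the sensitivity value are assumptions for illustration; the thesis's exact mechanism may differ.

```python
# Sketch of the Laplace mechanism applied to training data. Smaller
# epsilon means a stronger privacy guarantee but noisier data, and
# therefore a larger hit to prediction accuracy, as reported above.
import numpy as np

def laplace_noise(data: np.ndarray, epsilon: float, sensitivity: float = 1.0,
                  seed: int = 0) -> np.ndarray:
    """Add Laplace(sensitivity / epsilon) noise to each entry."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon  # assumed per-feature sensitivity of 1.0
    return data + rng.laplace(loc=0.0, scale=scale, size=data.shape)

features = np.random.default_rng(1).uniform(size=(32, 4))  # toy client data
private_features = laplace_noise(features, epsilon=0.1)
print(private_features.shape)
```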
69

Privacy enforcement with data owner-defined policies

Scheffler, Thomas January 2013
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user. Data privacy continues to be a very important topic, as our dependency on electronic communication keeps growing and private data is shared between multiple devices, users, and locations. The growing amount and the ubiquitous availability of personal private data increase the likelihood of data misuse. Early privacy protection techniques, such as anonymous email and payment systems, focused on data avoidance and anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably, and in most cases the data owner has no control over the further distribution and use of personal private data. Previous efforts to integrate privacy awareness into data processing workflows have focused on extending existing access control frameworks with privacy-aware functions or have analysed specific individual problems, such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist that can be studied to prove their effectiveness. Second-level issues that stem from the practical application of the implemented mechanisms, such as usability, life-time data management, and changes in trustworthiness, have received very little attention so far, mainly because they require actual implementations to be studied.

Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released. Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision: submit or withhold his or her personal data based on the provided policy. We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC was proposed by McCollum et al. as an alternative access control mechanism that leaves the authority over access decisions with the originator of the data. The data owner is given control over the release policy for his or her personal data and can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow deterministic policy evaluation by different entities. The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont et al. and others. We developed a unique policy combination approach that takes usability aspects of the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: a Default Policy provides basic privacy protection if no specific rules have been entered by the data owner; an Owner Policy allows the data owner to customise the default policy; and a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies which, for example, would exclude him or her from further access to the private data. The combined evaluation of these three policy parts yields the necessary access decision.

The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation. We started with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework: privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager. When we later extended our work to implement server-side protection mechanisms, we found several drawbacks to privacy enforcement through the Java Security Framework. We solved this problem by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications, providing a way to enforce data owner-defined privacy policies for business applications.
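The three-part policy combination can be sketched as follows. The rule keys and actions are hypothetical illustrations; the thesis expresses its actual policies in a dedicated policy language and enforces them in Java, not in Python.

```python
# Sketch of the three-part policy combination described above: the safety
# policy (highest priority) overrides owner rules, which in turn override
# the defaults. All rule names here are invented for the example.
DENY, ALLOW = "deny", "allow"

default_policy = {"read": DENY, "share": DENY}   # baseline protection
owner_policy   = {"read": ALLOW}                 # owner's customization
safety_policy  = {"owner_read": ALLOW}           # owner can't lock self out

def decide(action: str, requester_is_owner: bool) -> str:
    # Safety policy wins: the owner always keeps access to their own data.
    if requester_is_owner and action == "read":
        return safety_policy["owner_read"]
    # Owner policy overrides the default; the default is the fallback.
    if action in owner_policy:
        return owner_policy[action]
    return default_policy.get(action, DENY)

assert decide("read", requester_is_owner=False) == ALLOW
assert decide("share", requester_is_owner=False) == DENY
```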
70

Data ownership and interoperability for a decentralized social semantic web

Sambra, Andrei Vlad 19 November 2013
Ensuring personal data ownership and interoperability for decentralized social Web applications is currently a debated topic, especially when taking into consideration the aspects of privacy and access control. Since users' data are such an important asset of the current business models of most social Websites, companies have no incentive to share data among each other or to offer users real ownership of their own data in terms of control and transparency of data usage. We therefore concluded that it is important to improve the social Web in such a way that it allows for viable business models while still providing increased data ownership and data interoperability compared to the current situation. To this end, we have focused our research on three different topics: identity, authentication, and access control. First, we tackle the subject of decentralized identity by proposing a new Web standard called "Web Identity and Discovery" (WebID), which offers a simple and universal identification mechanism that is distributed and openly extensible. Next, we move to the topic of authentication, where we propose WebID-TLS, a decentralized authentication protocol that enables secure, efficient, and user-friendly authentication on the Web by allowing people to log in using client certificates, without relying on Certification Authorities. We also extend the WebID-TLS protocol to offer delegated authentication and access delegation. Finally, we present our last contribution, the Social Access Control Service, which protects the privacy of Linked Data resources generated by users (e.g. profile data, wall posts, conversations, etc.) by applying two social metrics: the "social proximity distance" and "social contexts".
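An illustrative sketch of the "social proximity distance" metric follows: access is granted only if the requester is within a maximum number of hops of the owner in the social graph. The graph data and the threshold are invented for the example; the thesis applies this to Linked Data resources.

```python
# Sketch of social-proximity access control over a toy social graph
# (e.g. built from foaf:knows links). Names and limits are illustrative.
from collections import deque

GRAPH = {  # adjacency list: who "knows" whom
    "alice": ["bob", "carol"],
    "bob": ["alice", "dave"],
    "carol": ["alice"],
    "dave": ["bob"],
}

def proximity(owner: str, requester: str) -> float:
    """Shortest-path distance between two people (inf if unconnected)."""
    queue, seen = deque([(owner, 0)]), {owner}
    while queue:
        person, dist = queue.popleft()
        if person == requester:
            return dist
        for friend in GRAPH.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return float("inf")

def may_access(owner: str, requester: str, max_distance: int = 2) -> bool:
    return proximity(owner, requester) <= max_distance

assert may_access("alice", "dave")          # two hops away: allowed
assert not may_access("carol", "dave", 1)   # too distant for this resource
```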
