431

Protecting Online Privacy in the Digital Age: Carpenter v. United States and the Fourth Amendment's Third-Party Doctrine

Del Rosso, Cristina 01 January 2019 (has links)
The intent of this thesis is to examine the future of the third-party doctrine amid the proliferation of technology and the online data that surrounds us daily, specifically after the United States Supreme Court's decision in Carpenter v. United States. To better understand the Supreme Court's reasoning in that case, this thesis reviews the history of the third-party doctrine and its roots in United States v. Miller and Smith v. Maryland. A review of Fourth Amendment history and jurisprudence is also crucial to this thesis, as it is imperative that individuals not forfeit their constitutional guarantees for the benefit of living in a technologically advanced society. This requires an understanding of the modern-day functional equivalents of "papers" and "effects." Furthermore, this thesis ultimately answers the following question: why is it legally significant that the Fourth Amendment protect at least some data generated by technologies our forefathers could never have imagined? Looking to the future, this thesis contemplates how to move forward in the technology era. It scrutinizes the continued relevance of the third-party doctrine given the rise of technology and the enormous amount of information about us held by third parties. The third-party doctrine may once have been good law, but that time has passed. It is time for the doctrine to be abolished so the Fourth Amendment can join the 21st century.
432

Effects of Social Network Sites on Social Capital and Awareness of Privacy: A Study of Chinese and U.S. College Students' Usage of Social Network Sites

Sun, Tianyi January 2014 (has links)
No description available.
433

Personal Privacy and Security in the Internet of Sports / Den personliga integriteten och säkerheten i Internet of Sports

Röstin, Simon, Persson, Patrik January 2017 (has links)
The interest in personal health is growing across all demographic groups. People want better control over their personal health and are turning to more and more aids to get better answers. With the digital society close at hand, new products and services are appearing that help users gain greater knowledge of and control over their personal health. Products and services that aim to do this are categorized as the Internet of Sports. As more users sign up for and use these products and services, the amount of data they gather grows. Is the data these companies collect transferred securely from the user to the companies, and do the companies protect the user's privacy? The purpose of this thesis is to examine selected companies within the Internet of Sports and to see whether it is possible to access users' personal data through man-in-the-middle attacks.
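As context for the transport-security question this abstract raises, the sketch below shows one minimal way to probe whether a service endpoint enforces TLS and presents a valid certificate. It is not the thesis's methodology (which relies on man-in-the-middle attacks), and the hostname is purely hypothetical.

```python
import socket
import ssl

def probe_tls(host: str, port: int = 443) -> None:
    """Connect to `host` and report whether its TLS certificate validates."""
    context = ssl.create_default_context()  # verifies certificate chain and hostname
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                common_name = dict(x[0] for x in cert["subject"]).get("commonName")
                print(f"{host}: TLS {tls.version()}, certificate issued to {common_name}")
    except ssl.SSLCertVerificationError as err:
        print(f"{host}: certificate verification FAILED ({err.verify_message})")
    except OSError as err:
        print(f"{host}: connection failed ({err})")

# Hypothetical endpoint of a fitness-tracking service.
probe_tls("api.example-fitness-tracker.com")
```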
434

A Study on Private and Secure Federated Learning / プライベートで安全な連合学習

Kato, Fumiyuki 25 March 2024 (has links)
Kyoto University / New-system, course-based doctorate / Doctor of Informatics / Degree No. Kō 25427 / Informatics No. 865 / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief examiner) Prof. Takayuki Ito, Prof. Tomohiro Kuroda, Prof. Yasuo Okabe, Masatoshi Yoshikawa (Professor Emeritus, Kyoto University) / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
435

Differentially Private Federated Learning Algorithms for Sparse Basis Recovery

Ajinkya K Mulay (18823252) 14 June 2024 (has links)
Sparse basis recovery is an important learning problem when the number of model dimensions (p) is much larger than the number of samples (n). However, there has been little work that studies sparse basis recovery in the Federated Learning (FL) setting, where the Differential Privacy (DP) of the client data must also be simultaneously protected. Notably, the performance guarantees of existing DP-FL algorithms (such as DP-SGD) will degrade significantly when the system is ill-determined (i.e., p ≫ n), and thus they will fail to accurately learn the true underlying sparse model. The goal of my thesis is therefore to develop DP-FL sparse basis recovery algorithms that can recover the true underlying sparse basis provably accurately even when p ≫ n, yet still guaranteeing the differential privacy of the client data.

During my PhD studies, we developed three DP-FL sparse basis recovery algorithms for this purpose. Our first algorithm, SPriFed-OMP, based on the Orthogonal Matching Pursuit (OMP) algorithm, can achieve high accuracy even when n = O(√p) under the stronger Restricted Isometry Property (RIP) assumption for least-square problems. Our second algorithm, Humming-Bird, based on a carefully modified variant of the Forward-Backward Algorithm (FoBA), can achieve differentially private sparse recovery for the same setup while requiring the much weaker Restricted Strong Convexity (RSC) condition. We further extend Humming-Bird to support loss functions beyond least-square satisfying the RSC condition. To the best of our knowledge, these are the first DP-FL results guaranteeing sparse basis recovery in the p ≫ n setting.
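To illustrate the sparse basis recovery setting described above, the sketch below runs plain, centralized Orthogonal Matching Pursuit on a synthetic p ≫ n problem. It is only a baseline illustration: the thesis's SPriFed-OMP additionally adds federation and differential privacy, neither of which is attempted here, and all parameters are made up for the example.

```python
import numpy as np

def omp(X, y, k):
    """Plain Orthogonal Matching Pursuit: recover a k-sparse coefficient vector.

    Greedily picks the column of X most correlated with the current residual,
    then refits by least squares on the selected support.
    """
    n, p = X.shape
    support, residual = [], y.copy()
    beta = np.zeros(p)
    for _ in range(k):
        correlations = X.T @ residual
        j = int(np.argmax(np.abs(correlations)))  # most correlated column
        if j not in support:
            support.append(j)
        # Least-squares refit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        beta = np.zeros(p)
        beta[support] = coef
        residual = y - X @ beta
    return beta, sorted(support)

# Synthetic p >> n example: 1000 features, 100 samples, 5 true nonzeros.
rng = np.random.default_rng(0)
n, p, k = 100, 1000, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = np.zeros(p)
beta_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k) + 3.0
y = X @ beta_true + 0.01 * rng.standard_normal(n)

beta_hat, support = omp(X, y, k)
print("recovered support:", support)
print("true support:     ", sorted(np.flatnonzero(beta_true)))
```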
436

Consumer Motivation and the Privacy Paradox

Merians Penaloza, Diane, 0000-0002-1362-4192 05 1900 (has links)
There is a gap between intention and action that people experience when faced with protecting their digital data privacy. Known as the privacy paradox, it is the idea that what a person says they believe (that protecting their data privacy is paramount) is not reflected in how they act (relinquishing their data privacy). In other words, what people express about their data privacy is often at odds with how frequently they relinquish it. This research examines the privacy paradox and consists of two studies, one qualitative and one quantitative. First, focus groups were held, from which a typology of the words and phrases consumers use about their data privacy was proposed. Second, an experiment using Likert scales and Pareto-optimal choice-based conjoint analysis was built on that typology, giving insight into what consumers perceive as motivators for protecting or relinquishing their data privacy. The contribution fills a gap in the existing literature on the privacy paradox through an analysis of behavior. / Business Administration/Marketing
437

Um modelo de negociação de privacidade para sistemas de recomendação social

Rocha, Ânderson Kanegae Soares 27 February 2015 (has links)
The high rate of growth and variety of information available on the Internet can overwhelm users and keep them from making the best decisions. In this context, social recommender systems play an important role in helping users cope with the effects of information overload. However, these systems' need to collect data from their users' social context raises privacy concerns and may discourage their use. This dissertation therefore presents a privacy negotiation model for social recommender systems that lets users control their own privacy, approached from the perspective of computer science. The user can decide to grant access to their data in light of the personalization benefits the system offers in exchange, and is not forced to accept the privacy policies in full. In this model, privacy control is provided through a user interface design pattern that applies privacy negotiation techniques. The SocialRecSys social recommender system implements this model and was used in an evaluation with 32 users. The results showed that users are not satisfied with traditional interfaces and that the model can better handle the potentially different privacy preferences of each user. The results also indicated high usability of the model's user interfaces, which increase the systems' flexibility regarding privacy preference settings without making them harder to use. The implementation shows that this model is an alternative for reducing the privacy concerns of social recommender system users by increasing flexibility and giving users a better understanding of recommender systems, so that they may feel encouraged to share their data and take advantage of the personalization benefits. / Financiadora de Estudos e Projetos
438

Why do I behave like this? : A study of how attitude toward and knowledge of data collection can affect online behaviour / Varför beter jag mig såhär? : En studie om hur attityd och kunskap kring datainsamling kan påverka online-beteende

Rezai, Farhad, Ånimmer, Pontus January 2019 (has links)
During the 21st century, the use of digital services has exploded. Companies are getting better at collecting and using data for their own gain, and the increased collection of data has contributed to heightened privacy concerns among many users. This paper examines how a person's attitude toward and knowledge of data collection can affect their online behaviour. To examine this, a questionnaire survey was conducted. The study's results indicate a paradoxical behaviour among the respondents: most have a negative view of data collection, yet do not take measures that reflect this attitude. Furthermore, the results suggest that knowledge about data collection acts as the primary motivator for taking measures to protect one's data.   Keywords: Data collection, online behaviour, privacy paradox, GDPR, attitude, knowledge, privacy concerns
439

Nymbler: Privacy-enhanced Protection from Abuses of Anonymity

Henry, Ryan January 2010 (has links)
Anonymous communications networks help to solve the real and important problem of enabling users to communicate privately over the Internet. However, by doing so, they also introduce an entirely new problem: How can service providers on the Internet---such as websites, IRC networks and mail servers---allow anonymous access while protecting themselves against abuse by misbehaving anonymous users? Recent research efforts have focused on using anonymous blacklisting systems (also known as anonymous revocation systems) to solve this problem. As opposed to revocable anonymity systems, which enable some trusted third party to deanonymize users, anonymous blacklisting systems provide a way for users to authenticate anonymously with a service provider, while enabling the service provider to revoke access from individual misbehaving anonymous users without revealing their identities. The literature contains several anonymous blacklisting systems, many of which are impractical for real-world deployment. In 2006, however, Tsang et al. proposed Nymble, which solves the anonymous blacklisting problem very efficiently using trusted third parties. Nymble has inspired a number of subsequent anonymous blacklisting systems. Some of these use fundamentally different approaches to accomplish what Nymble does without using third parties at all; so far, these proposals have all suffered from serious performance and scalability problems. Other systems build on the Nymble framework to reduce Nymble's trust assumptions while maintaining its highly efficient design. The primary contribution of this thesis is a new anonymous blacklisting system built on the Nymble framework---a nimbler version of Nymble---called Nymbler. We propose several enhancements to the Nymble framework that facilitate the construction of a scheme that minimizes trust in third parties. We then propose a new set of security and privacy properties that anonymous blacklisting systems should possess to protect: 1) users' privacy against malicious service providers and third parties (including other malicious users), and 2) service providers against abuse by malicious users. We also propose a set of performance requirements that anonymous blacklisting systems should meet to maximize their potential for real-world adoption, and formally define some optional features in the anonymous blacklisting systems literature. We then present Nymbler, which improves on existing Nymble-like systems by reducing the level of trust placed in third parties, while simultaneously providing stronger privacy guarantees and some new functionality. It avoids dependence on trusted hardware and unreasonable assumptions about non-collusion between trusted third parties. We have implemented all key components of Nymbler, and our measurements indicate that the system is highly practical. Our system solves several open problems in the anonymous blacklisting systems literature, and makes use of some new cryptographic constructions that are likely to be of independent theoretical interest.
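To give a concrete sense of the hash-chain mechanism underlying Nymble-style systems, the sketch below derives a per-period pseudonym ("nymble") chain from a per-user, per-service seed: revealing the seed for a given period lets the service provider compute and blacklist all later nymbles without ever learning the user's identity. This is a simplified illustration of the general idea, not the Nymbler protocol itself, which replaces this trusted-issuer setup with stronger cryptographic constructions; the key names and parameters here are purely illustrative.

```python
import hashlib
import hmac

def evolve_seed(seed: bytes) -> bytes:
    """Advance the per-period seed: s_{i+1} = H('evolve' || s_i)."""
    return hashlib.sha256(b"evolve" + seed).digest()

def nymble(seed: bytes) -> bytes:
    """Derive the presentable pseudonym for a period: nymble_i = H('nymble' || s_i)."""
    return hashlib.sha256(b"nymble" + seed).digest()

def nymble_chain(initial_seed: bytes, periods: int) -> list[bytes]:
    """All nymbles a user would present over `periods` consecutive time periods."""
    chain, seed = [], initial_seed
    for _ in range(periods):
        chain.append(nymble(seed))
        seed = evolve_seed(seed)
    return chain

# Illustrative setup: a trusted issuer derives the user's initial seed for one
# service provider and linkability window (names and key are hypothetical).
issuer_key = b"issuer-secret-key"
initial_seed = hmac.new(issuer_key, b"user-42|example-service|window-7", hashlib.sha256).digest()

user_nymbles = nymble_chain(initial_seed, periods=10)

# If the user misbehaves in period 3, the issuer reveals seed_3 to the provider,
# which can then recompute nymbles 3..9 and blacklist them -- but cannot run the
# one-way chain backwards to link earlier activity or recover the user's identity.
seed_3 = initial_seed
for _ in range(3):
    seed_3 = evolve_seed(seed_3)
blacklisted = nymble_chain(seed_3, periods=7)
assert blacklisted == user_nymbles[3:]
print("blacklisted nymbles for periods 3-9:", [n.hex()[:16] for n in blacklisted])
```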
