1.
Bör du v(AR)a rädd för framtiden? : En studie om The Privacy Paradox och potentiella integritetsrisker med Augmented Reality / Should you be sc(AR)ed of the future? : A study about The Privacy Paradox and potential risks with Augmented Reality
Madsen, Angelica; Nymanson, Carl. January 2021.
In a time when digitalization is more widespread than ever, the amount of data collected and shared online is increasing. As new technologies develop, new challenges for privacy arise. An active online user most likely also uses one or more social media platforms, where the purpose often involves sharing information with others. Since Augmented Reality is supported more and more frequently in some of the biggest social media applications, the purpose of this study was to investigate potential privacy concerns with Augmented Reality. The study's approach consisted of an empirical data collection to create a theoretical framework for the study. Based on this, a digital survey and interviews were conducted to further investigate users' online behavior and The Privacy Paradox. The results of the survey confirmed The Privacy Paradox and gave a better understanding of how users act through digital channels. The study addresses different aspects of privacy concerns, such as terms of service, privacy policies, data brokers, future consequences and what the technology enables. It concludes that users, businesses and today's technology allow more sensitive information to be extracted through a data breach. Although no data breach rooted in Augmented Reality had occurred prior to this study, it may only be a matter of time before one does.
2.
Media Forensics Using Machine Learning Approaches
David Güera (7534550). 30 October 2019.
Consumer-grade imaging sensors have become ubiquitous in the past decade. Images and videos collected from such sensors are used by many entities for public and private communications, including publicity, advocacy, disinformation, and deception.

In this thesis, we present tools to extract knowledge from this imagery and understand its provenance. Many images and videos are modified and/or manipulated prior to their public release. We also propose a set of forensic and counter-forensic techniques to determine the integrity of this multimedia content and to modify it in specific ways to deceive adversaries. The presented tools are evaluated using publicly available datasets and independently organized challenges.
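One classic idea behind sensor-level media forensics of this kind is that each camera leaves a faint, stable noise pattern in its images, which survives in the residual left after denoising. The sketch below is not code from the thesis, only a minimal numpy illustration of that residual-correlation principle; the box filter and the synthetic "sensor pattern" in the usage example are stand-ins for the real denoisers and fingerprints used in practice.

```python
import numpy as np

def noise_residual(img, k=3):
    """Estimate a sensor-noise residual by subtracting a k x k
    mean-filtered (denoised) version of the image from the image."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    # Box filter built from stacked shifts, to avoid external dependencies.
    denoised = np.mean(
        [padded[i:i + h, j:j + w] for i in range(k) for j in range(k)],
        axis=0,
    )
    return img - denoised

def residual_correlation(img_a, img_b):
    """Normalized correlation between two noise residuals. Images sharing
    the same unmodified sensor pattern tend to correlate more strongly."""
    ra = noise_residual(img_a).ravel()
    rb = noise_residual(img_b).ravel()
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb) + 1e-12))
```

Two different scenes captured with the same simulated sensor pattern then yield a noticeably higher residual correlation than images whose noise is independent, which is the cue such provenance checks exploit.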
3.
Anonymizing Faces without Destroying Information
Rosberg, Felix. January 2024.
Anonymization is a broad term, meaning that personal data, or rather data that identifies a person, is redacted or obscured. In the context of video and image data, the most palpable information is the face. Faces barely change compared to other aspects of a person, such as clothes, and we as people already have a strong ability to recognize faces. Computers are also adroit at recognizing faces, with facial recognition models being exceptionally powerful at identifying and comparing them. It is therefore generally considered important to obscure the faces in video and image data when aiming to keep it anonymized. Traditionally this is done simply through blurring or masking. But this destroys useful information such as eye gaze, pose, expression and the very fact that it is a face. This is a particular issue today, as our society is data-driven in many respects. One obvious such respect is autonomous driving and driver monitoring, where necessary algorithms such as object detectors rely on deep learning to function. Due to the data hunger of deep learning, in conjunction with society's call for privacy and integrity through regulations such as the General Data Protection Regulation (GDPR), anonymization that preserves useful information becomes important. This Thesis investigates the potential and possible limitations of anonymizing faces without destroying the aforementioned useful information. The base approach is face swapping and face manipulation, where the current research focuses on changing the face (or identity) while keeping the original attribute information, all while being incorporated and consistent in an image and/or video. Specifically, this Thesis demonstrates how target-oriented and subject-agnostic face swapping methodologies can be utilized for realistic anonymization that preserves attributes.
Through this, the Thesis presents several approaches that are: 1) controllable, meaning the proposed models do not naively change the identity; the kind and magnitude of the identity change is adjustable, and thus tunable to guarantee anonymization; 2) subject-agnostic, meaning the models can handle any identity; and 3) fast, meaning the models run efficiently and thus have the potential to run in real time. The end product is an anonymizer that achieved state-of-the-art performance on identity transfer, pose retention and expression retention while remaining realistic. Apart from identity manipulation, the Thesis demonstrates potential security issues, specifically reconstruction attacks, where a bad-actor model learns convolutional traces/patterns in the anonymized images in such a way that it is able to completely reconstruct the original identity. The bad-actor network is able to do this with simple black-box access to the anonymization model, by constructing a pair-wise dataset of unanonymized and anonymized faces. To alleviate this issue, different defense measures that disrupt the traces in the anonymized image were investigated. The main takeaway is that what qualitatively looks like a convincing hiding of an identity does not necessarily hide it at all, making robust quantitative evaluations important.
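The reconstruction-attack pipeline described above can be sketched in miniature. The following is a hypothetical numpy stand-in, not the thesis's implementation: the black-box anonymizer is reduced to an unknown linear distortion and the bad-actor "network" to a least-squares fit, but the steps are the same in principle — query the black box to build the pair-wise dataset, fit an inverse mapping on the pairs, then apply it to a victim's anonymized face.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 64  # flattened "face" dimensionality (stand-in for real images)

# Black-box anonymizer: the attacker can query it but not inspect it.
# Here it is a fixed random linear distortion standing in for a face swapper.
_secret = rng.normal(0, 1, (D, D)) / np.sqrt(D)

def anonymize(faces):
    return faces @ _secret

# Step 1: query the anonymizer on faces the attacker controls, building
# the pair-wise (anonymized, original) training set.
train_faces = rng.normal(0, 1, (500, D))
anon_faces = anonymize(train_faces)

# Step 2: fit a reconstruction model on the pairs (least squares here;
# the thesis describes convolutional networks, this is only the principle).
W, *_ = np.linalg.lstsq(anon_faces, train_faces, rcond=None)

# Step 3: invert a victim's anonymized face the attacker never saw.
victim = rng.normal(0, 1, (1, D))
recovered = anonymize(victim) @ W
```

Despite only black-box access, the fitted map recovers the victim's original vector almost exactly, which mirrors why a qualitatively convincing anonymization can still leak the identity through learnable traces.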