31. Beyond Privacy Concerns: Examining Individual Interest in Privacy in the Machine Learning Era. Brown, Nicholas James (12 June 2023)
The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We use the enhanced APCO model as the theoretical lens to investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making. In a scenario-based experiment with 499 participants, we present various company privacy policies to participants to examine their trust and privacy considerations, then ask them to share reasons why they would or would not opt in to share their voice data to train a company's voice recognition software. We find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns, which thereby mediate their decisions to share their voice data. Furthermore, we manipulate four factors of a privacy policy to operationalize various cognitive biases actively present in the minds of consumers and find that default trust and salience biases significantly affect participants' privacy decision making. Our results provide a deeper contextualized understanding of privacy-related concerns that may arise in human-augmented ML system configurations and highlight the managerial importance of considering the role of human involvement in supervised machine learning settings. Importantly, we introduce perceived human involvement as a new construct to the information privacy discourse.
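The mediation logic reported here (disclosure of human involvement raises privacy concerns, which in turn shape willingness to share voice data) can be illustrated with a standard bootstrapped indirect-effect test. The Python sketch below uses simulated data and hypothetical variable names (human_involvement, privacy_concern, share_intention); it mirrors the shape of such an analysis, not the study's actual data, measures, or effect sizes.

# Illustrative bootstrap mediation test (indirect effect a*b).
# All variables, effect sizes, and data below are simulated assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 499  # matches the reported sample size; the data are still simulated

human_involvement = rng.integers(0, 2, n)  # 0 = not disclosed, 1 = disclosed
privacy_concern = 3.0 + 0.8 * human_involvement + rng.normal(0, 1, n)
share_intention = 4.0 - 0.6 * privacy_concern + rng.normal(0, 1, n)

def indirect_effect(x, m, y):
    """a*b: effect of x on m, times effect of m on y controlling for x."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap for the indirect effect.
boots = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boots.append(indirect_effect(human_involvement[s], privacy_concern[s],
                                 share_intention[s]))
lo, hi = np.percentile(boots, [2.5, 97.5])
point = indirect_effect(human_involvement, privacy_concern, share_intention)
print(f"indirect effect (a*b): {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero is consistent with concerns mediating the effect.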
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. Researchers refer to this as the privacy paradox, and decades of information privacy research have identified a myriad of explanations for why this paradox occurs. Yet the underlying crux of these explanations presumes privacy concern to be the appropriate proxy with which to measure privacy attitude and compare against actual privacy behavior. Often, privacy concerns are situational and can be elicited through the setup of boundary conditions and the framing of different privacy scenarios. Drawing on the cognitive model of empowerment and interest, we propose a multidimensional privacy interest construct that captures consumers' situational and dispositional attitudes toward privacy and that can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward engaging in particular behaviors that increase their information privacy. This construct comprises four dimensions (impact, awareness, meaningfulness, and competence) and is conceptualized as a consumer's assessment of the contextual factors affecting their privacy perceptions and their global predisposition to respond to those factors. Importantly, interest was originally included in the privacy calculus but is largely absent from privacy studies and theoretical conceptualizations. Following MacKenzie et al. (2011), we developed and empirically validated a privacy interest scale. This study contributes to privacy research and practice by reconceptualizing a construct in the original privacy calculus theory and offering a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors. / Doctor of Philosophy / The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making and find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns, which in turn shape their decisions to share their voice data. Our results highlight the importance of understanding consumers' willingness to contribute their data to generate complete and diverse data sets that help companies reduce algorithmic biases and systematic unfairness in the decisions and outputs rendered by ML systems.
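As a toy illustration of one step in validating the four-dimensional privacy interest scale described above (impact, awareness, meaningfulness, competence), the sketch below computes Cronbach's alpha per dimension. The item counts, 7-point response format, and data are all simulated assumptions; the full MacKenzie et al. (2011) procedure involves far more (content validity, EFA/CFA, nomological testing) than this single reliability check.

# Illustrative internal-consistency check for a multidimensional scale.
# Dimension names follow the abstract; items and responses are simulated.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
n = 300
dimensions = ["impact", "awareness", "meaningfulness", "competence"]

# Simulate three correlated 7-point Likert items per dimension by adding
# item-level noise around a shared latent score.
data = {}
for dim in dimensions:
    latent = rng.normal(4, 1, n)
    for i in range(1, 4):
        data[f"{dim}_{i}"] = np.clip(np.round(latent + rng.normal(0, 0.7, n)), 1, 7)
df = pd.DataFrame(data)

for dim in dimensions:
    print(f"{dim:15s} alpha = {cronbach_alpha(df.filter(like=dim)):.2f}")
# Values of roughly .70 or higher are conventionally read as acceptable reliability.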
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. This is referred to as the privacy paradox, and decades of information privacy research have identified a myriad of explanations for why this paradox occurs. Yet the underlying crux of these explanations presumes privacy concern to be the appropriate proxy with which to measure privacy attitude and compare against actual privacy behavior. We propose privacy interest as an alternative to privacy concern and assert that it can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward engaging in particular behaviors that increase their information privacy. We found that privacy interest was more effective than privacy concern in predicting consumers' mobilization behaviors, such as publicly complaining about privacy issues to companies and third-party organizations, requesting that their information be removed from company databases, and reducing their self-disclosure. By contrast, privacy concern was more effective than privacy interest in predicting whether consumers misrepresent their identity. By developing and empirically validating the privacy interest scale, we offer interest in privacy as a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors.
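The claim that privacy interest out-predicts privacy concern for mobilization behaviors amounts to comparing the fit of competing models. Below is a minimal sketch of such a comparison using logistic regression on simulated data; the variables, the single binary outcome, and the effect directions baked into the simulation merely echo the abstract's claim and are not the study's estimates.

# Illustrative model comparison: which attitude measure better predicts a
# binary mobilization behavior (e.g., filing a privacy complaint)?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
privacy_interest = rng.normal(0, 1, n)
privacy_concern = 0.4 * privacy_interest + rng.normal(0, 1, n)  # correlated attitudes

# Simulate a mobilization behavior driven more strongly by interest than
# concern, purely for illustration.
logits = -0.5 + 1.2 * privacy_interest + 0.3 * privacy_concern
mobilized = rng.binomial(1, 1 / (1 + np.exp(-logits)))

def fit(predictor):
    return sm.Logit(mobilized, sm.add_constant(predictor)).fit(disp=0)

m_int, m_con = fit(privacy_interest), fit(privacy_concern)
print(f"interest-only: AIC={m_int.aic:.1f}, pseudo-R2={m_int.prsquared:.3f}")
print(f"concern-only:  AIC={m_con.aic:.1f}, pseudo-R2={m_con.prsquared:.3f}")
# Lower AIC / higher pseudo-R2 indicates the stronger single predictor here.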
32. Hälsodata & smartklockor: En användarundersökning om medvetenhet och attityd [Health data & smartwatches: A user study on awareness and attitude]. Apelthun, Henrietta; Töyrä, Anni (January 2020)
Digital technology and the ever-advancing digitalization of the world are changing the structures of our societies and our way of living, as large amounts of data are collected. Sweden is among the leading countries in the world when it comes to the use of new technology, and every second Swede has at least one connected device at home. Smartwatches, which have risen in popularity in recent years, provide many opportunities for collecting health data, which can lead to privacy problems for users. The purpose of this thesis is to examine the attitudes of students (20-30 years old) who use smartwatches, taking the theory of the privacy paradox as its starting point. The privacy paradox holds that people's stated intentions are grounded in an assessment of risk, while their actual behavior is instead grounded in trust. To this end, five qualitative interviews were conducted, and the authors analyzed how the students relate to risk, trust, and privacy, and how aware they are of their health data being shared. The students turned out to be positive toward data sharing, but their views on privacy differed. Risk was something the students claimed to take into account, but when they actually made decisions, such as downloading an application, the outcome was based on trust.
33. Integritet online ur olika generationers perspektiv: En studie om hur generation digital natives & pre-internet värdesätter sin integritet online [Online privacy from the perspective of different generations: A study of how the digital natives and pre-internet generations value their online privacy]. König, Lovisa; Romney, Ellen (January 2020)
Digital development means that today we move through digital environments to work, communicate, search for information, shop, and be entertained. Companies can now store, map, measure, and analyze private individuals' online activities, which places greater demands on businesses to handle this personal information correctly. One example is the GDPR, introduced in 2018. The law has raised the question of online privacy and made the general public more aware of the data collection going on around us, since companies are obliged to disclose it. The purpose of this study is to examine how two groups, the generations that grew up with the internet from childhood and the generations that adopted the internet in adulthood, reason about online privacy. We want to see what differences and similarities exist between the groups, whether factors beyond age are decisive, and whether there is a paradox between stated opinions and actions in practice. Ultimately, we aim to offer practical advice on how companies can handle consumers' online privacy. To investigate this, we reviewed previous research in the field and conducted five in-depth interviews within each group. The theoretical frame of reference covers the privacy paradox, awareness of data collection, Communication Privacy Management, and Customer Relationship Management; the GDPR and targeted marketing are also treated, and all of this is related to the material collected from the respondents. On this basis we answer our research question: "How does the digital natives generation, compared with the pre-internet generation, reason about companies' handling of their online privacy?" The results show that the most important factor for both respondent groups was a relevant and clear purpose for disclosing their information, along with receiving some form of compensation. The biggest difference appeared in their disclosure norms: the younger respondent group felt greater pressure to share on social media and thereby, indirectly, with companies. The most prominent difference at the individual level concerned awareness of data collection, which varied from respondent to respondent. Both groups, however, feel uneasy about the future, as they often feel surveilled online. This unease stems from a sense of powerlessness and a lack of knowledge about how to protect their data. In the wake of this powerlessness, many have constructed a fabricated sense of security: lacking knowledge, they instead hope to be protected by laws, regulations, and by being "one in the crowd." Based on our conclusions, we recommend that companies inform consumers about when and why data collection occurs, offer compensation in some form, and protect the data they hold.