31

Attityder och handlande kring olika nivåer av AI-baserad marknadsföring : En kvantitativ studie / Attitudes and behaviour regarding different levels of AI-based marketing : A quantitative study

Stamming, Simon January 2022 (has links)
Purpose: The study aims to explain how users perceive the three levels of AI-generated content in marketing within business administration. It also aims to examine whether users' actions are in line with their attitudes or whether there is room for a paradox.
Theoretical background: The background and theory sections describe concepts relevant to the study, such as the different levels of AI, the components of attitudes, and several models of behavior. The privacy calculus and the privacy paradox are also addressed, along with the influencing factors of perceived benefits, perceived trust, and perceived risk.
Method: A post-positivist, deductive approach was used in the form of a quantitative survey. Data were collected through an online questionnaire distributed via social media, which generated 113 responses. The results were analyzed using univariate tables and a regression analysis. Research quality was ensured through questionnaire items grounded in theory, Cronbach's Alpha, and crosstabs.
Results & Conclusions: The results show that the three factors of benefits, trust, and risk affect users' attitudes toward all levels of AI-based marketing. Users held the most negative attitudes toward feeling AI and the most positive toward mechanical AI. Attitudes were found to influence behavior, which does not confirm the theory of a paradox. The results are not generalizable to the population as a whole, so further studies are needed before far-reaching recommendations can be given to managers.
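The reliability check named in this abstract, Cronbach's Alpha, is simple enough to sketch. The snippet below is a minimal illustration in Python, assuming survey responses sit in a pandas DataFrame of Likert-scale items; the column names and sample data are hypothetical and are not taken from the thesis.

```python
# Minimal sketch: Cronbach's alpha for a set of Likert-scale survey items.
# Column names and the DataFrame `responses` are illustrative, not from the thesis.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Estimate the internal consistency of a scale from its item responses."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: three hypothetical items measuring "perceived risk"
responses = pd.DataFrame({
    "risk_1": [4, 5, 3, 4, 2],
    "risk_2": [5, 5, 2, 4, 3],
    "risk_3": [4, 4, 3, 5, 2],
})
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

A value around 0.7 or higher is the conventional threshold for treating the items as one reliable scale before entering them into a regression.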
32

The Privacy Club : An exploratory study of the privacy paradox in digital loyalty programs

Johansson, Lilly, Rystadius, Gustaf January 2022 (has links)
Background: Digital loyalty programs collect extensive personal data, but the literature has so far neglected privacy concerns within these programs. The privacy paradox denotes the contradiction between consumers' stated privacy risk beliefs and their actual behavior. Existing literature calls for a dual perspective on the privacy paradox and digital loyalty programs to uncover the underlying reasons for this contradictory behavior.
Purpose: The purpose of this study was to explore (1) if and when privacy concerns exist in digital loyalty programs and (2) why consumers overrule their privacy concerns in digital loyalty programs.
Method: A qualitative method with 18 semi-structured interviews was used, based on non-probability purposive sampling of consumers enrolled in digital loyalty programs. The findings were analyzed through a thematic analysis and synthesized into a model addressing the research purpose.
Conclusion: The findings suggest that consumers experience privacy concerns in digital loyalty programs following external exposure to privacy breaches and when they feel their mental construct of the terms and conditions has been violated. Four themes were found to influence why consumers overrule their privacy concerns and share personal data with digital loyalty programs, relating to cognitive biases, the value of rewards received, and digital trust in the program provider. The findings were synthesized into a model illustrating consumers' assessment of personal data sharing in digital loyalty programs and the interconnections between these influences.
33

Beyond Privacy Concerns: Examining Individual Interest in Privacy in the Machine Learning Era

Brown, Nicholas James 12 June 2023 (has links)
The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We use the enhanced APCO model as the theoretical lens to investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making. In a scenario-based experiment with 499 participants, we present various company privacy policies to participants to examine their trust and privacy considerations, then ask them to explain why they would or would not opt in to share their voice data to train a company's voice recognition software. We find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns, which thereby mediate their decisions to share their voice data. Furthermore, we manipulate four factors of a privacy policy to operationalize various cognitive biases actively present in the minds of consumers and find that default trust and salience biases significantly affect participants' privacy decision making. Our results provide a deeper contextualized understanding of privacy-related concerns that may arise in human-augmented ML system configurations and highlight the managerial importance of considering the role of human involvement in supervised machine learning settings. Importantly, we introduce perceived human involvement as a new construct to the information privacy discourse.
Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. Researchers refer to this as the privacy paradox, and decades of information privacy research have identified a myriad of explanations for why this paradox occurs. Yet the underlying crux of these explanations presumes privacy concern to be the appropriate proxy for measuring privacy attitude and comparing it with actual privacy behavior. Often, privacy concerns are situational and can be elicited through the setup of boundary conditions and the framing of different privacy scenarios. Drawing on the cognitive model of empowerment and interest, we propose a multidimensional privacy interest construct that captures consumers' situational and dispositional attitudes toward privacy, which can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. This construct comprises four dimensions (impact, awareness, meaningfulness, and competence) and is conceptualized as a consumer's assessment of contextual factors affecting their privacy perceptions and their global predisposition to respond to those factors. Importantly, interest was originally included in the privacy calculus but is largely absent in privacy studies and theoretical conceptualizations. Following MacKenzie et al. (2011), we developed and empirically validated a privacy interest scale.
This study contributes to privacy research and practice by reconceptualizing a construct in the original privacy calculus theory and offering a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors. / Doctor of Philosophy / The deployment of human-augmented machine learning (ML) systems has become a recommended organizational best practice. ML systems use algorithms that rely on training data labeled by human annotators. However, human involvement in reviewing and labeling consumers' voice data to train speech recognition systems for Amazon Alexa, Microsoft Cortana, and the like has raised privacy concerns among consumers and privacy advocates. We investigate how the disclosure of human involvement during the supervised machine learning process affects consumers' privacy decision making and find that the perception of human involvement in the ML training process significantly influences participants' privacy-related concerns. This thereby influences their decisions to share their voice data. Our results highlight the importance of understanding consumers' willingness to contribute their data to generate complete and diverse data sets to help companies reduce algorithmic biases and systematic unfairness in the decisions and outputs rendered by ML systems. Although ubiquitous data collection and increased privacy breaches have elevated the reported concerns of consumers, consumers' behaviors do not always match their stated privacy concerns. This is referred to as the privacy paradox, and decades of information privacy research have identified a myriad of explanations why this paradox occurs. Yet the underlying crux of the explanations presumes privacy concern to be the appropriate proxy to measure privacy attitude and compare with actual privacy behavior. We propose privacy interest as an alternative to privacy concern and assert that it can serve as a more robust measure in conditions leading to the privacy paradox. We define privacy interest as a consumer's general feeling toward reengaging particular behaviors that increase their information privacy. We found that privacy interest was more effective than privacy concern in predicting consumers' mobilization behaviors, such as publicly complaining about privacy issues to companies and third-party organizations, requesting to remove their information from company databases, and reducing their self-disclosure behaviors. By contrast, privacy concern was more effective than privacy interest in predicting consumers' behaviors to misrepresent their identity. By developing and empirically validating the privacy interest scale, we offer interest in privacy as a renewed theoretical lens through which to view consumers' privacy attitudes and behaviors.
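The mediation finding reported in this abstract (perceived human involvement influencing privacy concerns, which in turn mediate the decision to share voice data) can be illustrated with a simple regression-based check. The sketch below is a generic Baron-Kenny-style illustration in Python using statsmodels, not the dissertation's actual analysis; the variable names and data are hypothetical.

```python
# Illustrative sketch (not the authors' analysis): a Baron-Kenny style mediation
# check of  human_involvement -> privacy_concern -> share_intention.
# Variable names and the DataFrame `df` are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant with the treatment, mediator, and outcome measures.
df = pd.DataFrame({
    "human_involvement": [0, 1, 0, 1, 1, 0, 1, 0],            # disclosed in the policy (1) or not (0)
    "privacy_concern":   [2.1, 4.3, 2.5, 4.0, 3.8, 2.0, 4.5, 2.7],
    "share_intention":   [4.5, 2.2, 4.1, 2.6, 2.9, 4.7, 2.0, 3.9],
})

total  = smf.ols("share_intention ~ human_involvement", data=df).fit()                    # path c
a_path = smf.ols("privacy_concern ~ human_involvement", data=df).fit()                    # path a
direct = smf.ols("share_intention ~ human_involvement + privacy_concern", data=df).fit()  # paths b and c'

# Mediation is suggested when paths a and b are significant and the direct
# effect c' shrinks relative to the total effect c.
print("total effect c :", total.params["human_involvement"])
print("direct effect c':", direct.params["human_involvement"])
```

In practice a study like this would use a bootstrapped indirect-effect test rather than the stepwise check shown here; the sketch only conveys the logic of the mediation claim.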
34

Hälsodata & smartklockor : En användarundersökning om medvetenhet och attityd / Health data & smartwatches : A user study of awareness and attitude

Apelthun, Henrietta, Töyrä, Anni January 2020 (has links)
Digital technology and the ongoing digitalization of the world are changing the structures of our societies and our way of living, as large amounts of data are being collected. Sweden is among the leading countries in the world when it comes to the use of new technology, and every other Swede has at least one connected device at home. Smartwatches, which have risen in popularity in recent years, provide many opportunities for collecting health data, which may lead to privacy problems for users. The purpose of this thesis is to examine the attitudes of students (20-30 years old) who use smartwatches, based on the theory of the privacy paradox. The privacy paradox holds that people's stated intentions are grounded in risk assessment, while their actual behavior is instead grounded in trust. To fulfill this purpose, five qualitative interviews were conducted, and the authors analyzed how the students relate to risk, trust, and privacy, and how aware they are of health data sharing. It turned out that the students were positive toward data sharing, but their views on privacy differed. Risk was something the students said they took into account, but when actual decisions were made, such as downloading an application, the outcome was based on trust.
35

Integritet online ur olika generationers perspektiv : En studie om hur generation digital natives & pre-internet värdesätter sin integritet online / Online privacy from the perspective of different generations : A study of how the digital natives and pre-internet generations value their online privacy

König, Lovisa, Romney, Ellen January 2020 (has links)
Digital development means that we now move through digital environments to work, communicate, search for information, shop, and be entertained. Companies can today store, map, measure, and analyze private individuals' online activities, which has placed greater demands on businesses to handle this personal information correctly. One example of this is the GDPR, introduced in 2018. The regulation has raised the issue of online privacy and made the general public more aware of the data collection going on around us, since companies are obliged to disclose it. The purpose of this study is to examine how two groups reason about online privacy: the generations that grew up with the internet from childhood versus the generations that adopted the internet as adults. We want to see what differences and similarities exist between the groups, whether there are decisive factors beyond age, and whether there is a paradox between opinions and actions in practice. Ultimately, we aim to give practical advice on how companies can manage consumers' online privacy. To investigate this, we reviewed previous research in the field and conducted five in-depth interviews within each group. The theoretical frame of reference includes theories on the privacy paradox, awareness of data collection, Communication Privacy Management, and Customer Relationship Management. GDPR and targeted marketing are also covered, and all of this is related to the material collected from the respondents. We then answered our research question: "How does the digital natives generation, compared with the pre-internet generation, reason about companies' handling of their online privacy?" The results of the study show that the most important factor for both respondent groups was that there is a relevant and clear purpose for sharing their information, along with whether they receive some form of compensation. The biggest difference was seen in their sharing norms, where the younger respondent group felt greater pressure to share on social media and thereby, indirectly, with companies. The most prominent difference at the individual level concerned awareness of data collection, which varied from respondent to respondent. Both groups nevertheless feel anxious about the future, as they often feel monitored online. This anxiety stems from a sense of powerlessness and a lack of knowledge about how to protect their data. In the wake of this powerlessness, many have constructed a fabricated sense of security in which, lacking knowledge, they instead hope to be protected by laws, regulations, and by being "one in the crowd". Based on our conclusions, we recommend that companies inform consumers when and why data collection takes place, provide compensation in some form, and protect the data they hold.
