  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
711

Private Interests in the Public Domain: Privacy and Confidentiality in Observational Health Research

Emerson, Claudia I. A. 09 1900 (has links)
The expectation of privacy and confidentiality in health care presents a unique dilemma for public health interests. A great deal of observational health research, such as epidemiological studies, disease surveillance, and quality assurance, depends on access to and use of personal information in the absence of individual consent. Understandably, this raises concerns about personal privacy, since sensitive disclosures of information can result in harms such as stigma, discrimination, and loss of socio-economic goods. However, the issue has been largely framed and discussed as a dichotomy: the privacy interest of the individual versus the social interest in research. An individualist paradigm, informed by a traditional liberal conception of privacy that emphasizes autonomy, drives this dichotomy and inevitably leads to an intractable conflict. In this thesis, I attempt to reframe the issue by moving away from individualism and shifting the focus towards confidentiality, which is relational and founded on trust. I argue that confidentiality is broader than the concern for individual privacy and is thus capable of capturing other relevant interests, such as collective and social interests. I advance a broad conception of confidentiality grounded in a mixed deontic-consequentialist moral framework that can account for both respect for persons and social interests. / Thesis / Doctor of Philosophy (PhD)
712

Data Cleaning with Minimal Information Disclosure

Gairola, Dhruv 11 1900 (has links)
Businesses analyze large datasets in order to extract valuable insights from the data. Unfortunately, most real datasets contain errors that need to be corrected before any analysis. Businesses can utilize various data cleaning systems and algorithms to automate the correction of data errors. Many systems correct the data errors by using information present within the dirty dataset itself. Some also incorporate user feedback in order to validate the quality of the suggested data corrections. However, users are not always available for feedback. Hence, some systems rely on clean data sources to help with the data cleaning process. This involves comparing records between the dirty dataset and the clean dataset in order to detect high-quality fixes for the erroneous data. Every record in the dirty dataset is compared with every record in the clean dataset in order to find similar records. The values of the records in the clean dataset can be used to correct the values of the erroneous records in the dirty dataset. Realistically, comparing records across two datasets may not be possible due to privacy reasons. For example, there are laws that restrict the free movement of personal data. Additionally, different records within a dataset may have different privacy requirements. Existing data cleaning systems do not factor in these privacy requirements on the respective datasets. This motivates the need for privacy-aware data cleaning systems. In this thesis, we examine the role of privacy in the data cleaning process. We present a novel data cleaning framework that supports cooperation between the clean and the dirty datasets such that the clean dataset discloses a minimal amount of information and the dirty dataset uses this information to (maximally) clean its data. We investigate the tradeoff between information disclosure and data cleaning utility, modelling this tradeoff as a multi-objective optimization problem within our framework. 
We propose four optimization functions to solve our optimization problem. Finally, we perform extensive experiments on datasets containing up to 3 million records, varying parameters such as the error rate of the dataset, the size of the dataset, and the number of constraints on the dataset, and measure the impact of those parameters on accuracy and performance. Our results demonstrate that disclosing a larger amount of information within the clean dataset helps in cleaning the dirty dataset to a larger extent. We find that with 80% information disclosure (relative to the weighted optimization function), we are able to achieve a precision of 91% and a recall of 85%. We also compare our algorithms against each other to discover which ones produce better data repairs and which ones take longer to find repairs. We incorporate ideas from Barone et al. into our framework and show that our approach is 30% faster but 7% worse in precision. We conclude that our data cleaning framework can be applied to real-world scenarios where controlling the amount of information disclosed is important. / Thesis / Master of Computer Science (MCS)
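The abstract above frames disclosure versus cleaning utility as a multi-objective optimization solved with weighted objective functions. As a purely illustrative sketch (not the thesis's actual algorithm — all function names, weights, and operating points here are hypothetical), one of those weighted objectives might look like:

```python
# Hypothetical sketch: score (disclosure, utility) operating points with a
# single weighted objective, in the spirit of the tradeoff described above.

def weighted_objective(disclosure, utility, alpha=0.5):
    """Combine disclosure cost and cleaning utility into one score.

    disclosure: fraction of the clean dataset's values revealed (0..1)
    utility:    fraction of dirty records successfully repaired (0..1)
    alpha:      weight on utility; (1 - alpha) penalizes disclosure
    """
    return alpha * utility - (1 - alpha) * disclosure

def choose_disclosure_level(candidates, alpha=0.5):
    """Pick the (disclosure, utility) pair that maximizes the weighted score."""
    return max(candidates, key=lambda du: weighted_objective(*du, alpha=alpha))

# Hypothetical operating points: more disclosure enables more repairs,
# echoing the abstract's finding that larger disclosure aids cleaning.
points = [(0.2, 0.40), (0.5, 0.70), (0.8, 0.85)]
best = choose_disclosure_level(points, alpha=0.7)
```

With a utility-favoring weight (alpha = 0.7), the highest-disclosure point wins, mirroring the reported result that 80% disclosure yielded the strongest repairs; lowering alpha would shift the optimum toward less disclosure.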
713

ESSAYS ON ONLINE IDENTITY DISCLOSURE AND DISCOVERY

Kwon, Youngjin, 0000-0002-0795-9578 08 1900 (has links)
With many kinds of personal information that used to be private becoming available online over the past decades, this dissertation addresses the personal, managerial, and societal implications of this shift. Essay One (Chapter 2) investigates the role of social information (such as names and profile photos) in racial discrimination against Blacks, using a correspondence method on an online rental housing platform. It examines whether Blacks with non-Black-sounding names are discriminated against, compared to those with Black-sounding names or Whites, when race is signaled through profile photos. In addition, it studies whether building less complete profiles (e.g., using pseudonyms or not presenting profile photos) hurts Blacks and Whites equally. Essay Two (Chapter 3) compares involuntary discovery and voluntary disclosure of personal information (an invisible stigma) in a hiring context. It examines how the two modes of learning about job applicants’ social media differently influence hiring outcomes. Essay Three (Chapter 4) looks at party identity as an antecedent of online privacy decisions for public safety, such as sharing personal data for contact tracing and crime detection. Additionally, it investigates two interventions that promote online privacy decisions for public safety when party identity is salient: deemphasis of party identity and recategorization as national identity. Overall, this dissertation contributes to the literature on information systems, social psychology, and economics by highlighting the role of digital technology in enabling a greater depth of identity disclosure and discovery, and thus changing the landscape of perception and decision-making today. / Business Administration/Management Information Systems
714

Dialogue Systems Specialized in Social Influence: Systems, Methods, and Ethics

Shi, Weiyan January 2023 (has links)
This thesis concerns how to develop dialogue systems specialized in social influence and the problems around deploying such systems. Dialogue systems have become widely adopted in our daily life. Most dialogue systems are primarily focused on information-seeking tasks or social companionship. However, they cannot apply strategies in complex and critical social influence tasks, such as healthy habit promotion, emotional support, etc. In this work, we formally define social influence dialogue systems to be systems that influence users’ behaviors, feelings, thoughts, or opinions through natural conversations. We also present methods to make such systems intelligible, privacy-preserving, and thus deployable in real life. Finally, we acknowledge potential ethical issues around social influence systems and propose solutions to mitigate them in Chapter 6. Social influence dialogues span various domains, such as persuasion, negotiation, and recommendation. We first propose a donation persuasion task, PERSUASIONFORGOOD, and ground our study on this persuasion task for social good. We then build a persuasive dialogue system by refining the dialogue model for intelligibility and imitating human experts for persuasiveness, and a negotiation agent that can play the game of Diplomacy by decoupling the planning engine and the dialogue generation module to improve the controllability of social influence systems. To deploy such a system in the wild, our work examines how humans perceive the AI agent’s identity and how their perceptions impact the social influence outcome. Moreover, dialogue models are trained on conversations in which people may share personal information. This creates privacy concerns for deployment, as the models may memorize private information. To protect user privacy in the training data, our work develops privacy-preserving learning algorithms to ensure deployed models are safe under privacy attacks. 
Finally, deployed dialogue agents have the potential to integrate human feedback to continuously improve themselves. We therefore propose JUICER, a framework that makes use of both binary and free-form textual human feedback to augment the training data and keep improving dialogue model performance after deployment. Building social influence dialogue systems enables us to research future expert-level AI systems that are accessible via natural language, accountable with domain knowledge, and privacy-preserving with privacy guarantees.
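The abstract above mentions privacy-preserving learning algorithms that keep deployed models safe under privacy attacks. A minimal sketch of one standard technique in this family — DP-SGD-style per-example gradient clipping plus noise addition — is shown below; the thesis's actual algorithms are not specified here, and all parameter names and values are illustrative assumptions:

```python
# Illustrative DP-SGD-style update (not the thesis's specific method):
# clip each example's gradient, average, add Gaussian noise, then step.
import random

def clip(gradient, max_norm):
    """Scale a per-example gradient so its L2 norm is at most max_norm."""
    norm = sum(g * g for g in gradient) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in gradient]

def private_step(params, per_example_grads, max_norm=1.0, noise_std=0.1, lr=0.05):
    """One noisy SGD step over a batch of per-example gradients."""
    clipped = [clip(g, max_norm) for g in per_example_grads]
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]          # average clipped grads
    noisy = [a + random.gauss(0.0, noise_std * max_norm / n) for a in avg]
    return [p - lr * g for p, g in zip(params, noisy)]
```

Clipping bounds any single training example's influence on the update, and the added noise masks what remains, which is the intuition behind defenses against the memorization attacks the abstract alludes to.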
715

The Impact of Artificial Intelligence on Data Protection: A Legal Analysis

Dos Santos, Ana Paula 01 April 2020 (has links) (PDF)
This study explores the implications of artificial intelligence innovation for privacy, data protection regulations, and other related laws. With the spread of data endangering privacy, it is a difficult task to protect the “right to be let alone,” considered an individual liberty and a fundamental right. This research shows that the use of personal information by artificial intelligence can impact an individual’s privacy, even as artificial intelligence brings remarkable and useful innovation that benefits humans. The analysis of the enacted data protection laws in the European Union, China, and the United States demonstrates that these laws are not sufficient to prevent the challenges raised by artificial intelligence. This thesis discusses the great importance of the subject matter to society, the several impacts it can foment, and the lack of regulations to avoid these outcomes.
716

Privacy Issues in Young Onset Colorectal Cancer Patients and Survivors

Hecklinski, Tiffany Marie 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The occurrence of colorectal cancer among those over the age of 50 is decreasing; conversely, the rate of diagnosis for those under 50 is increasing. While medical researchers scramble to identify the cause of this increase, young onset colorectal cancer (YOCC) patients and survivors are left to navigate a new normal. This new normal often includes awkward and troublesome concerns such as scarring, colostomy bags, and bowel problems. Unlike those diagnosed with colorectal cancer later in life, those who are diagnosed at a younger age are forced to deal with these issues for many years. The purpose of this exploratory study was to identify privacy issues surrounding YOCC. Because of the significant increase in diagnoses, YOCC is now being researched independently from colorectal cancer in general. The topic of privacy has been researched across academic disciplines, including medicine, and privacy issues surrounding cancer have been researched as well. Yet the privacy concerns facing YOCC patients/survivors have been overlooked. It is important to identify privacy concerns specific to YOCC patients/survivors, as the information could help health care providers, communication scholars, and caregivers. Patient narratives were analyzed using thematic analysis to identify privacy concerns of YOCC patients/survivors through the lens of Communication Privacy Management (CPM) theory. Results indicated that participants discussed disclosure of their YOCC journey as a process. Within this disclosure process, YOCC patients/survivors identified specific privacy issues that influenced the way they disclosed or concealed information specific to their illness. There is a growing need for more research into the YOCC community due to the increase in diagnosis rates and their unique privacy concerns. 
Potential topics for future research include the impact of COVID-19, patient desire to help others, social media influence on disclosure, how patient disclosure could impact provider training, dating with YOCC, and specific demographic research.
717

Evaluating the Privacy Risks of Inferring Significant Life Events of People from Their Posts on Social Networks

Long, Ruyun 26 January 2021 (has links)
No description available.
718

Disengagement Behavior on Online Social Networks: The Impact of Fear of Missing Out and Addiction

Sharma, Shwadhin 14 August 2015 (has links)
Most previous research on online social networks (OSNs) has focused on the adoption and continuation of OSN use, as OSNs are a newer form of social media whose usage has increased over time. However, very little research has explored users’ discontinuation of OSN usage. Using disengagement theory, this study examines the roles of fear of missing out and addiction, along with other factors such as victimization, well-being, privacy concerns, alternative attractiveness, and social influence, in the process of disengagement from OSN usage. The proposed conceptual model is evaluated using a survey design. A preliminary investigation consisting of an expert panel review, a pretest, and a pilot test is conducted to ensure measurement validity. A primary investigation consisting of reliability and validity testing, a model fit test (i.e., goodness of fit), a common method bias test, and a t-test is conducted to ensure the validity of the structural model. The data are analyzed to derive the findings. The study found that intention to disengage from an OSN leads to actual disengagement, thus bridging the gap between intention and actual behavior. Attractive alternatives to the existing OSN, privacy concerns, and negative psychosocial well-being were found to positively influence intention to disengage from a specific OSN. Perceived enjoyment and social influence were found to negatively affect intention to disengage from an OSN. The findings also indicated that the influence of alternative attractiveness on intention to disengage from an OSN is moderated by the fear of missing out, such that the influence becomes weaker. Similarly, the influence of negative psychosocial well-being on intention to disengage from an OSN is moderated by the fear of missing out, such that the influence becomes weaker. 
These findings contribute to the information systems and OSN research literature by introducing several theories to expand the concepts of fear of missing out and addiction in studying the disengagement process from OSN usage. This research also has several practical implications, such as understanding the impact of the dark sides of OSNs in a user’s disengagement from OSN usage.
719

Privacy, Control, and the Use of Information Technology: The Development, Validation, and Testing of the Privacy-Invasive Perceptions Scale

Bakke, Sharen A. 19 April 2006 (has links)
No description available.
720

Personalized Credential Negotiation Based on Policy Individualization in Federation

Bobade, Kailas B. 02 December 2009 (has links)
No description available.
