The ability to correctly judge moral character—an individual’s disposition to think, feel, and behave ethically—is critical given the negative consequences of misjudgment (e.g., being betrayed or swindled). However, it is currently unknown whether people can reliably detect strangers’ moral character, nor is it known how best to elicit relevant information from strangers in order to judge it. This research is designed to remedy this gap in our understanding of moral character judgments, particularly in settings where we must make prompt evaluations of strangers based on the limited information we obtain from them. The biggest challenge in assessing another person’s moral character is that it is a highly socially desirable trait, and assessments of it are therefore susceptible to distorted self-perceptions and impression management. To address this problem, I propose and test a new person-perception theory: the hidden information distribution and evaluation (HIDE) model.

In chapter 1, I develop the HIDE model, which posits that there are aspects of information that individuals do not correctly know about themselves (which I call the hidden-self), as well as aspects of information individuals misrepresent to others (which I call the hiding-self). This model articulates when and why judges (i.e., evaluators) not personally acquainted with targets of evaluation (e.g., job applicants) can reliably detect these targets’ moral character and predict their future unethical behavior. In particular, I propose that the impromptu thinking and language use that arise when a person answers specially designed interview questions reveal information about his/her hidden-self and hiding-self, enabling a group of judges to make valid judgments about his/her moral character. Additionally, the HIDE model predicts that judges’ evaluations using this written interview method will be more valid than evaluations provided by targets’ acquaintances. This is because social relationships can lead people to form biased impressions of targets they are acquainted with, so that they are unable to see the targets’ hidden selves as clearly as judges who do not know the targets.

In chapter 2, I test the HIDE model’s prediction that groups of judges can reliably predict targets’ unethical behavior by evaluating their moral character using the written interview method. In studies 1 and 2, large groups of judges were crowd-sourced online. I show that their average moral character evaluations successfully predict targets’ frequency of unethical behaviors in the laboratory (study 1) and the workplace (study 2). Study 3 extends these findings by determining the minimum number of judges (six) required to make moral character evaluations that predict unethical behavior.

In chapter 3, I test the HIDE model’s prediction that judges’ evaluations based on the written interview method can capture unique information about targets’ hidden-self. Three empirical studies (studies 4, 5, and 6) show that these evaluations indeed capture unique variance in targets’ moral character that is missed by both self-reports and ratings provided by targets’ acquaintances. Consequently, these evaluations are more predictive of targets’ unethical behavior than the ratings provided by either the targets themselves or their acquaintances.

In chapter 4, I investigate the HIDE model’s prediction that judges’ evaluations using the written interview method can capture unique information about targets’ hiding-self. This occurs because responses to the interview questions reveal implicit aspects of moral character that targets cannot control or fake, even when they want to. In study 7, I manipulated whether targets had an incentive to answer the interview questions in a positively biased manner. I show that judges’ evaluations of targets (based on the interview questions) are actually more predictive of their unethical behavior when targets were motivated to respond in a positively biased manner.

Finally, in chapter 5, I carry out text analyses to explore how human judges utilize linguistic cues in written responses to form impressions of moral character, and how these cues predict targets’ unethical behavior. The goal of this chapter is to identify linguistic cues that human judges fail to correctly detect or utilize, and thus to identify shared biases in human perceptions of ethicality. Building on these exploratory text analyses, I discuss the future directions of this research program, especially the potential value of combining human judgments and machine algorithms to boost the accuracy of unethical behavior forecasts.
Identifier | oai:union.ndltd.org:cmu.edu/oai:repository.cmu.edu:dissertations-2237 |
Date | 01 April 2018 |
Creators | Kim, Yeonjeong |
Publisher | Research Showcase @ CMU |
Source Sets | Carnegie Mellon University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Dissertations |