About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Automated Analysis Techniques for Online Conversations with Application in Deception Detection

Twitchell, Douglas P. January 2005
Email, chat, instant messaging, blogs, and newsgroups are now common ways for people to interact. Along with these new ways for sending, receiving, and storing messages comes the challenge of organizing, filtering, and understanding them, for which text mining has been shown to be useful using both content-dependent and content-independent methods. Unfortunately, computer-mediated communication (CMC) has also provided criminals, terrorists, spies, and other threats to security with a means of efficient communication. However, the often textual encoding of these communications may also make it possible to detect and track those who are deceptive. Two methods for organizing, filtering, understanding, and detecting deception in text-based computer-mediated communication are presented.

First, message feature mining uses message features or cues in CMC messages, combined with machine learning techniques, to classify messages according to the sender's intent. The method couples common classification methods with linguistic analysis of messages to extract a number of content-independent input features. A study using message feature mining to classify deceptive and non-deceptive email messages attained classification accuracy between 60% and 80%.

Second, speech act profiling is a method for evaluating and visualizing synchronous CMC by creating profiles of conversations and their participants using speech act theory and probabilistic classification methods. Transcripts from a large corpus of speech-act-annotated conversations are used to train language models and a modified hidden Markov model (HMM) to obtain probable speech acts for sentences, which are aggregated for each conversation participant to create a set of speech act profiles. Three studies validating the profiles are detailed, as well as two studies showing speech act profiling's ability to uncover uncertainty related to deception.

The methods introduced here are two content-independent methods that represent a possible new direction in text analysis. Both have possible applications outside the context of deception. In addition to aiding deception detection, these methods may also be applicable in information retrieval, technical support training, GSS facilitation support, transportation security, and information assurance.
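The message-feature-mining idea in the abstract above can be sketched in a few lines. This is a minimal illustration only, assuming a toy cue lexicon and a nearest-centroid classifier; the actual study used much larger, validated content-independent cue sets and standard machine learning classifiers, and would normalize features before computing distances.

```python
import re
from collections import Counter

# Hypothetical cue lexicons for illustration; real message feature
# mining uses far larger, theoretically grounded cue sets.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our"}
MODIFIERS = {"very", "really", "quite", "extremely", "certainly"}

def features(text):
    """Content-independent features: word count, first-person pronoun
    rate, and modifier rate (rates are per word)."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    counts = Counter(words)
    return (
        len(words),
        sum(counts[w] for w in FIRST_PERSON) / n,
        sum(counts[w] for w in MODIFIERS) / n,
    )

def nearest_centroid(train, labels, x):
    """Classify x by squared Euclidean distance to the per-class
    centroid of the training feature vectors."""
    cents = {}
    for lbl in set(labels):
        rows = [f for f, l in zip(train, labels) if l == lbl]
        cents[lbl] = [sum(col) / len(rows) for col in zip(*rows)]
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(cents, key=lambda lbl: dist(x, cents[lbl]))
```

In use, each message would be mapped through `features` and classified against centroids learned from labeled deceptive/truthful training messages.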
12

The Dangers of Speaking a Second Language: An Investigation of Lie Bias and Cognitive Load

Dippenaar, Andre 21 January 2021
Today's world is an interconnected global village. Communication and business transactions are increasingly conducted in non-native languages. The literature suggests that biases are present when communicating in non-native languages: a truth bias in first-language communication and a lie bias in second-language communication. Less than 10% of South Africa's population identifies with English, the lingua franca of the country, as a first language, yet little research on the presence of bias in second-language communication has been published in the South African multilingual context. This study evaluated the presence of bias within deception frameworks such as the Truth Default Theory and the veracity effect, and investigated whether deception detection can be improved by modifying the conditions under which statements are given, placing statement providers under cognitive load. The accuracy of the language profiling software LIWC2015, applied with published deception language profiles, was compared against the results of the participating veracity judges. Results of the study were mixed: consistent with extant literature in showing a truth bias overall, but mixed in terms of a lie bias. The results supported the Truth Default Theory and veracity effect frameworks. LIWC2015 performed marginally better than human judges in evaluating veracity.
13

Linguistic Cues to Deception

Connell, Caroline 05 June 2012
This study replicated a common experiment, the Desert Survival Problem, and attempted to add to the body of knowledge on deception cues. Participants wrote truthful and deceptive essays arguing why items salvaged from the wreckage were useful for survival. The cues to deception considered here fit into four categories: those caused by a deceiver's negative emotions, verbal immediacy, those linked to a deceiver's attempt to appear truthful, and those resulting from a deceiver's high cognitive load. Cues caused by negative emotions were mostly absent in the results, although deceivers did use fewer first-person pronouns than truth tellers, indicating that deceivers were less willing to take ownership of their statements. Cues stemming from deceivers' attempts to appear truthful were present: deceivers used more words and more exact language than truth tellers. Deceivers' language was also simpler than that of truth tellers, which indicated a higher cognitive load. Future research should include manipulation checks on motivation and emotion, which are tied to cue display. The type of cue displayed, be it emotional leakage, verbal immediacy, attempts to appear truthful, or cognitive load, might be associated with particular deception tasks. Future research, including meta-analyses, should attempt to determine which deception tasks produce which cue type. / Master of Arts
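The "simpler language" cue mentioned above can be approximated with two standard complexity proxies, mean word length and mean sentence length. These proxies are illustrative only, not the study's actual measures; under the study's finding, both would tend to be lower for deceptive essays.

```python
import re

def complexity(text):
    """Crude linguistic-complexity proxies: mean word length (characters
    per word) and mean sentence length (words per sentence)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    mean_word_len = sum(map(len, words)) / len(words)
    mean_sent_len = len(words) / len(sentences)
    return mean_word_len, mean_sent_len
```

Comparing these values across a participant's truthful and deceptive essays gives a rough, per-writer baseline for the cognitive-load cue.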
14

Online Deception Detection Using BDI Agents

Merritts, Richard Alan 01 January 2013
This research has two facets in separate research areas. It extends the research area of Belief, Desire and Intention (BDI) agent capability development, and it advances deception detection research through the development of automation using BDI agents. BDI agents perform tasks automatically and autonomously; this study used these characteristics to automate deception detection with limited intervention by human users, yielding a capability general enough to have practical application for private individuals, investigators, organizations, and others. The need for this research is grounded in the fact that humans are not very effective at detecting deception, whether in written or spoken form, and typical deception detection tools are labor intensive, requiring extraction of the text in question followed by ingestion into a deception detection tool. A neural network capability module was incorporated to give the resulting prototype machine learning attributes. The prototype developed as a result of this research was able to classify online data as either "deceptive" or "not deceptive" with 85% accuracy. The false discovery rate for "deceptive" online data entries was 20%, while the false discovery rate for "not deceptive" entries was 10%. The system showed stability during test runs: no computer crashes or other anomalous system behavior was observed during the testing phase. The prototype successfully interacted with an online data communications server database and processed data using neural network input vector generation algorithms within seconds.
15

A system of deception and fraud detection using reliable linguistic cues including hedging, disfluencies, and repeated phrases

Humpherys, Sean L. January 2010
Given the increasing problem of fraud, crime, and national security threats, assessing credibility is a recurring research topic in Information Systems and in other disciplines. Decision support systems can help, but the success of such a system depends on reliable cues that can distinguish deceptive from truthful behavior and on a proven classification algorithm. This investigation aims to identify linguistic cues that distinguish deceivers from truth-tellers, and to demonstrate how those cues can successfully classify deception and truth.

Three new datasets were gathered: 202 fraudulent and nonfraudulent financial disclosures (10-Ks); a laboratory experiment that asked twelve questions of participants who answered deceptively to some questions and truthfully to others (Cultural Interviews); and a mock crime experiment in which some participants stole a ring from an office and all participants were interviewed as to their guilt or innocence (Mock Crime). Transcribed participant responses were investigated for distinguishing cues and used for classification testing.

Disfluencies (e.g., um, uh, repeated phrases), hedging words (e.g., perhaps, may), and interjections (e.g., okay, like) are theoretically developed as potential cues to deception. Past research provides conflicting evidence regarding disfluency use and deception. Some researchers opine that deception increases cognitive load, which lowers attentional resources, which increases speech errors and thereby increases disfluency use (Cognitive-Load Disfluency theory). Others argue against the causal link between disfluencies and speech errors, positing that disfluencies are controllable and that deceivers strategically avoid them to avoid appearing hesitant or untruthful (Suppression-Disfluency theory). A series of t-tests, repeated measures GLMs, and nested-model design regressions disconfirm the Suppression-Disfluency theory: um, uh, and interjections are used at an increased rate by deceivers in spontaneous speech. Reverse order questioning did not increase disfluency use. Fraudulent 10-Ks have a higher mean count of hedging words.

Statistical classifiers and machine learning algorithms are demonstrated on the three datasets. Feature reduction by backward Wald stepwise selection with logistic regression had the highest classification accuracies (69%-87%). Accuracies are compared to professional interviewers and to previously researched classification models; in many cases the new models demonstrated improvements. 10-Ks are classified with 69% overall accuracy.
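Cue counting of the kind this dissertation builds on can be sketched as below. The lexicons here are abbreviated and hypothetical; the dissertation's cue sets are larger and theoretically derived.

```python
import re
from collections import Counter

# Illustrative (not the dissertation's) cue lexicons.
HEDGES = {"perhaps", "may", "might", "possibly", "maybe"}
DISFLUENCIES = {"um", "uh", "er"}
INTERJECTIONS = {"okay", "like", "well"}

def cue_counts(transcript):
    """Count hedging, disfluency, and interjection cues in one
    transcribed response."""
    tokens = Counter(re.findall(r"[a-z']+", transcript.lower()))
    tally = lambda lexicon: sum(tokens[w] for w in lexicon)
    return {
        "hedges": tally(HEDGES),
        "disfluencies": tally(DISFLUENCIES),
        "interjections": tally(INTERJECTIONS),
    }
```

Per-speaker counts like these are the kind of features that could then feed the t-tests and stepwise logistic regression described above.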
16

Automated Human Screening for Detecting Concealed Knowledge

Twyman, Nathan W. January 2012
Screening individuals for concealed knowledge has traditionally been the purview of professional interrogators investigating a crime. But the ability to detect when a person is hiding important information would be of high value to many other fields and functions. This dissertation proposes design principles for and reports on an implementation and empirical evaluation of a non-invasive, automated system for human screening. The screening system design (termed an automated screening kiosk or ASK) is patterned after a standard interviewing method called the Concealed Information Test (CIT), which is built on theories explaining psychophysiological and behavioral effects of human orienting and defensive responses. As part of testing the ASK proof of concept, I propose and empirically examine alternative indicators of concealed knowledge in a CIT. Specifically, I propose kinesic rigidity as a viable cue, propose and instantiate an automated method for capturing rigidity, and test its viability using a traditional CIT experiment. I also examine oculomotor behavior using a mock security screening experiment using an ASK system design. Participants in this second experiment packed a fake improvised explosive device (IED) in a bag and were screened by an ASK system. Results indicate that the ASK design, if implemented within a highly controlled framework such as the CIT, has potential to overcome barriers to more widespread application of concealed knowledge testing in government and business settings.
17

Online Deceit: The Use of Idiosyncratic Cues in Identifying Duplicitous User-generated Content

Christopher R Roland 15 August 2019
The emergence of online information-seekers harnessing the aggregated experiences of others to evaluate online information has coincided with deceptive entities exploiting this tool to bias judgments. One method through which deceit about user-generated content can occur is for a single entity to impersonate multiple, independent content providers in order to saturate content samples. Two studies are introduced to explore how idiosyncratic indicators, features co-occurring between content messages that implicate a higher probability of deceit, can be used as a criterion to identify content that is not independently authored. In Study 1, analyses of pairwise comparisons of hypothetical reviews revealed that ratings of content independence were significantly lower when review pairs co-occurred in attributes, text, and usernames than when they were heterogeneous. In a high-fidelity experiment, Study 2 assessed whether the effect of idiosyncratic indicators on independence increases in the presence of multiple indicators, whether it is attenuated with a high number of reviews, and whether it impacts factors relevant to the choice selection process. As expected, the findings of Study 1 were replicated, and the presence of multiple idiosyncratic cues yielded lower independence ratings. An interaction between idiosyncratic indicators and review number was also observed: the effect of the indicators on independence was attenuated when there were a high number of reviews to obscure their presence.
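Scoring co-occurring idiosyncratic indicators between two reviews could look something like the following sketch. The indicator set (matching usernames, high text overlap, identical ratings) and the overlap threshold are hypothetical, not the studies' operationalization.

```python
import re

def jaccard(a, b):
    """Token-set overlap between two review texts (0.0 to 1.0)."""
    ta = set(re.findall(r"[a-z']+", a.lower()))
    tb = set(re.findall(r"[a-z']+", b.lower()))
    return len(ta & tb) / len(ta | tb)

def idiosyncrasy_score(r1, r2, text_threshold=0.5):
    """Count co-occurring idiosyncratic indicators between two reviews;
    a higher score suggests the pair is less likely to be independently
    authored."""
    score = 0
    if r1["user"].lower() == r2["user"].lower():
        score += 1  # same username, ignoring case
    if jaccard(r1["text"], r2["text"]) >= text_threshold:
        score += 1  # near-duplicate text
    if r1["rating"] == r2["rating"]:
        score += 1  # identical star rating
    return score
```

Applied over all pairs in a review sample, such scores would flag clusters of suspiciously similar content for closer inspection.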
18

The Effects of an Expert System on Novice and Professional Decision Making with Application in Deception Detection

Jensen, Matthew Lynn January 2007
One effective way for organizations to capture expert knowledge and experience is to encapsulate it within an expert system (ES) and make that system available to others. While ES users have access to the system's knowledge, they shoulder the difficult task of appropriately incorporating the ES recommendations into the decision-making process.

One proposed application of an ES is in the realm of deception detection. Humans are inherently poor at recognizing deception when it occurs, and their confidence in their judgments is poorly calibrated to their performance. An ES has the potential to significantly improve deception detection; however, joining an ES and a human decision maker creates many important questions that must be addressed before such a system will be useful in a field environment. These questions concern changes in decision outcomes, decision processes, and the decision maker that result from ES use.

To examine these questions, a prototype system was created that implements new and unobtrusive methods of deception detection: kinesic analysis, which examines the body movement of a potential deceiver, and linguistic analysis, which reviews the structure of a potential deceiver's utterances. This prototype, complete with explanations, was utilized in two experiments that examined the effects of access to the prototype, the accuracy level of the prototype, user training in deception detection, and novice or professional lie-catcher status of the users.

Use of the prototype system was found to significantly improve professional and novice accuracy rates and confidence alignment. Training was found to have no effect on novice accuracy rates. The accuracy level of the prototype significantly elevated accuracy rates and confidence alignment among novices; however, this improvement was imperceptible to the novices. Novices using the prototype performed on a level equivalent to professionals using the prototype. Neither professional nor novice users of the prototype exceeded the performance of the prototype system alone. Implications of these findings include emphasizing the development of computer-based tools to detect deception and defining a new role for human users of such tools.
19

Exoneration or Observation? Examining a Novel Difference Between Liars and Truth Tellers

Molinaro, Peter F 26 March 2015
Individual cues to deception are subtle and often missed by lay people and law enforcement alike. Linguistic statement analysis remains a potentially useful way of overcoming individual diagnostic limitations (e.g., Criteria-Based Content Analysis, Steller & Köhnken, 1989; Reality Monitoring, Johnson & Raye, 1981; Scientific Content Analysis, Sapir, 1996). Unfortunately, many of these procedures are time-consuming, require in-depth training, and lack empirical support and/or external validity. The current dissertation develops a novel approach to statement veracity analysis that is simple to learn, easy to administer, theoretically sound, and empirically validated. Two strategies were proposed for detecting differences between liars' and truth-tellers' statements. Liars were hypothesized to strategically write statements with the goal of self-exoneration; their statements were predicted to contain more first-person pronouns and fewer third-person pronouns. Truth-tellers were hypothesized to be motivated toward being informative and thus to produce statements with fewer first-person pronouns and more third-person pronouns. Three studies were conducted to test this hypothesis. The first explored the verbal patterns of exoneration-focused and informativeness-focused statements. The second used a traditional theft paradigm to examine these verbal patterns in guilty liars and innocent truth tellers. In the third, to better match the context of a criminal investigation, a cheating paradigm was used in which spontaneous lying was induced and written statements were taken. Support was found for the first-person pronoun hypothesis; limited support was found for the third-person pronoun hypothesis. Results, implications, and future directions for the current research are discussed.
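The first- versus third-person pronoun contrast at the heart of this hypothesis reduces to a simple rate measure. This sketch uses abbreviated, illustrative pronoun lists (the actual analysis would use complete pronoun categories):

```python
import re

# Abbreviated pronoun lists for illustration only.
FIRST = {"i", "me", "my", "mine", "myself"}
THIRD = {"he", "she", "him", "her", "his", "hers", "they", "them", "their"}

def pronoun_profile(statement):
    """Return (first-person rate, third-person rate) per word. Under the
    dissertation's hypothesis, self-exonerating (deceptive) statements
    skew first-person; informative (truthful) ones skew third-person."""
    words = re.findall(r"[a-z']+", statement.lower())
    n = max(len(words), 1)
    first = sum(w in FIRST for w in words)
    third = sum(w in THIRD for w in words)
    return first / n, third / n
```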
20

The Effect of Cognitive Load on Deception

Patterson, Terri 02 October 2009
The current study applied classic cognitive capacity models to examine the effect of cognitive load on deception, and whether manipulating cognitive load would magnify differences between liars and truth-tellers. In the first study, 87 participants engaged in videotaped interviews while being either deceptive or truthful about a target event. Some participants engaged in a concurrent secondary task while being interviewed, and performance on the secondary task was measured. As expected, truth tellers performed better on secondary task items than liars, as evidenced by higher accuracy rates. These results confirm the long-held assumption that being deceptive is more cognitively demanding than being truthful. In the second part of the study, the videotaped interviews of both liars and truth-tellers were shown to 69 observers. After watching the interviews, observers were asked to make a veracity judgment for each participant. Observers made more accurate veracity judgments when viewing participants who engaged in a concurrent secondary task than when viewing those who did not, and indicated that participants who engaged in the secondary task appeared to think harder than those who did not. As hypothesized, having participants engage in a concurrent secondary task magnified the differences between liars and truth tellers, leading to more accurate veracity judgments by a second group of observers. The implications for deception detection are discussed.
