1 |
Communication Under Stress: Indicators of Veracity and Deception in Written Narratives
Adams, Susan H. 01 May 2002 (has links)
This exploratory study examines linguistic and structural features of written narratives for their predictive value in determining the likelihood of veracity or deception. Sixty narratives written by suspects and victims identified through the investigation of criminal incidents served as the database. The law enforcement context allowed for the examination of communication under stress. Because a retrospective approach was used, the veracity or deception of each narrative had already been determined; the study could therefore focus on the degree to which selected linguistic and structural attributes predicted veracity and deception.
Six research questions guided the study, drawn from theoretical works and research in psychology, linguistics, and criminal justice. Three questions asked whether a positive relationship exists between deception of the narratives and the narrative attributes of equivocation, negation, and relative length of the prologue partition. Three questions asked whether a positive relationship exists between veracity of the narratives and unique sensory details, emotions in the conclusion partition, and quoted discourse. Support was found for the three questions relating to deception and for a relationship between veracity and unique sensory details. Weak support was found for a relationship between veracity and emotions in the conclusion partition. No relationship was found between veracity and the general category of quoted discourse. When quoted discourse without quotation marks was examined separately, a weak relationship with veracity was found. An additional finding was a relationship between the relative length of the criminal incident partition and veracity.
A logistic regression model was developed to predict veracity or deception using the six predictors from the research questions. The resulting model classified the examined narratives with 82.1% accuracy. The most significant predictor of veracity was unique sensory details; the most significant predictor of deception was the relative length of the prologue partition.
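The abstract does not reproduce the model specification; purely as an illustrative sketch of how a six-predictor logistic regression of this kind could be fit and scored, assuming scikit-learn and synthetic data with hypothetical feature names:

```python
# Illustrative sketch only: fits a binary logistic regression over six narrative
# features and reports cross-validated classification accuracy. Feature names,
# data, and labels here are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60  # the study analyzed sixty narratives
# Hypothetical predictors: equivocation, negation, prologue_length,
# unique_sensory_details, conclusion_emotions, quoted_discourse (standardized).
X = rng.normal(size=(n, 6))
y = rng.integers(0, 2, size=n)  # 1 = deceptive, 0 = truthful (synthetic labels)

model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"Cross-validated accuracy: {accuracy:.1%}")

model.fit(X, y)
print("Coefficients (positive values push toward 'deceptive'):", model.coef_.round(2))
```

On the real narrative features, the sign and magnitude of each fitted coefficient would indicate whether an attribute pushes the prediction toward deception (e.g., prologue length) or toward veracity (e.g., unique sensory details).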
The analysis of the examined narratives written by suspects and victims suggests that linguistic and structural features of written narratives are predictive of the likelihood of veracity and deception. These results lend support to the Undeutsch Hypothesis (1989) that truthful narratives differ from fabricated narratives in structure and content. / Ph. D.
|
2 |
The Dangers of Speaking a Second Language: An Investigation of Lie Bias and Cognitive Load
Dippenaar, Andre 21 January 2021 (has links)
Today's world is an interconnected global village. Communication and business transactions are increasingly conducted in non-native languages. The literature suggests that biases are present when communicating in non-native languages: a truth bias is present in first-language communication and a lie bias in second-language communication. Less than 10% of South Africa's population identifies with English, the lingua franca of the country, as a first language, yet little research on the presence of bias in second-language communication has been published in the South African multilingual context. This study evaluated the presence of bias within deception frameworks such as Truth Default Theory and the veracity effect. It also investigated whether deception detection can be improved by modifying the conditions under which statements are given, namely by placing statement providers under cognitive load. The accuracy of the language-profiling software LIWC2015, using published deception language profiles, was compared against that of the participating veracity judges. Results of the study were mixed: they were consistent with the extant literature in showing an overall truth bias, but mixed in terms of a lie bias. The results supported the Truth Default Theory and veracity effect frameworks. LIWC2015 performed marginally better than human judges in evaluating veracity.
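Purely as an illustration of how the biases discussed above are commonly quantified, the truth bias can be expressed as the share of statements judged true and the veracity effect as accuracy split by actual veracity; the sketch below computes both per language condition from synthetic judgments (not the study's data or analysis code):

```python
# Illustrative sketch: truth bias (share of "true" judgments) and accuracy on
# truthful vs. deceptive statements, per language condition. Synthetic data only.
judgments = [
    # (language, actual_veracity, judged_veracity)
    ("L1", "truth", "truth"), ("L1", "lie", "truth"), ("L1", "truth", "truth"),
    ("L2", "truth", "lie"),   ("L2", "lie", "lie"),   ("L2", "lie", "lie"),
]

for lang in ("L1", "L2"):
    rows = [r for r in judgments if r[0] == lang]
    truth_bias = sum(r[2] == "truth" for r in rows) / len(rows)
    truths = [r for r in rows if r[1] == "truth"]
    lies = [r for r in rows if r[1] == "lie"]
    acc_truth = sum(r[1] == r[2] for r in truths) / len(truths)
    acc_lie = sum(r[1] == r[2] for r in lies) / len(lies)
    print(f"{lang}: truth bias={truth_bias:.2f}, "
          f"accuracy on truths={acc_truth:.2f}, accuracy on lies={acc_lie:.2f}")
```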
|
3 |
Toward Attack-Resistant Distributed Information Systems by Means of Social Trust
Sirivianos, Michael January 2010 (has links)
Trust has played a central role in the design of open distributed systems that span distinct administrative domains. When components of a distributed system can assess the trustworthiness of their peers, they are in a better position to interact with them. There are numerous examples of distributed systems that employ trust inference techniques to regulate the interactions of their components, including peer-to-peer file-sharing systems, website and email-server reputation services, and web search engines.
The recent rise in popularity of Online Social Networking (OSN) services has made an additional dimension of trust readily available to system designers: social trust. By social trust, we refer to the trust information embedded in social links as annotated by users of an OSN. The overarching contribution of this thesis is a set of methods for employing the social trust embedded in OSNs to solve two distinct and significant problems in distributed information systems.
The first system proposed in this thesis assesses the ability of OSN users to correctly classify online identity assertions. The second system assesses the ability of OSN users to correctly configure devices that classify spamming hosts. In both systems, an OSN user explicitly ascribes to his friends a value that reflects how trustworthy he considers their classifications. In addition, both solutions compare the classification input of friends to obtain a more accurate measure of their pairwise trust. Our solutions also exploit trust transitivity over the social network to assign trust values to the OSN users. These values are used to weigh the classification input by each user in order to derive an aggregate trust score for the identity assertions or the hosts.
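The thesis details the full trust-inference machinery; purely as a simplified sketch of the final aggregation step described above, each user's classification can be weighted by that user's inferred trust value to produce an aggregate score for an assertion or host (function and variable names below are hypothetical):

```python
# Simplified illustration of trust-weighted aggregation: each user's vote on an
# assertion or host (+1 = accept/legitimate, -1 = reject/spammer) is weighted by
# the trust value assigned to that user, yielding a score in [-1, 1].
# This sketches only the aggregation step, not the thesis' inference algorithms.
def aggregate_score(votes, trust):
    """votes: {user: +1 or -1}; trust: {user: trust value in [0, 1]}."""
    total_trust = sum(trust.get(user, 0.0) for user in votes)
    if total_trust == 0:
        return 0.0
    return sum(vote * trust.get(user, 0.0) for user, vote in votes.items()) / total_trust

votes = {"alice": +1, "bob": +1, "mallory": -1}
trust = {"alice": 0.9, "bob": 0.7, "mallory": 0.1}  # e.g., from Sybil-resistant trust inference
print(f"Aggregate score: {aggregate_score(votes, trust):+.2f}")
```

Weighting classifications by trust in this way means that low-trust identities (such as Sybils) contribute little to the final score, which is the property both systems described in the abstract rely on.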
In particular, the first problem involves the assessment of the veracity of assertions on identity attributes made by online users. Anonymity is one of the main virtues of the Internet. It protects privacy and freedom of speech, but makes it hard to assess the veracity of assertions made by online users concerning their identity attributes (e.g., age or profession). We propose FaceTrust, the first system that uses OSN services to provide lightweight identity credentials while preserving a user's anonymity. FaceTrust employs a "game with a purpose" design to elicit the opinions of the friends of a user about the user's self-claimed identity attributes, and uses attack-resistant trust inference to compute veracity scores for the attributes. FaceTrust then provides credentials, which a user can use to corroborate his online identity assertions.
We evaluated FaceTrust using a crawled social network graph as well as a real-world deployment. The results show that our veracity scores strongly correlate with the ground truth, even when a large fraction of the social network users are dishonest. For example, in our simulation over the sample social graph, when 50% of users were dishonest and each user employed 1000 Sybils, the false assertions obtained only approximately 10% of the veracity score of the true assertions. We have derived the following lessons from the design and deployment of FaceTrust: a) it is plausible to obtain a relatively reliable measure of the veracity of identity assertions by relying on the friends of the user who made the assertion to classify them, and by employing social trust to determine the trustworthiness of the classifications; b) it is plausible to employ trust inference over the social graph to effectively mitigate Sybil attacks; c) users tend to classify their friends' identity assertions mostly correctly.
The second problem in which we apply social trust involves assessing the trustworthiness of reporters (detectors) of spamming hosts in a collaborative spam mitigation system. Spam mitigation can be broadly classified into two main approaches: a) centralized security infrastructures that rely on a limited number of trusted monitors (reporters) to detect and report malicious traffic; and b) highly distributed systems that leverage the experiences of multiple nodes within distinct trust domains. The first approach offers limited threat coverage and slow response times, and it is often proprietary. The second approach is not widely adopted, partly due to the lack of assurances regarding the trustworthiness of the reporters.
Our proposal, SocialFilter, aims to achieve the trustworthiness of centralized security services and the wide coverage, responsiveness, and inexpensiveness of large-scale collaborative spam mitigation. It enables nodes with no email classification functionality to query the network on whether a host is a spammer. SocialFilter employs trust inference to weigh the reports concerning spamming hosts that collaborating reporters submit to the system. To the best of our knowledge, it is the first collaborative threat mitigation system that assesses the trustworthiness of the reporters both by auditing their reports and by leveraging the social network of the reporters' human administrators. Subsequently, SocialFilter weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer.
We performed a simulation-based evaluation of SocialFilter, which indicates its potential: during a simulated spam campaign, SocialFilter correctly classified 99% of spam while yielding no false positives. The design and evaluation of SocialFilter offered us the following lessons: a) it is plausible to introduce Sybil-resilient OSN-based trust inference mechanisms to improve the reliability and attack-resilience of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers (spammer reports) can result in spam-blocking effectiveness comparable to that of approaches that use social links to rate-limit spam (e.g., Ostra); c) unlike Ostra, SocialFilter yields no false positives. We believe that the design lessons from SocialFilter are applicable to other collaborative entity classification systems. / Dissertation
|
4 |
BIG DATA: From hype to reality
Danesh, Sabri January 2014 (has links)
Big data is all of a sudden everywhere. It is too big to ignore! It has been six decades since the computer revolution, four decades since the development of the microchip, and two decades since the advent of the modern Internet! More than a decade after the 90s “.com” fizz, can Big Data be the next Big Bang? Big data reveals part of our daily lives. It has the potential to solve virtually any problem for a better urbanized globe. Big Data sources are also very interesting from an official statistics point of view. The purpose of this paper is to explore the conceptions of big data and the opportunities and challenges associated with using big data, especially in official statistics. “A petabyte is the equivalent of 1,000 terabytes, or a quadrillion bytes. One terabyte is a thousand gigabytes. One gigabyte is made up of a thousand megabytes. There are a thousand thousand—i.e., a million—petabytes in a zettabyte” (Shaw 2014). And this is to be continued…
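The unit relationships in the quoted passage can be checked with simple powers of 1,000 (decimal/SI units, as the quotation uses); a minimal sketch:

```python
# Decimal (SI) byte units, as in the quoted passage: each step is a factor of 1,000.
KB = 1000
MB = 1000 * KB
GB = 1000 * MB
TB = 1000 * GB
PB = 1000 * TB        # a petabyte is 1,000 terabytes (a quadrillion bytes)
EB = 1000 * PB
ZB = 1000 * EB

assert PB == 10 ** 15          # a quadrillion bytes
assert TB == 1000 * GB         # one terabyte is a thousand gigabytes
assert GB == 1000 * MB         # one gigabyte is a thousand megabytes
assert ZB // PB == 1_000_000   # a million petabytes in a zettabyte
print("All unit relationships from the quotation check out.")
```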
|
5 |
The Effect of Cognitive Load on Deception
Patterson, Terri 02 October 2009 (has links)
The current study applied classic cognitive capacity models to examine the effect of cognitive load on deception. The study also examined whether the manipulation of cognitive load would result in the magnification of differences between liars and truth-tellers. In the first part of the study, 87 participants engaged in videotaped interviews while being either deceptive or truthful about a target event. Some participants engaged in a concurrent secondary task while being interviewed, and performance on the secondary task was measured. As expected, truth-tellers performed better on secondary task items than liars, as evidenced by higher accuracy rates. These results confirm the long-held assumption that being deceptive is more cognitively demanding than being truthful. In the second part of the study, the videotaped interviews of both liars and truth-tellers were shown to 69 observers. After watching the interviews, observers were asked to make a veracity judgment for each participant. Observers made more accurate veracity judgments when viewing participants who engaged in a concurrent secondary task than when viewing those who did not. Observers also indicated that participants who engaged in a concurrent secondary task appeared to think harder than participants who did not. This study provides evidence that engaging in deception is more cognitively demanding than telling the truth. As hypothesized, having participants engage in a concurrent secondary task led to the magnification of differences between liars and truth-tellers, which in turn led to more accurate veracity judgments by a second group of observers. The implications for deception detection are discussed.
|
6 |
Detecting and Mitigating Rumors in Social Media
Islam, Mohammad Raihanul 19 June 2020 (has links)
The penetration of social media today enables the rapid spread of breaking news and other developments to millions of people across the globe within hours. However, such pervasive use of social media by the general public to receive and consume news is not without its negative consequences, as it also opens opportunities for nefarious elements to spread rumors or misinformation. A rumor generally refers to an interesting piece of information that is widely disseminated through a social network and whose credibility cannot be easily substantiated. A rumor can later turn out to be true or false or remain unverified. The spread of misinformation and fake news can lead to deleterious effects on users and society. The objective of the proposed research is to develop a range of machine learning methods that will effectively detect and characterize rumor veracity in social media. Since users are the primary protagonists on social media, analyzing the characteristics of information spread with respect to users can be effective for our purpose. For our first problem, we propose a method of computing user embeddings from the underlying social network. For our second problem, we propose a long short-term memory (LSTM) based model that can classify whether a story discussed in a thread is a false, true, or unverified rumor, and we demonstrate the utility of the user features computed in the first problem for this task. For our third problem, we propose a method that uses user profile information to detect rumor veracity; this method has the advantage of not requiring the underlying social network, which can be tedious to compute. For the last problem, we investigate a rumor mitigation technique that recommends fact-checking URLs to rumor debunkers, i.e., social network users who are very passionate about disseminating true news. Here, we incorporate the influence of other users on rumor debunkers, in addition to their previous URL-sharing history, to recommend relevant fact-checking URLs. / Doctor of Philosophy / A rumor is generally defined as an interesting piece of a story that cannot be easily authenticated. On social networks, a user can find an interesting piece of news or a story and share (retweet) it. A story that initially appears plausible can later turn out to be false or remain unverified. The propagation of false rumors on social networks has a deteriorating effect on user experience, so rumor veracity detection is important and is drawing interest in social network research. In this thesis, we develop various machine learning models that detect rumor veracity. For this purpose, we exploit different types of information about users, such as profile details and connectivity with other users. Moreover, we propose a rumor mitigation technique that recommends fact-checking URLs to social network users who are passionate about debunking rumors. Here, we leverage techniques similar to those used on e-commerce sites for recommending products.
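The thesis' actual models also incorporate user embeddings and thread structure; purely as a bare-bones illustration of the three-way (true / false / unverified) LSTM classification idea described above, a PyTorch sketch with assumed, illustrative dimensions might look like this:

```python
# Bare-bones sketch of a three-class (true / false / unverified) LSTM rumor
# classifier over a thread of post embeddings. Dimensions are illustrative
# assumptions; the thesis' actual models also incorporate user features.
import torch
import torch.nn as nn

class RumorLSTM(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, posts):            # posts: (batch, seq_len, input_dim)
        _, (h_n, _) = self.lstm(posts)   # h_n: (1, batch, hidden_dim)
        return self.classifier(h_n[-1])  # logits: (batch, num_classes)

model = RumorLSTM()
thread_batch = torch.randn(4, 20, 300)   # 4 threads, 20 posts each, 300-d embeddings
logits = model(thread_batch)
print(logits.shape)                      # torch.Size([4, 3])
```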
|
7 |
Modelo de avaliação de conjuntos de dados científicos por meio da dimensão de veracidade dos dados. / Scientific datasets evaluation model based on the data veracity dimension.
Batista, André Filipe de Moraes 06 November 2018 (has links)
Science is a social organization: independent collaboration groups work to generate knowledge as a public good. The credibility of scientific work is rooted in the evidence that supports it, which includes the applied methodology, the acquired data, and the processes used to execute the experiments, analyze the data, and interpret the results. The data deluge in which current science is immersed is revolutionizing the way research is conducted, resulting in a new paradigm of data-driven science. Under this paradigm, new activities are inserted into the scientific method to organize the process of generating, curating, and publishing data, benefiting the scientific community with the reuse of scientific datasets and the reproducibility of experiments. In this context, new approaches to problem solving are being presented, obtaining results that were previously considered very difficult to achieve and enabling the generation of new knowledge. Several portals now provide datasets resulting from scientific research. However, such portals do little to address the context in which the datasets were created, making the data harder to understand and opening space for misuse or misinterpretation. In the Big Data area, the dimension that deals with this aspect is called Veracity; few studies in the literature address it, focusing instead on the volume, variety, and velocity of data. This research aimed to define an evaluation model for scientific datasets through the construction of an application profile that standardizes the description of scientific datasets. This standardization is based on the concept of the data Veracity dimension, defined throughout the research, and it allows the development of metrics that form a Veracity Index for scientific datasets. The index seeks to reflect the level of detail of a dataset, based on the use of the descriptive elements, which facilitates the reuse of the data and the reproducibility of scientific experiments. The index has two dimensions: an intrinsic dimension, which can be used as an admission criterion for datasets in data-publication portals, and a social dimension, in which the scientific community collaboratively assesses the level of description, the comprehensibility, and the suitability of the dataset for a given research or application area. For the proposed evaluation model, a case study was developed describing a dataset from an international scientific project, the GoAmazon project, in order to validate the model among peers; it demonstrates the potential of the solution to support the reuse and reproducibility of data and shows that such an index can be incorporated into scientific data portals.
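The thesis defines the index through an application profile; purely as a toy illustration of the two-dimensional idea (an intrinsic completeness score over descriptive elements combined with a community-assessed social score), with hypothetical element names and weights:

```python
# Toy illustration of a two-part veracity index for a dataset record:
# an intrinsic score (share of required description elements present) plus a
# social score (normalized community rating), combined with hypothetical weights.
REQUIRED_ELEMENTS = ["methodology", "instruments", "collection_period",
                     "processing_steps", "license", "contact"]

def veracity_index(record, community_ratings, w_intrinsic=0.6, w_social=0.4):
    filled = sum(bool(record.get(element)) for element in REQUIRED_ELEMENTS)
    intrinsic = filled / len(REQUIRED_ELEMENTS)               # in [0, 1]
    social = (sum(community_ratings) / (5 * len(community_ratings))
              if community_ratings else 0.0)                  # ratings on a 1-5 scale
    return w_intrinsic * intrinsic + w_social * social

record = {"methodology": "...", "instruments": "...", "license": "CC-BY"}
print(f"Toy veracity index: {veracity_index(record, [4, 5, 3]):.2f}")
```

In this toy record, three of the six hypothetical elements are filled and the community rating averages 4 out of 5, so the index comes out at 0.62.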
|
9 |
Pravdivost díla Historia Ecclesiastica napsaného Eusebiem Cézarejským / The hidden truth of Historia Ecclesiastica written by Eusebius of Caesarea
Brychtová, Petra January 2016 (has links)
The master's thesis examines individual chapters of the Church History (Historia Ecclesiastica) written by Eusebius of Caesarea, who is known as the "father of church history", even though the book contains a great number of ambiguities, contradictions, and inaccuracies, and its overall content reads more as an attempt at apologetics than as a serious historical work. The thesis draws on a large number of sources by respected scholars in the fields of early church theology and history. Its aim is to examine thoroughly the chapters that contain the largest number of problematic passages, and to conclude whether Eusebius deliberately tried to "bend" the truth out of his sincere faith, or whether his goal was to write a defense of Christianity that merely poses as a serious historical work. Annotation: The master's thesis focuses on particular chapters of Historia Ecclesiastica written by Eusebius of Caesarea, who is renowned as the "father of church history", although the book contains a number of serious mistakes, interpolations, discrepancies, and exaggerations. In its complexity, it could be perceived as an apologetic writing rather than a historical one. I used a great number of sources by respected scholars in my master's thesis, whose aim is the study of the particular chapters that demonstrate the most controversies. In the conclusion I expect the biggest challenge will...
|
10 |
Relação de substituição de ações em operações de incorporação e incorporação de ações / Share exchange ratio in mergers and merger of shares
Corradini, Luiz Eduardo Malta 16 May 2014 (has links)
This work analyzes the existing regulation of the share exchange ratio in mergers (incorporação de sociedades) and mergers of shares (incorporação de ações). To that end, it examines the legal provisions, the doctrinal positions, and the case law on the subject under Brazilian law and in comparative law. The first chapter studies the concepts of the merger and the merger of shares, analyzing the legal nature of these transactions and the procedures required to carry out each of them. The second chapter examines the share exchange ratio itself and its legal nature, followed by an analysis of the legal framework within which the exchange ratio is determined. In this context, the chapter addresses the criteria, parameters, and requirements that guide its definition, as well as the mechanisms established by corporate law to reconcile the different interests involved, notably those of the minority shareholders and of the controlling shareholder. The third chapter analyzes certain special situations concerning mergers and mergers of shares that involve publicly held corporations or corporations under common control, as well as the establishment of different exchange ratios between shares of different types and classes and between shares of the same type and class. The conclusion, finally, summarizes the main ideas discussed throughout the work.
|