1 |
A taxonomy of software bots: towards a deeper understanding of software bot characteristics. Lebeuf, Carlene R., 31 August 2018.
Software bots are becoming increasingly pervasive in our everyday lives. While bots have been around for many decades, recent technological advancements and the adoption of language-based platforms have led to a surge of new, ubiquitous software bots. Although many new bots are being built, the terminology used to describe them and their properties is vast, diverse, and often inconsistent. This hinders our ability to study, understand, and classify bots, and restricts our ability to help practitioners design and evaluate them.
The overarching goal of this thesis is to provide a deeper understanding of the complexities of modern software bots. To achieve this, I reflect on a multitude of existing software bot definitions and classifications. Moreover, I propose an updated definition for bots and compare them to other bot-like technologies. As my main contribution, I formally define a set of consistent terminology for describing and classifying software bots, through the development of a faceted taxonomy of software bots. The taxonomy focuses on the observable properties and behaviours of software bots, abstracting details pertaining to their structure and implementation, to help safeguard against technological change. To bridge the gap between existing research and the proposed taxonomy, I map the terminology used in previous literature to the terminology used in the software bot taxonomy. Lastly, to make my contributions actionable, I provide guidelines to illustrate how the proposed taxonomy can be leveraged by researchers, practitioners, and users.
|
2 |
Types of Bots: Categorization of Accounts Using Unsupervised Machine Learning. January 2019.
Social media bot detection has been a signature challenge of recent years in online social networks. Many scholars agree that the bot detection problem has become an "arms race" between malicious actors, who create bots to influence opinion on these networks, and the social media platforms that try to remove those accounts. Despite this acknowledged issue, bots remain present on social media networks, so it has become necessary to monitor different bots over time to identify changes in their activities or domain. Since monitoring individual accounts is not feasible (the bots may get suspended or deleted), bots should be observed in smaller groups, as types, based on their characteristics. Yet most existing research on social media bot detection focuses on labeling bot accounts only by distinguishing them from human accounts, ignoring differences between individual bot accounts. Considering these bot types may be the best approach for researchers and social media companies alike, as it is in both of their interests to study the types separately. However, until now, bot categorization has only been theorized or done manually. The goal of this research is therefore to automate the process of grouping bots by type. To accomplish this, the author experimentally demonstrates that unsupervised machine learning can categorize bots into types: an aggregated dataset is created, the accounts within it are confirmed to be bots, and an existing typology for bots is applied. The ability to differentiate between types of bots automatically will allow social media experts to analyze bot activity from a new perspective, at a more granular level. Researchers can then identify patterns in a given bot type's behaviors over time and determine whether certain detection methods are more viable for that type.
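The core idea, clustering already-flagged bot accounts into behavioural types, can be pictured with a toy example. The features, values, and the simplified k-means below are illustrative assumptions, not the thesis's actual dataset or pipeline.

```python
# Minimal sketch: grouping accounts already identified as bots into
# behavioural types with a tiny k-means (stdlib only). Feature names and
# values are hypothetical, chosen only to make the clusters visible.
from math import dist

def kmeans(points, k, iters=20):
    # Simplified deterministic initialization: every (len/k)-th point.
    step = len(points) // k
    centroids = [points[i * step] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(k), key=lambda i: dist(p, centroids[i]))
                  for p in points]
        # Move each centroid to the mean of its members.
        for i in range(k):
            members = [p for p, lab in zip(points, labels) if lab == i]
            if members:
                centroids[i] = tuple(sum(c) / len(c) for c in zip(*members))
    return labels

# Hypothetical per-account features: (retweet ratio, URLs per tweet)
accounts = [
    (0.95, 0.10), (0.90, 0.15),  # amplifier-like: mostly retweets
    (0.05, 0.90), (0.10, 0.95),  # spammer-like: link-heavy originals
    (0.50, 0.10), (0.45, 0.05),  # low-signal accounts
]
print(kmeans(accounts, k=3))  # → [0, 0, 1, 1, 2, 2]
```

Accounts with similar behaviour end up sharing a cluster id, which is the "type" the thesis then matches against an existing typology.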
Master's Thesis, Computer Science, 2019. Includes presentation materials for the thesis defense.
|
3 |
Reglering av användningen av webbrobotar: En kvalitativ studie om synen på webbrobotar / Regulation of the use of web robots: A qualitative study of different views on web robots. Röör, Mika, January 2008.
This thesis analyses the interest in regulating web bots, drawing on a collection of discussions about the phenomenon. The section containing the interview results raises questions about ethical and legal actions, and their opposites, and discusses how regulation could work in practice. The discussions were held with people whose backgrounds relate, in one way or another, to web bots; the study was thus limited to a few, but more in-depth, interviews that enabled an analysis of the existence of web bots. The sources used include earlier research such as scientific theses, articles from websites, and books that discuss the technology. The results show that the interviewees are interested in regulation, and the prevailing view is that web bots are tools in an information society. One form of regulation highlighted in the results is informed consent: users would be informed about, and asked to consent to, interacting with web bots on the specific sites they visit.
|
4 |
Measuring, fingerprinting and catching click-spam in ad networks. Dave, Vacha Rajendra, 11 July 2014.
Advertising plays a vital role in supporting free websites and smartphone apps. Click-spam, i.e., fraudulent or invalid clicks on online ads where the user has no actual interest in the advertiser's site, results in advertising revenue being misappropriated by click-spammers. This revenue also funds malware authors, through adware and malware crafted specifically for click-spammers. While some ad networks take active measures to block click-spam today, the effectiveness of these measures is largely unknown, as the networks practice security-through-obscurity for fear of malicious parties reverse-engineering their systems. Moreover, advertisers and third parties have no way of independently estimating or defending against click-spam. This work addresses the click-spam problem in three ways. First, it proposes the first methodology for advertisers to independently measure click-spam rates on their ads. Using real-world data collected from ten ad networks, it validates the methodology and performs in-depth analysis of seven ongoing click-spam attacks not currently caught by major ad networks, highlighting the severity of click-spam. Next, it exposes the state of click-spam defenses by identifying twenty attack signatures that mimic click-spam attacks in the wild (from botnets, PTC sites, and scripts) and that ad networks could easily detect; it implements these attacks and shows that none of the ad networks protects against all of them. This also shows that it is possible to reverse-engineer the click-fraud rules employed by ad networks despite the security-through-obscurity practices prominent today. Finally, it shows that it is not just possible but also desirable to create click-spam detection algorithms that rely not on security-through-obscurity but on invariants that are hard for click-spammers to defeat; such algorithms are inherently more robust and can catch a wide variety of click-fraud attacks.
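To picture the invariant-based approach mentioned at the end, a filter might flag clicks that show no sign of genuine interest in the landing page. This is only an illustrative sketch, not the dissertation's actual algorithm; the field names and thresholds are assumptions.

```python
# Illustrative sketch of an invariant-based click filter (NOT the thesis's
# actual method): a genuinely interested user tends to dwell on and interact
# with the advertiser's page, which is expensive for a bot to fake at scale.
# Thresholds and field names are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Click:
    user_id: str
    dwell_seconds: float   # time spent on the advertiser's landing page
    mouse_events: int      # interaction on the page after the click

def is_suspicious(click: Click,
                  min_dwell: float = 2.0,
                  min_events: int = 1) -> bool:
    # Flag only clicks that violate BOTH (assumed) interest invariants.
    return (click.dwell_seconds < min_dwell
            and click.mouse_events < min_events)

clicks = [
    Click("u1", dwell_seconds=14.2, mouse_events=9),  # plausible visitor
    Click("u2", dwell_seconds=0.3,  mouse_events=0),  # likely click-spam
]
print([is_suspicious(c) for c in clicks])  # → [False, True]
```

The appeal of this style of defense, as the abstract argues, is that it does not depend on keeping the rule secret: defeating it requires the spammer to actually simulate interest, which raises the cost of each fraudulent click.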
|
5 |
Improving Behavior of Computer Game Bots Using Fictitious Play. Patel, Ushma Kesha, 01 May 2011.
In modern computer games, 'bots' (intelligent, realistic agents) play a prominent role in a game's success in the market. Typically, bots are modeled as finite-state machines and programmed via simple conditional statements hard-coded into the bot's logic. Since such bots become quite predictable to an experienced player, players may lose interest in the game. We propose using a game-theoretic learning rule called fictitious play to improve the behavior of computer game bots, making them less predictable and hence more enjoyable to play against.
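The learning rule can be sketched briefly: under fictitious play, each round the bot best-responds to the empirical frequency of the opponent's past actions. The actions and payoff matrix below are illustrative assumptions, not the thesis's actual game model.

```python
# Minimal sketch of fictitious play for a game bot (illustrative only).
# The bot tracks how often the human has played each action and picks the
# action maximizing its expected payoff against that empirical mixture.
from collections import Counter

# Hypothetical payoffs for the bot: PAYOFF[bot_action][opponent_action]
PAYOFF = {
    "block": {"punch": 1.0, "kick": -1.0},
    "dodge": {"punch": -1.0, "kick": 1.0},
}

def best_response(opponent_history):
    """Best-respond to the empirical frequency of the opponent's actions."""
    counts = Counter(opponent_history)
    total = sum(counts.values()) or 1
    def expected(action):
        return sum(PAYOFF[action][o] * n / total for o, n in counts.items())
    return max(PAYOFF, key=expected)

# The opponent has mostly punched so far, so the bot learns to block.
history = ["punch", "punch", "kick", "punch"]
print(best_response(history))  # → block
```

Because the bot's choice shifts with the opponent's observed mix of actions, its behaviour adapts over a match instead of following one fixed hard-coded rule, which is the source of the reduced predictability the abstract describes.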
|
6 |
Interação com Wikis por meio de Mensageiros Instantâneos / Wiki Interaction using Instant Messaging Bots. Santos, Rafael Pereira dos, 19 February 2009.
Internet usage has grown significantly in recent years and has fostered the development of several web-based communication tools. Especially notable are tools that let users publish their own content online, such as wikis. The success achieved by wikis is due in large part to the small amount of effort required to edit pages, an indication that this feature is much appreciated by users. To make the editing process of wikis even faster, this work proposes integrating instant messaging features with wikis. A new means of interaction for editing wikis, via an instant messenger, was designed and implemented. This proposed means of interaction augments conventional wiki interaction by enabling authors to edit wiki content without having to leave the communication environment they already use; the proposal is supported by the fact that instant messaging systems are widely adopted. Moreover, this research provides evidence, from the literature as well as from the experiments and case studies conducted, that helps identify the advantages and disadvantages of using instant messaging bots.
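One way to picture such an interaction is an IM bot that maps chat commands to wiki edits. The `edit <Page>: <text>` command syntax and the toy in-memory wiki below are assumptions made for illustration; the system actually built in the thesis may use a different protocol and a real wiki engine's API.

```python
# Sketch of an instant-messaging bot turning chat commands into wiki edits.
# The command syntax and the dict-as-wiki are hypothetical simplifications.
import re

EDIT_PATTERN = re.compile(r"^edit\s+(?P<page>\S+)\s*:\s*(?P<text>.+)$", re.S)

def parse_command(message: str):
    """Turn an IM message like 'edit HomePage: hello' into (page, text)."""
    m = EDIT_PATTERN.match(message.strip())
    return (m.group("page"), m.group("text")) if m else None

def handle_message(message: str, wiki: dict) -> str:
    """Apply an edit command to a toy in-memory wiki and reply to the user."""
    parsed = parse_command(message)
    if parsed is None:
        return "Sorry, I only understand: edit <PageName>: <new content>"
    page, text = parsed
    wiki[page] = text  # a real bot would call the wiki engine's API here
    return f"Updated {page}."

wiki = {}
print(handle_message("edit HomePage: Welcome to our project!", wiki))
print(wiki["HomePage"])  # → Welcome to our project!
```

The author never leaves the chat window: the bot receives the message over the IM protocol, performs the edit, and confirms in the same conversation, which is exactly the reduced context-switching the abstract argues for.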
|
8 |
Hiding Behind Cards: Identifying Bots and Humans in Online Poker. Altman, Benjamin, 07 May 2013.
As online gaming becomes more popular, it has become increasingly important to identify and remove players who rely on automated player systems (bots). Manual bot detection depends on the ability of game administrators to differentiate between bots and normal players. The objective of this thesis was to determine whether expert poker players can differentiate between bot and human players in Texas Hold 'Em poker. Participants were led to believe that a mix of bots and humans were playing in gameplay videos and were asked to rate each player's 'botness' and skill. The results showed that participants made similar observations about player behaviour, yet used these observations to reach differing conclusions about whether a given player was a bot or a human. These results cast doubt on the reliability of manual bot detection systems for online poker, while also showing that experts agree on what constitutes skilled play in such an environment.
|
9 |
Fantastic bots and where to find them. Svenaeus, Agaton, January 2020.
Research on bot detection in online social networks has received considerable attention in Swedish news media. Recently, however, criticism of the research field has been raised, highlighting the need to investigate whether information based on flawed research has been spread. To investigate the field, this study attempts to review the process of bot detection in online social networks and to evaluate the proposed criticism by: conducting a literature review of bots in online social networks, conducting a literature review of methods for bot detection, and detecting bots in three politically associated data sets of Swedish Twitter accounts using five different bot detection methods. The results show minor evidence that previous research may have been flawed. Still, based on the literature review of bot detection methods, the criticism was judged not extensive enough to indict the research field of bot detection in online social networks as a whole. Further, the problems highlighted in the criticism may have arisen from a lack of differentiation between bot types in research. Insufficient differentiation between bot types was also acknowledged as a factor that could make it difficult to generalize the results of bot detection studies measuring the effect of bots on political opinions. Conversely, the study finds that a good differentiation between bot types could potentially improve bot detection.
|