21

An Examination of Price Dispersion in an Online Retail Marketplace

DiRusso, David January 2010 (has links)
This dissertation is a compilation of three essays that analyze price dispersion in an online retail marketplace. Price dispersion is a measure of the variation in the prices that sellers charge for the same product. Online price dispersion has been thoroughly analyzed in the past decade, as it has numerous implications for firm pricing strategy as well as consumer welfare. Chapter 1 of this dissertation offers a literature review of price dispersion research and discusses key explanations for why this phenomenon exists on the web. A literature review of shop-bots is also presented, as they are similar to online marketplaces and form the basis of the three studies. Chapter 2 is the first study; it establishes the existence of price dispersion in online marketplaces and offers a comparison with price dispersion in shop-bots. It finds that online marketplaces may exhibit less variation than shop-bots, yet price dispersion remains high. Chapter 3 is the second study; it explains much of the dispersion found in the online marketplace through differences in seller service quality and seller reputation. A seller's reputation was found to be the key contributor to variation in the online marketplace; hence study 3, which is Chapter 4 of this dissertation, employs an experimental approach designed to capture the perspectives of buyers and sellers, to determine why price varies with reputation and whether consumers value the reputation score. It finds that buyers prefer sellers with strong long-run reputation scores over sellers with strong short-term reputation scores. Based on these reputation scores, sellers try to charge a higher price than consumers are willing to pay, and sellers believe that a strong score conveys higher levels of trust than buyers actually perceive. This mismatch between how sellers think consumers respond and how consumers actually respond could be another driver of price dispersion online. A discussion of the implications of these studies is offered in Chapter 5. / Business Administration
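As a minimal illustration of how price dispersion is typically quantified (not taken from the dissertation itself), the sketch below computes a few common dispersion measures over the prices several sellers charge for the same product; the specific measures and figures are assumptions for illustration.

```python
# Minimal sketch: common price-dispersion statistics for a single product,
# computed over the prices that different sellers list for it.
from statistics import mean, stdev

def price_dispersion(prices):
    """Return simple dispersion measures for a list of seller prices."""
    avg = mean(prices)
    sd = stdev(prices)  # sample standard deviation
    return {
        "range": max(prices) - min(prices),   # absolute spread
        "std_dev": sd,                        # spread in currency units
        "coef_of_variation": sd / avg,        # scale-free measure of dispersion
    }

# Example: five hypothetical marketplace listings for the same item
print(price_dispersion([19.99, 21.50, 18.75, 24.00, 20.25]))
```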
22

From Diderot to Software Bot: The Evolution of Encyclopedias in Historical Study

Chamberlain, Ryan 26 May 2023 (has links)
No description available.
23

Chat Bots & Voice Control : Applications and limitations of combining Microsoft’s Azure Bot Service and Cognitive Services’ Speech API

Lennartsson, Rasmus, Edqvist, Jonatan January 2019 (has links)
The field of Artificial Intelligence (AI) has seen much development in recent years, driven largely by new technologies and faster computers. With actors like Microsoft releasing software that reduces the complexity of AI development, the barrier to entry is now lower than ever. One area that has gained much traction as a result is chat bots, which can act as first-line support for companies. While the technology evolves quickly and updates to Microsoft's tools are released at an impressive rate, documentation and ease of use appear to struggle to keep up with this pace. This thesis explores several Microsoft tools used in AI development: the Cognitive Services with a focus on speech, the Microsoft Azure Bot Service, the QnA Maker, and the Bot Framework's web-based client. These tools are evaluated in the context of a chat bot, taking into account their functionality and the development experience. The result is a chat bot that uses a knowledge base for data storage and answer logic, and a web interface for chat functionality built on the Bot Framework web client.
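As a rough sketch of the knowledge-base answer logic described above, the snippet below sends a question to a QnA Maker runtime endpoint over REST and prints the top-ranked answer. The host, knowledge-base ID and endpoint key are placeholders, and the endpoint shape reflects the QnA Maker v4 runtime API as generally documented rather than anything specified in the thesis.

```python
# Hedged sketch: query a QnA Maker knowledge base over REST.
# HOST, KB_ID and ENDPOINT_KEY below are placeholders, not real values.
import requests

HOST = "https://your-qna-resource.azurewebsites.net"  # placeholder resource
KB_ID = "your-knowledge-base-id"                      # placeholder KB id
ENDPOINT_KEY = "your-endpoint-key"                    # placeholder key

def ask(question: str) -> str:
    """Return the top answer from the knowledge base for a user question."""
    response = requests.post(
        f"{HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer",
        headers={"Authorization": f"EndpointKey {ENDPOINT_KEY}"},
        json={"question": question},
        timeout=10,
    )
    response.raise_for_status()
    answers = response.json().get("answers", [])
    return answers[0]["answer"] if answers else "No answer found."

print(ask("What are your opening hours?"))
```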
24

Humor prostřednictvím botů na sociální síti Twitter / Humor on Twitter via bot accounts

Neuman, Michal January 2021 (has links)
This paper aims to map the use of humour in the production of posts by selected automated Twitter accounts and users' reactions to such posts. The theoretical basis for this study is primarily provided by theories of humour, through the lens of which the issue of humour via Twitterbots is viewed. To provide a broader context, the phenomena of 'new media', social media and automation are also introduced, within which Twitterbots are given closer attention. A combination of quantitative and qualitative research techniques was used in the analysis of the selected Twitterbots. Quantitative methods were used to describe the accounts under study and to select a narrower sample for detailed analysis. The qualitative analysis used a content analysis method with inductively and deductively created categories to answer research questions regarding the prevailing theory of humour, the forms of responses to the posts, and their themes. Within each account, partial results are described, and then combined and interpreted in a broader context in the final discussion.
25

Empirical evaluation of procedural level generators for 2D platform games

Hoeft, Robert, Nieznanska, Agnieszka January 2014 (has links)
Context. Procedural content generation (PCG) refers to the algorithmic creation of game content (e.g. levels, maps, characters). Since PCG generators are able to produce huge amounts of game content, it becomes impractical for humans to evaluate them manually. It is therefore desirable to automate the process of evaluation. Objectives. This work presents an automatic method for evaluating procedural level generators for 2D platform games. The method was used for a comparative evaluation of four procedural level generators developed within the research community. Methods. The evaluation method relies on simulating the human player's behaviour in a 2D platform game environment. It is made up of three components: (1) the 2D platform game Infinite Mario Bros with levels generated by the compared generators, (2) a human-like bot, and (3) quantitative models of player experience. The bot plays the levels and collects data which are input to the models, and the generators are evaluated based on the values output by the models. A method based on the simple moving average (SMA) is suggested for testing whether the number of performed simulations is sufficient. Results. The bot played all 6000 evaluated levels in less than ten minutes. The SMA-based method showed that the number of simulations was sufficiently large. Conclusions. It has been shown that the automatic method is much more efficient than traditional evaluation by humans while remaining consistent with human assessments.
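As an illustration of the SMA-based sufficiency check mentioned above, a minimal sketch might look like the following: keep simulating levels and stop once the simple moving average of the experience metric no longer changes appreciably. The window size, tolerance and playthrough stub are assumptions, not values from the thesis.

```python
# Hedged sketch of an SMA-based stopping rule: simulate bot playthroughs
# until the moving average of the player-experience metric stabilises.
# Window size, tolerance and the playthrough stub are illustrative only.
import random

def simulate_bot_playthrough(level_index: int) -> float:
    # Stand-in for running the human-like bot on one generated level and
    # feeding its trace to a player-experience model; returns a score in [0, 1].
    return random.random()

def sma(values, window):
    """Simple moving average over the last `window` values."""
    return sum(values[-window:]) / window

def enough_simulations(scores, window=100, tol=1e-3):
    """True once consecutive SMAs of the metric differ by less than tol."""
    if len(scores) < window + 1:
        return False
    return abs(sma(scores, window) - sma(scores[:-1], window)) < tol

scores = []
for level_index in range(6000):
    scores.append(simulate_bot_playthrough(level_index))
    if enough_simulations(scores):
        break
print(f"Stopped after {len(scores)} simulated levels")
```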
26

[en] PESSIMISTIC Q-LEARNING: AN ALGORITHM TO CREATE BOTS FOR TURN-BASED GAMES / [pt] Q-LEARNING PESSIMISTA: UM ALGORITMO PARA GERAÇÃO DE BOTS DE JOGOS EM TURNOS

ADRIANO BRITO PEREIRA 25 January 2017 (has links)
[pt] Este documento apresenta um novo algoritmo de aprendizado por reforço, o Q-Learning Pessimista. Nossa motivação é resolver o problema de gerar bots capazes de jogar jogos baseados em turnos e contribuir para obtenção de melhores resultados através dessa extensão do algoritmo Q-Learning. O Q-Learning Pessimista explora a flexibilidade dos cálculos gerados pelo Q-Learning tradicional sem a utilização de força bruta. Para medir a qualidade do bot gerado, consideramos qualidade como a soma do potencial de vitória e empate em um jogo. Nosso propósito fundamental é gerar bots de boa qualidade para diferentes jogos. Desta forma, podemos utilizar este algoritmo para famílias de jogos baseados em turno. Desenvolvemos um framework chamado Wisebots e realizamos experimentos com alguns cenários aplicados aos seguintes jogos tradicionais: TicTacToe, Connect-4 e CardPoints. Comparando a qualidade do Q-Learning Pessimista com a do Q-Learning tradicional, observamos ganhos de 0,8 por cento no TicTacToe, obtendo um algoritmo que nunca perde. Observamos também ganhos de 35 por cento no Connect-4 e de 27 por cento no CardPoints, elevando ambos da faixa de 50 por cento a 60 por cento para 90 por cento a 100 por cento de qualidade. Esses resultados ilustram o potencial de melhoria com o uso do Q-Learning Pessimista, sugerindo sua aplicação aos diversos tipos de jogos de turnos. / [en] This document presents a new reinforcement learning algorithm, Pessimistic Q-Learning. Our motivation is to solve the problem of generating bots able to play turn-based games, and to contribute better results through this extension of the Q-Learning algorithm. Pessimistic Q-Learning explores the flexibility of the calculations generated by traditional Q-Learning without resorting to brute force. To measure the quality of the generated bot, we consider quality as the sum of the potential to win and to tie in a game. Our fundamental purpose is to generate bots of good quality for different games, so the algorithm can be applied to families of turn-based games. We developed a framework called Wisebots and conducted experiments with several scenarios applied to the traditional games TicTacToe, Connect-4 and CardPoints. Comparing the quality of Pessimistic Q-Learning with that of traditional Q-Learning, we observed quality reach 100 per cent in TicTacToe, obtaining an algorithm that never loses. We also observed gains of 35 per cent in Connect-4 and 27 per cent in CardPoints, raising both from the 60 to 80 per cent range to 90 to 100 per cent quality. These results illustrate the potential for improvement with the use of Pessimistic Q-Learning, suggesting its application to various types of turn-based games.
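The abstract does not spell out the exact update rule, so the sketch below is only one plausible reading of a "pessimistic" variant for two-player turn-based games: the successor state is valued by the opponent's best reply (the worst case for the learner) rather than by the learner's own maximum. Treat it as an assumption-laden illustration, not the algorithm from the thesis.

```python
# Hedged sketch: a pessimistic (worst-case) flavour of the Q-learning update
# for a two-player turn-based game. This interpretation is an assumption;
# the thesis's exact rule is not reproduced here.
from collections import defaultdict

Q = defaultdict(float)      # Q[(state, action)] -> estimated value
ALPHA, GAMMA = 0.1, 0.95    # learning rate and discount factor (illustrative)

def pessimistic_update(state, action, reward, next_state, opponent_actions):
    """Back up the value the opponent can force, i.e. the learner's worst case."""
    if opponent_actions:
        # Assume the opponent replies with the move that minimises our value.
        next_value = min(Q[(next_state, a)] for a in opponent_actions)
    else:
        next_value = 0.0    # terminal position: no reply possible
    target = reward + GAMMA * next_value
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# Example call after observing one transition (opaque state/action labels):
pessimistic_update("s0", "a3", 0.0, "s1", ["a0", "a1", "a2"])
```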
27

“It Doesn’t Matter Now Who’s Right and Who’s Not:” A Model To Evaluate and Detect Bot Behavior on Twitter

Bowen, Braeden 14 June 2021 (has links)
No description available.
28

Twitter and social bots : an analysis of the 2021 Canadian election

Desrosiers-Brisebois, Gabrielle 12 1900 (has links)
Les médias sociaux sont désormais des outils de communication incontournables, notamment lors de campagnes électorales. La prévalence de l’utilisation de plateformes de communication en ligne suscite néanmoins des inquiétudes au sein des démocraties occidentales quant aux risques de manipulation des électeurs, notamment par le biais de robots sociaux. Les robots sociaux sont des comptes automatisés qui peuvent être utilisés pour produire ou amplifier le contenu en ligne tout en se faisant passer pour de réels utilisateurs. Certaines études, principalement axées sur le cas des États-Unis, ont analysé la propagation de contenus de désinformation par les robots sociaux en période électorale, alors que d’autres ont également examiné le rôle de l’affiliation partisane sur les comportements et les tactiques favorisées par les robots sociaux. Toutefois, la question à savoir si l'orientation partisane des robots sociaux a un impact sur la quantité de désinformation politique qu’ils propagent demeure sans réponse. Par conséquent, l’objectif principal de ce travail de recherche est de déterminer si des différences partisanes peuvent être observées dans (i) le nombre de robots sociaux actifs pendant la campagne électorale canadienne de 2021, (ii) leurs interactions avec les comptes réels, et (iii) la quantité de contenu de désinformation qu’ils ont propagé. Afin d’atteindre cet objectif de recherche, ce mémoire de maîtrise s’appuie sur un ensemble de données Twitter de plus de 11,3 millions de tweets en anglais provenant d’environ 1,1 million d'utilisateurs distincts, ainsi que sur divers modèles pour distinguer les comptes de robots sociaux des comptes humains, déterminer l’orientation partisane des utilisateurs et détecter le contenu de désinformation politique véhiculé. Les résultats de ces méthodes distinctes indiquent des différences limitées dans le comportement des robots sociaux lors des dernières élections fédérales. Il a tout de même été possible d'observer que les robots sociaux de tendance conservatrice étaient plus nombreux que leurs homologues de tendance libérale, mais que les robots sociaux d’orientation libérale étaient ceux qui ont interagi le plus avec les comptes authentiques par le biais de retweets et de réponses directes, et qui ont propagé le plus de contenu de désinformation. / Social media have now become essential communication tools, including within the context of electoral campaigns. However, the prevalence of online communication platforms has raised concerns in Western democracies about the risks of voter manipulation, particularly through social bot accounts. Social bots are automated computer algorithms which can be used to produce or amplify online content while posing as authentic users. Some studies, mostly focused on the case of the United States, analyzed the propagation of disinformation content by social bots during electoral periods, while others have also examined the role of partisanship on social bots’ behaviors and activities. However, the question of whether social bots’ partisan-leaning impacts the amount of political disinformation content they generate online remains unanswered. Therefore, the main goal of this study is to determine whether partisan differences could be observed in (i) the number of active social bots during the 2021 Canadian election campaign, (ii) their interactions with humans, and (iii) the amount of disinformation content they propagated. 
In order to reach this research objective, this master's thesis relies on an original Twitter dataset of more than 11.3 million English tweets from roughly 1.1 million distinct users, as well as diverse models to distinguish between social bot and human accounts, determine the partisan leaning of users, and detect political disinformation content. Based on these distinct methods, the results indicate limited differences in the behavior of social bots in the 2021 federal election. It was nevertheless possible to observe that conservative-leaning social bots were more numerous than their liberal-leaning counterparts, but that liberal-leaning bots were the ones that interacted most with authentic accounts through retweets and replies, and that shared the most disinformation content.
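The thesis does not name the specific models it used, so the sketch below only illustrates the general shape of the bot-versus-human step with a toy logistic-regression classifier over a few invented account-level features (account age, posting rate, follower/following ratio); the features, labels and model choice are assumptions for illustration.

```python
# Hedged sketch: a toy bot-vs-human classifier over account-level features.
# Features, labels and the model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: account age (days), tweets per day, follower/following ratio
X = np.array([
    [2500,   3.1, 1.80],   # long-lived, moderate activity -> human-like
    [1800,   1.2, 0.90],
    [  30, 180.0, 0.05],   # new account, extreme posting  -> bot-like
    [  12, 250.0, 0.02],
    [3100,   5.5, 2.40],
    [  45, 140.0, 0.10],
])
y = np.array([0, 0, 1, 1, 0, 1])   # 0 = human, 1 = bot (toy labels)

model = LogisticRegression().fit(X, y)

# Probability that a new, unseen account is automated
new_account = np.array([[20, 200.0, 0.03]])
print(model.predict_proba(new_account)[0, 1])
```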
29

[en] ALUMNI TOOL: INFORMATION RECOVERY OF PERSONAL DATA ON THE WEB IN AUTHENTICATED SOCIAL NETWORKS / [pt] ALUMNI TOOL: RECUPERAÇÃO DE DADOS PESSOAIS NA WEB EM REDES SOCIAIS AUTENTICADAS

LUIS GUSTAVO ALMEIDA 02 August 2018 (has links)
[pt] O uso de robôs de busca para coletar informações para um determinado contexto sempre foi um problema desafiante e tem crescido substancialmente nos últimos anos. Por exemplo, robôs de busca podem ser utilizados para capturar dados de redes sociais profissionais. Em particular, tais redes permitem estudar as trajetórias profissionais dos egressos de uma universidade, e responder diversas perguntas, como por exemplo: Quanto tempo um ex-aluno da PUC-Rio leva para chegar a um cargo de relevância? No entanto, um problema de natureza comum a este cenário é a impossibilidade de coletar informações devido a sistemas de autenticação, impedindo um robô de busca de acessar determinadas páginas e conteúdos. Esta dissertação aborda uma solução para capturar dados, que contorna o problema de autenticação e automatiza o processo de coleta de dados. A solução proposta coleta dados de perfis de usuários de uma rede social profissional para armazenamento em banco de dados e posterior análise. A dissertação contempla ainda a possibilidade de adicionar diversas outras fontes de dados dando ênfase a uma estrutura de armazém de dados. / [en] The use of search bots to collect information for a given context has grown substantially in recent years. For example, search bots may be used to capture data from professional social networks. In particular, such social networks facilitate studying the professional trajectory of the alumni of a given university, and answer several questions such as: How long does a former student of PUC-Rio take to arrive at a management position? However, a common problem in this scenario is the inability to collect information due to authentication systems, preventing a search robot from accessing certain pages and content. This dissertation addresses a solution to capture data, which circumvents the authentication problem and automates the data collection process. The proposed solution collects data from user profiles for later database storage and analysis. The dissertation also contemplates the possibility of adding several other sources of data giving emphasis to a data warehouse structure.
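The thesis does not disclose exactly how its crawler handled authentication; one common approach, sketched below with placeholder URLs and form fields, is to log in once with a session so that subsequent profile requests carry the authentication cookie. Real professional networks often require a headless browser or an official API instead, so this is an illustrative assumption only.

```python
# Hedged sketch: crawl pages behind a login by reusing an authenticated
# session. URLs, form field names and credentials are placeholders; real
# sites may need a headless browser, tokens or an official API instead.
import requests

LOGIN_URL = "https://example-network.test/login"        # placeholder
PROFILE_URL = "https://example-network.test/in/{user}"  # placeholder

def fetch_profile(user_id: str, username: str, password: str) -> str:
    """Authenticate once, then download a profile page as HTML."""
    with requests.Session() as session:
        # The session stores the cookies returned by the login endpoint.
        session.post(LOGIN_URL, data={"user": username, "pass": password})
        response = session.get(PROFILE_URL.format(user=user_id))
        response.raise_for_status()
        return response.text  # parse and persist to the database afterwards

html = fetch_profile("alumni-123", "demo-user", "demo-pass")
print(len(html), "bytes downloaded")
```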
30

Mapping posthuman discourse and the evolution of living information

Swift, Adam Glen January 2006 (has links)
The discourse that surrounds and constitutes the post-human emerged as a response to earlier claims of an essential or universal human or human nature. These discussions claim that the human is a discursive construct that emerges from various configurations of nature, embodiment, technology, and culture, configurations that have also been variously shaped by the forces of social history. And in the absence of an essential human figure, post-human discourses suggest that there are no restrictions or limitations on how the human can be reconfigured. This axiom has been extended in light of a plethora of technological reconfigurations and augmentations now potentially available to the human, and claims emerge from within this literature that these new technologies constitute a range of possibilities for future human biological evolution. This thesis questions the assumption contained within these discourses that technological incursions or reconfigurations of the biological human necessarily constitute human biological or human social evolution by discussing the role evolution theory plays in our understanding of the human, the social, and technology. In this thesis I show that, in a reciprocal process, evolution theory draws metaphors from social institutions and ideologies, while social institutions and ideologies simultaneously draw on metaphors from evolution theory. Through this discussion, I propose a form of evolution literacy: a tool that, I argue, is warranted for developing a sophisticated response to changes in both human shape and form. I argue that, as a whole, our understanding of evolution constitutes a metanarrative, a metaphor through which we understand the place of the human within the world; it follows that historical shifts in social paradigms will result in new definitions of evolution. I show that contemporary evolution theory reflects parts of the world as codified informatic systems of associated computational network logic through which the behaviour of participants is predefined according to an evolved or programmed structure. Working from within the discourse of contemporary evolution theory, I develop a space through which a version of the post-human figure emerges. I promote this version of the post-human as an Artificial Intelligence computational programme or autonomous agent that, rather than seeking to replace, reduce or deny the human subject, is configured as an exosomatic supplement to and an extension of the biological human.
