1 |
Insufficient Effort Responding on Mturk Surveys: Evidence-Based Quality Control for Organizational Research. Cyr, Lee. 17 July 2018.
Crowdsourcing organizational research grows more popular each year. However, this sampling source receives much scrutiny focused on data quality and related research methods. Specific to the present research, survey attentiveness poses a unique dilemma. Research on updated conceptualizations of attentiveness--insufficient effort responding (IER)--shows that it carries substantial concerns for data quality beyond random noise, which further warrants deleting inattentive participants. However, personal characteristics predict IER, so deleting data may cause sampling bias. Therefore, preventing IER becomes paramount, but research seems to ignore whether IER prevention itself may create systematic error. This study examines the detection and prevention of IER in Amazon's Mechanical Turk (Mturk) by evaluating three IER detection methods pertinent to concerns of attentiveness on the platform and using two promising IER prevention approaches--Mturk screening features and IER preventive warning messages. I further consider how these issues relate to organizational research and answer the call for a more nuanced understanding of the Mturk population by focusing on psychological phenomena often studied and measured in the organizational literature--the congruency effect and approach-avoidance motivational theories, Big Five personality, positive and negative affectivity, and core self-evaluations. I collected survey data from screened and non-screened samples and manipulated warning messages using four conditions--no warning, gain-framed, loss-framed, and combined-framed messages. I used logistic regression to compare the prevalence of IER across conditions and to assess the effectiveness of warning messages given positively or negatively valenced motivational tendencies. I also used 4x2 factorial ANCOVAs to test for differences in personal characteristics across conditions. The sample consisted of 1071 Mturk workers (turkers). Results revealed differences in IER prevalence among detection methods and between prevention conditions, counterintuitive results for congruency effects and motivational theories, and differences across conditions for agreeableness, conscientiousness, and positive and negative affectivity. Implications, future research, and recommendations are discussed.
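A minimal sketch of the comparison described above, assuming a hypothetical per-participant file with the assigned warning condition, a screening indicator, and a binary IER flag (all column names are illustrative, not the study's):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, with the warning condition
# shown ('no_warning', 'gain', 'loss', or 'combined'), whether the sample
# was screened, and whether any detection method flagged the response as IER.
df = pd.read_csv("mturk_responses.csv")  # columns: condition, screened, ier_flagged

# Logistic regression of IER prevalence on condition, with the
# no-warning group as the reference category.
model = smf.logit(
    "ier_flagged ~ C(condition, Treatment(reference='no_warning')) + screened",
    data=df,
).fit()
print(model.summary())  # exponentiate coefficients to read them as odds ratios
```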
|
2 |
Software architecture for language engineering. Cunningham, Hamish. January 2000.
No description available.
|
3 |
Accurate and budget-efficient text, image, and video analysis systems powered by the crowd. Sameki, Mehrnoosh. 22 February 2018.
Crowdsourcing systems empower individuals and companies to outsource labor-intensive tasks that cannot currently be solved by automated methods and are expensive to tackle by domain experts. Crowdsourcing platforms are traditionally used to provide training labels for supervised machine learning algorithms. Crowdsourced tasks are distributed among internet workers who typically have a range of skills and knowledge, differing previous exposure to the task at hand, and biases that may influence their work. This inhomogeneity of the workforce makes the design of accurate and efficient crowdsourcing systems challenging. This dissertation presents solutions to improve existing crowdsourcing systems in terms of accuracy and efficiency. It explores crowdsourcing tasks in two application areas: political discourse and annotation of biomedical and everyday images. The first part of the dissertation investigates how workers' behavioral factors and their unfamiliarity with data can be leveraged by crowdsourcing systems to control quality. Through studies that involve familiar and unfamiliar image content, the thesis demonstrates the benefit of explicitly accounting for a worker's familiarity with the data when designing annotation systems powered by the crowd. The thesis next presents Crowd-O-Meter, a system that automatically predicts the vulnerability of crowd workers to believe "fake news" in text and video. The second part of the dissertation explores the reversed relationship between machine learning and crowdsourcing by incorporating machine learning techniques for quality control of crowdsourced end products. In particular, it investigates whether machine learning can improve the quality of crowdsourced results while respecting budget constraints. The thesis proposes an image analysis system called ICORD that utilizes behavioral cues of the crowd worker, augmented by automated evaluation of image features, to dynamically infer the quality of a worker-drawn outline of a cell in a microscope image. ICORD determines the need to seek additional annotations from other workers in a budget-efficient manner. Next, the thesis proposes a budget-efficient machine learning system that uses fewer workers to analyze easy-to-label data and more workers for data that require extra scrutiny. The system learns a mapping from data features to the number of allocated crowd workers for two case studies: sentiment analysis of Twitter messages and segmentation of biomedical images. Finally, the thesis uncovers the potential for the design of hybrid crowd-algorithm methods by describing an interactive system for cell tracking in time-lapse microscopy videos, based on a prediction model that determines when automated cell tracking algorithms fail and human interaction is needed to ensure accurate tracking.
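As a hedged illustration of the worker-allocation idea (the model, features, and budget cap below are assumptions for the sketch, not the dissertation's actual system), a regressor can learn how many workers an item needs, and a helper can clip that prediction to a per-item budget:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for real data: per-item difficulty features and the
# number of workers that were historically needed to reach target quality.
rng = np.random.default_rng(0)
X_train = rng.random((500, 8))
workers_needed = rng.integers(1, 10, size=500)

allocator = RandomForestRegressor(n_estimators=100, random_state=0)
allocator.fit(X_train, workers_needed)

def allocate(features: np.ndarray, budget_cap: int = 9) -> int:
    """Predict the worker count for one item, rounded up and clipped to budget."""
    n = int(np.ceil(allocator.predict(features.reshape(1, -1))[0]))
    return min(max(n, 1), budget_cap)

print(allocate(rng.random(8)))  # e.g., 5 workers for a medium-difficulty item
```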
|
4 |
Human computation appliqué au trading algorithmique / Human computation applied to algorithmic trading. Vincent, Arnaud. 14 November 2013.
Algorithmic trading designed for speculative purposes took off in the early 2000s, first optimizing the execution of market orders arising from human arbitrage or investment decisions, then executing pre-programmed, systematic investment strategies in which humans are confined to the roles of designer and supervisor. The field keeps growing despite the warnings of proponents of the Efficient Market Hypothesis (EMH), who hold that if the market is efficient, speculation is futile.

Human Computation (HC) is an unusual concept: it treats the human brain as the unit component of a larger machine, one that could address problems beyond the reach of today's computers. The concept sits at the crossroads of collective intelligence and crowdsourcing techniques, which enlist humans (volunteers or not, knowingly or not, paid or not) to solve a problem or accomplish a complex task. The Fold-it project in biochemistry provided compelling proof that human communities can form effective collective intelligence systems, in the form of an online serious game.

Algorithmic trading raises difficulties of the same order as those that led Fold-it's creators to call on the "human CPU" to make significant progress. The question is then where and how to use HC, and how to measure its effectiveness, in a discipline that lends itself poorly to 3D modeling or a game-like approach.

The first experiment in this thesis tests whether information qualified and relayed through social networks can feed an algorithmic trading system built on HC principles. It analyzes the Twitter feed in real time using two methods: an asemantic method that targets unexpected events surfacing on the network (such as the 2010 eruption of the Icelandic volcano) and a more conventional semantic method that targets known, anxiety-inducing themes for financial markets. A significant improvement in trading performance (P&L) is observed only for strategies using data from the asemantic method, by treating the unexpected events as alerts.

The second experiment entrusts the optimization of trading strategy parameters to a community of players, in an approach inspired by Fold-it. In the online game, named Krabott, each candidate solution is a friendly virtual bot carrying a specific set of strategy parameters as a strand of DNA; human players take over the selection and reproduction steps that produce each new Krabott. The Krabotts "bred" by players outperformed those produced by machine optimization, both in exploration capacity and in average performance, however the results are compared: a crowd of several hundred players systematically outperformed the machine on Krabott V2 over 2012, results confirmed with other players on Krabott V3 in 2012-2013.

Building on these findings, it becomes possible to construct a hybrid human-machine trading system on an HC architecture in which each player acts as a CPU within a global trading system. The thesis concludes on the competitive advantage an HC architecture would offer, both for acquiring the data that feeds trading algorithms and for optimizing the parameters of existing strategies. Going further, it is reasonable to bet on the crowd's eventual capacity to autonomously design and maintain algorithmic trading strategies whose complexity would ultimately escape any individual human's comprehension.
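A hedged sketch of the Krabott-style loop, with human players standing in for the selection step of a genetic algorithm (genome size, operators, and population limits are our own illustrative choices, not the thesis's actual parameters):

```python
import random

def random_genome():
    return [random.uniform(0, 1) for _ in range(6)]  # 6 trading-strategy parameters

def crossover(a, b):
    cut = random.randrange(1, len(a))  # one-point crossover of parent DNA
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [x + random.gauss(0, 0.05) if random.random() < rate else x
            for x in genome]

def human_select(population, k=2):
    # Placeholder: in Krabott, players browse the bots and choose the parents;
    # here that human choice is simulated with a random draw.
    return random.sample(population, k)

population = [random_genome() for _ in range(20)]
for generation in range(10):
    parent_a, parent_b = human_select(population)
    child = mutate(crossover(parent_a, parent_b))
    population.append(child)       # a backtest would score and rank the child here
    population = population[-20:]  # keep the population bounded
```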
|
5 |
An evaluation paradigm for spoken dialog systems based on crowdsourcing and collaborative filtering. Yang, Zhaojun. January 2011.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (p. 92-99).
Abstracts in English and Chinese.

Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- SDS Architecture --- p.1
Chapter 1.2 --- Dialog Model --- p.3
Chapter 1.3 --- SDS Evaluation --- p.4
Chapter 1.4 --- Thesis Outline --- p.7
Chapter 2 --- Previous Work --- p.9
Chapter 2.1 --- Approaches to Dialog Modeling --- p.9
Chapter 2.1.1 --- Handcrafted Dialog Modeling --- p.9
Chapter 2.1.2 --- Statistical Dialog Modeling --- p.12
Chapter 2.2 --- Evaluation Metrics --- p.16
Chapter 2.2.1 --- Subjective User Judgments --- p.17
Chapter 2.2.2 --- Interaction Metrics --- p.18
Chapter 2.3 --- The PARADISE Framework --- p.19
Chapter 2.4 --- Chapter Summary --- p.22
Chapter 3 --- Implementation of a Dialog System based on POMDP --- p.23
Chapter 3.1 --- Partially Observable Markov Decision Processes (POMDPs) --- p.24
Chapter 3.1.1 --- Formal Definition --- p.24
Chapter 3.1.2 --- Value Iteration --- p.26
Chapter 3.1.3 --- Point-based Value Iteration --- p.27
Chapter 3.1.4 --- A Toy Example of POMDP: The NaiveBusInfo System --- p.27
Chapter 3.2 --- The SDS-POMDP Model --- p.31
Chapter 3.3 --- Composite Summary Point-based Value Iteration (CSPBVI) --- p.33
Chapter 3.4 --- Application of SDS-POMDP Model: The BusInfo System --- p.35
Chapter 3.4.1 --- System Description --- p.35
Chapter 3.4.2 --- Demonstration Description --- p.39
Chapter 3.5 --- Chapter Summary --- p.42
Chapter 4 --- Collecting User Judgments on Spoken Dialogs with Crowdsourcing --- p.46
Chapter 4.1 --- Dialog Corpus and Automatic Dialog Classification --- p.47
Chapter 4.2 --- User Judgments Collection with Crowdsourcing --- p.50
Chapter 4.2.1 --- HITs on Dialog Evaluation --- p.51
Chapter 4.2.2 --- HITs on Inter-rater Agreement --- p.53
Chapter 4.2.3 --- Approval of Ratings --- p.54
Chapter 4.3 --- Collected Results and Analysis --- p.55
Chapter 4.3.1 --- Approval Rates and Comments from Mturk Workers --- p.55
Chapter 4.3.2 --- Consistency between Automatic Dialog Classification and Manual Ratings --- p.57
Chapter 4.3.3 --- Inter-rater Agreement Among Workers --- p.60
Chapter 4.4 --- Comparing Experts to Non-experts --- p.64
Chapter 4.4.1 --- Inter-rater Agreement on the Let's Go! System --- p.65
Chapter 4.4.2 --- Consistency Between Expert and Non-expert Annotations on SDC Systems --- p.66
Chapter 4.5 --- Chapter Summary --- p.68
Chapter 5 --- Collaborative Filtering for Performance Prediction --- p.70
Chapter 5.1 --- Item-Based Collaborative Filtering --- p.71
Chapter 5.2 --- CF Model for User Satisfaction Prediction --- p.72
Chapter 5.2.1 --- ICFM for User Satisfaction Prediction --- p.72
Chapter 5.2.2 --- Extended ICFM for User Satisfaction Prediction --- p.73
Chapter 5.3 --- Extraction of Interaction Features --- p.74
Chapter 5.4 --- Experimental Results and Analysis --- p.76
Chapter 5.4.1 --- Prediction of User Satisfaction --- p.76
Chapter 5.4.2 --- Analysis of Prediction Results --- p.79
Chapter 5.5 --- Verifying the Generalizability of the CF Model --- p.81
Chapter 5.6 --- Evaluation of The BusInfo System --- p.86
Chapter 5.7 --- Chapter Summary --- p.87
Chapter 6 --- Conclusions and Future Work --- p.89
Chapter 6.1 --- Thesis Summary --- p.89
Chapter 6.2 --- Future Work --- p.90
Bibliography --- p.92
|
6 |
Attribute Learning using Joint Human and Machine Computation. Law, Edith L.M. 01 August 2012.
This thesis is centered on the problem of attribute learning -- using the joint effort of humans and machines to describe objects, e.g., determining that a piece of music is "soothing," that the bird in an image "has a red beak," or that Ernest Hemingway is a "Nobel Prize-winning author." In this thesis, we present new methods for solving the attribute-learning problem using the joint effort of the crowd and machines via human computation games.
When creating a human computation system, typically two design objectives need to be simultaneously satisfied. The first objective is human-centric -- the task prescribed by the system must be intuitive, appealing and easy to accomplish for human workers. The second objective is task-centric -- the system must actually perform the task at hand. These two goals are often at odds with each other, especially in the casual game setting. This thesis shows that human computation games can accomplish both the human-centric and task-centric objectives, if we first design for humans, then devise machine learning algorithms to work around the limitations of human workers and complement their abilities in order to jointly accomplish the task of learning attributes. We demonstrate the effectiveness of our approach in three concrete problem settings: music tagging, bird image classification and noun phrase categorization.
Contributions of this thesis include a framework for attribute learning, two new game mechanisms, experiments showing the effectiveness of the hybrid human and machine computation approach for learning attributes in vocabulary-rich settings and under the constraints of knowledge limitations, as well as deployed games played by tens of thousands of people, generating large datasets for machine learning.
|
7 |
Text data analysis for a smart city project in a developing nation. Currin, Aubrey Jason. January 2015.
Increased urbanisation against the backdrop of limited resources is complicating city planning and the management of functions including public safety. The smart city concept can help, but most previous smart city systems have focused on automated sensors and the analysis of quantitative data. In developing nations, using the ubiquitous mobile phone to crowdsource qualitative public safety reports from the public is a more viable option, given resource and infrastructure constraints. However, there is no established best method for analysing qualitative text reports for a smart city in a developing nation. The aim of this study, therefore, is the development of a model for enabling the analysis of unstructured natural language text for use in a public safety smart city project. Following the guidelines of the design science paradigm, the resulting model was developed through an inductive review of related literature, then assessed and refined through observations of a crowdsourcing prototype and conversational analysis with industry experts and academics. The content analysis technique was applied to the public safety reports obtained from the prototype via computer-assisted qualitative data analysis software (CAQDAS). This resulted in the development of a hierarchical ontology, which forms an additional output of this research project. Thus, this study has shown how municipalities or local government can use CAQDAS and content analysis techniques to prepare large quantities of text data for use in a smart city.
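As a toy illustration of the kind of coding that CAQDAS tools support (the categories and keywords below are invented for the example and are not the study's ontology):

```python
# Invented toy coding scheme: map a free-text public safety report
# to categories by simple keyword matching.
ONTOLOGY = {
    "crime": ["robbery", "theft", "assault", "burglary"],
    "traffic": ["accident", "collision", "pothole", "speeding"],
    "infrastructure": ["streetlight", "water", "power", "sewage"],
}

def code_report(text: str) -> list[str]:
    text = text.lower()
    codes = [cat for cat, keywords in ONTOLOGY.items()
             if any(k in text for k in keywords)]
    return codes or ["uncoded"]

print(code_report("Streetlight out near the taxi rank, speeding cars at night"))
# -> ['traffic', 'infrastructure']
```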
|
8 |
#Crowdwork4dev: Engineering Increases in Crowd Labor Demand to Increase the Effectiveness of Crowd Work as a Poverty-Reduction Tool. Schriner, Andrew W. 20 October 2015.
No description available.
|
9 |
My4Sight: A Human Computation Platform for Improving Flu Predictions. Akupatni, Vivek Bharath. 17 September 2015.
While many human computation (human-in-the-loop) systems exist in the field of Artificial Intelligence (AI) to solve problems that computers cannot solve alone, comparatively few platforms exist for collecting human knowledge and for evaluating techniques that harness human insights to improve forecasting models for infectious diseases such as Influenza and Ebola.
In this thesis, we present the design and implementation of My4Sight, a human computation system developed to harness human insights and intelligence to improve forecasting models. This web-accessible system simplifies the collection of human insights through the careful design of two tasks: (i) asking users to rank system-generated forecasts in order of likelihood; and (ii) allowing users to improve upon an existing system-generated prediction. The structured output collected from querying human computers can then be used to build better forecasting models. My4Sight is designed to be a complete end-to-end analytical platform, and provides access to data collection features and statistical tools that are applied to the collected data. The results are communicated to the user, wherever applicable, in the form of visualizations for easier data comprehension. With My4Sight, this thesis makes a valuable contribution to the field of epidemiology by providing the data and infrastructure platform needed to improve forecasts in real time by harnessing the wisdom of the crowd. / Master of Science
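One plausible way to turn the collected rankings into ensemble weights is a Borda count; the sketch below assumes that aggregation rule purely for illustration (the thesis does not necessarily use it):

```python
from collections import defaultdict

def borda_weights(rankings, n_candidates):
    """Convert best-to-worst rankings of candidate forecasts into weights."""
    scores = defaultdict(float)
    for ranking in rankings:              # each ranking lists candidate ids
        for place, candidate in enumerate(ranking):
            scores[candidate] += n_candidates - 1 - place
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}

# Three workers rank three system-generated forecasts (ids 0-2).
rankings = [[2, 0, 1], [2, 1, 0], [0, 2, 1]]
print(borda_weights(rankings, 3))  # forecast 2 receives the largest weight
```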
|
10 |
Mosquito popper: a multiplayer online game for 3D human body scan data segmentation. Nolte, Zachary. 01 May 2017.
A game with a purpose (GWAP) aims to channel the hours that everyday people spend playing video games into producing valuable data. The main objective of this research is to demonstrate the feasibility of using the GWAP concept for the segmentation and labeling of massive amounts of 3D human body scan data. The rationale for using a GWAP for mesh segmentation and labeling is that current methods rely on expensive, time-consuming computational algorithms, and these algorithms are not as detailed and specific as natural human ability in segmentation tasks. The method presented in this paper overcomes these shortcomings by applying the GWAP concept to human model segmentation. The process of segmenting and labeling the mesh becomes a form of entertainment rather than a tedious chore, with segmentation data produced as a by-product. In addition, the natural capabilities of the human visual system are harnessed to identify and label the various parts of the 3D human body shape, which in turn yields more detailed and specific segmentations. The effectiveness of the proposed gameplay mechanism is demonstrated by the experiments conducted in this study.
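A minimal sketch of how several players' answers could be merged into one segmentation, assuming simple per-vertex majority voting (a toy aggregation of our own, not the paper's pipeline):

```python
from collections import Counter

def consensus_labels(player_labelings):
    """Majority-vote body-part label for each vertex across players."""
    n_vertices = len(player_labelings[0])
    return [
        Counter(labels[v] for labels in player_labelings).most_common(1)[0][0]
        for v in range(n_vertices)
    ]

players = [
    ["arm", "arm", "torso", "leg"],
    ["arm", "torso", "torso", "leg"],
    ["arm", "arm", "torso", "torso"],
]
print(consensus_labels(players))  # -> ['arm', 'arm', 'torso', 'leg']
```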
|