11. Ontology Population Using Human Computation

Evirgen, Gencay Kemal 01 January 2010 (has links) (PDF)
In recent years, many researchers have developed new techniques for ontology population. However, these methods cannot overcome the semantic gap between humans and the extracted ontologies. Words-Around is a web application that provides a user-friendly environment channeling the vast Internet population toward the ontology population problem, which no known efficient computer algorithm can yet solve. The application's fundamental data structure is a list of words that people naturally link to each other. It displays these lists as a word cloud that is fun to drag around and play with. Users are prompted to enter whatever word comes to mind upon seeing a word suggested from the application's database, or they can search for a particular word to see what associations other users have made to it. Once logged in, users can view their activity history, see which words they were the first to associate, and mark particular words as misspellings or junk, helping keep the list's structure relevant and accurate. The results of this implementation indicate that an interesting application that lets users simply play with its visual elements can also be useful for gathering information.
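The abstract's core data structure — a list of words that people link to each other, with flagging of misspellings and junk — can be sketched roughly as follows. The class and method names are invented for illustration and are not from Words-Around itself.

```python
from collections import defaultdict

class AssociationStore:
    """A minimal sketch of a word-association store of the kind Words-Around
    describes: users link words to each other, and words can be flagged as
    misspellings or junk (all names here are hypothetical)."""

    def __init__(self):
        self.links = defaultdict(lambda: defaultdict(int))  # word -> {associated word: count}
        self.flagged = set()  # words marked as misspellings/junk

    def associate(self, seen, response):
        # Record that a user responded to `seen` with `response`.
        self.links[seen][response] += 1
        self.links[response][seen] += 1  # treat associations as symmetric

    def flag(self, word):
        self.flagged.add(word)

    def neighbors(self, word, top=5):
        # Strongest unflagged associations, e.g. for rendering a word cloud.
        cands = {w: c for w, c in self.links[word].items() if w not in self.flagged}
        return sorted(cands, key=cands.get, reverse=True)[:top]

store = AssociationStore()
store.associate("dog", "cat")
store.associate("dog", "cat")
store.associate("dog", "bone")
store.associate("dog", "dgo")   # a typo another user later flags
store.flag("dgo")
print(store.neighbors("dog"))   # ['cat', 'bone']
```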
12. Facilitating the authoring of multimedia social problem solving skills instructional modules

Boujarwah, Fatima Abdulazeez 02 April 2012 (has links)
Difficulties in social skills are generally considered defining characteristics of High-Functioning Autism (HFA). These difficulties interfere with the educational experiences and quality of life of individuals with HFA, and interventions must be highly individualized to be effective. I explore ways technologies may assist individuals with the acquisition of social problem solving skills. This thesis presents the design, development, and evaluation of two systems: Refl-ex, a collection of multimedia instructional modules designed to enable adolescents with HFA to practice social problem solving skills, and REACT, a system that facilitates the authoring of a wider variety of instructional modules. The authoring tool is designed to help parents, teachers, and other stakeholders create Refl-ex-like instructional modules. The approach uses models of social knowledge created using crowdsourcing techniques to support authors throughout the authoring process. A series of studies was conducted to inform the design of high-fidelity prototypes of each system and to evaluate those prototypes. The contributions of this thesis are: 1) the creation of obstacle-based branching, an approach to developing interactive social skills instructional modules that experts evaluated as an improvement over current practices; 2) the development of an approach to building models of social knowledge that can be dynamically created and expanded using crowdsourcing; and 3) the development of a system that gives parents and other caregivers the ability to easily create customized social skills instructional modules for their children and students.
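The "obstacle-based branching" contribution suggests a scenario graph in which a learner's choice can raise an obstacle that branches the story. The following is a minimal sketch of that idea under invented names and scenario text; it is not Refl-ex's or REACT's actual design.

```python
# A toy scenario graph: each step has a prompt and a mapping from the
# learner's choice to the next step, which may be an obstacle that
# branches the story before it resolves. All content here is invented.
class Step:
    def __init__(self, prompt, choices=None):
        self.prompt = prompt
        self.choices = choices or {}  # choice text -> next Step

def walk(step, pick):
    """Follow the learner's choices through the scenario; `pick` maps a
    prompt to the chosen option. Returns the sequence of prompts seen."""
    path = [step.prompt]
    while step.choices:
        step = step.choices[pick(step.prompt)]
        path.append(step.prompt)
    return path

resolved = Step("You apologize and the game continues.")
obstacle = Step("Your friend is still upset. What next?",
                {"apologize": resolved})
start = Step("Your friend loses the game and gets angry. What do you do?",
             {"laugh": obstacle, "apologize": resolved})

# Choosing "laugh" introduces an obstacle step before the resolution.
choices = {"Your friend loses the game and gets angry. What do you do?": "laugh",
           "Your friend is still upset. What next?": "apologize"}
print(walk(start, choices.get))
```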
13. Applying human computation methods to information science

Harris, Christopher Glenn 01 December 2013 (has links)
Human computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this work has focused on a few selected areas of information science. This thesis contributes to the field of human computation as it applies to information science, particularly information retrieval (IR). We begin by discussing the merits and limitations of applying crowdsourcing and game-based approaches to information science. We then develop a framework that examines the value of applying crowdsourcing and game mechanisms at each step of an IR model. We identify three areas of the IR model that our framework indicates are likely to benefit from human computation methods: acronym identification and resolution, relevance assessment, and query formulation. We conduct experiments that employ human computation methods, evaluate the benefits of these methods, and report our findings. We conclude that human computation methods such as crowdsourcing and games can improve the accuracy of many tasks currently done by machine methods alone. We demonstrate that the best results are achieved when human computation methods augment computer-based IR processes, providing an extra level of skills, abilities, and knowledge that computers cannot easily replicate.
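Crowdsourced relevance assessment, one of the three IR areas identified above, is commonly aggregated by majority vote. A minimal sketch of that baseline follows; the thesis's exact aggregation scheme is not specified here, so this is only the standard approach, with invented data.

```python
from collections import Counter

def aggregate_judgments(judgments):
    """Majority-vote aggregation of crowd relevance labels per (query, doc)
    pair -- a common baseline when crowdsourcing relevance assessment.
    Returns the winning label plus the agreement rate behind it."""
    result = {}
    for pair, labels in judgments.items():
        winner, count = Counter(labels).most_common(1)[0]
        result[pair] = (winner, count / len(labels))
    return result

votes = {("q1", "doc7"): ["relevant", "relevant", "not relevant"],
         ("q1", "doc9"): ["not relevant"] * 3}
print(aggregate_judgments(votes))
```

Low agreement rates flag pairs that may need more judgments or expert review, which is one way such pipelines trade cost against accuracy.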
14. WiSDM: a platform for crowd-sourced data acquisition, analytics, and synthetic data generation

Choudhury, Ananya 15 August 2016 (has links)
Human behavior is a key factor influencing the spread of infectious diseases. Individuals adapt their daily routine and typical behavior during the course of an epidemic -- the adaptation is based on their perception of the risk of contracting the disease and its impact. As a result, it is desirable to collect behavioral data before and during a disease outbreak. Such data can help in creating better computer models that can, in turn, be used by epidemiologists and policy makers to better plan for and respond to infectious disease outbreaks. However, traditional data collection methods are not well suited to acquiring information about human behavior, especially as it pertains to epidemic planning and response. Internet-based methods are an attractive complementary mechanism for collecting behavioral information. Systems such as Amazon Mechanical Turk (MTurk) and online survey tools provide simple ways to collect such information. This thesis explores new methods for information acquisition, especially of behavioral information, that leverage this recent technology. Here, we present the design and implementation of a crowd-sourced surveillance data acquisition system -- WiSDM. WiSDM is a web-based application and can be used by anyone with access to the Internet and a browser. Furthermore, it is designed to leverage online survey tools and MTurk; WiSDM can be embedded within MTurk in an iFrame. WiSDM has a number of novel features, including: (i) the ability to support a model-based abductive reasoning loop, a flexible and adaptive information acquisition scheme driven by causal models of epidemic processes; (ii) question routing, an important feature for increasing data acquisition efficacy and reducing survey fatigue; and (iii) integrated surveys, interactive surveys that provide additional information on the survey topic and improve user motivation. We evaluate the framework's performance using Apache JMeter and present our results.
We also discuss three other extensions of WiSDM: the API Adapter, the Synthetic Data Generator, and WiSDM Analytics. The API Adapter is an ETL extension of WiSDM that enables extracting data from disparate data sources and loading it into the WiSDM database. The Synthetic Data Generator allows epidemiologists to build synthetic survey data using NDSSL's Synthetic Population as agents. WiSDM Analytics empowers users to analyze the data by writing simple Python code using Versa APIs. We also propose a data model that is conducive to survey data analysis. / Master of Science
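Question routing, feature (ii) above, means sending each respondent only the questions most relevant to them, to reduce survey fatigue. The following toy sketch uses an invented tag-overlap relevance score; it is not WiSDM's actual routing logic.

```python
def route_questions(questions, respondent, max_questions=3):
    """A toy sketch of question routing: rank questions by overlap between
    their topic tags and the respondent's profile tags, and send only the
    top few relevant ones. The scoring rule is invented for illustration."""
    def relevance(q):
        return len(set(q["tags"]) & set(respondent["tags"]))
    ranked = sorted(questions, key=relevance, reverse=True)
    return [q["id"] for q in ranked[:max_questions] if relevance(q) > 0]

questions = [
    {"id": "q_flu_shot", "tags": ["vaccination"]},
    {"id": "q_commute", "tags": ["travel", "contact"]},
    {"id": "q_mask", "tags": ["protection", "contact"]},
    {"id": "q_pets", "tags": ["animals"]},
]
respondent = {"tags": ["contact", "travel"]}
print(route_questions(questions, respondent))  # ['q_commute', 'q_mask']
```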
15. Enabling Machine Science through Distributed Human Computing

Wagy, Mark David 01 January 2016 (has links)
Distributed human computing techniques have been shown to be effective ways of accessing the problem-solving capabilities of a large group of anonymous individuals over the World Wide Web. They have been successfully applied to such diverse domains as computer security, biology and astronomy. The success of distributed human computing in various domains suggests that it can be utilized for complex collaborative problem solving. Thus it could be used for "machine science": utilizing machines to facilitate the vetting of disparate human hypotheses for solving scientific and engineering problems. In this thesis, we show that machine science is possible through distributed human computing methods for some tasks. By enabling anonymous individuals to collaborate in a way that parallels the scientific method -- suggesting hypotheses, testing and then communicating them for vetting by other participants -- we demonstrate that a crowd can together define robot control strategies, design robot morphologies capable of fast-forward locomotion and contribute features to machine learning models for residential electric energy usage. We also introduce a new methodology for empowering a fully automated robot design system by seeding it with intuitions distilled from the crowd. Our findings suggest that increasingly large, diverse and complex collaborations that combine people and machines in the right way may enable problem solving in a wide range of fields.
16. Computational Environment Design

Zhang, Haoqi 26 October 2012 (has links)
The Internet has evolved into a platform on which large numbers of individuals take action and join in collaborations via crowdsourcing, social media, and electronic commerce. When designing social and economic systems on the Internet, a key challenge is understanding how to promote particular desired behaviors and outcomes. I call this problem computational environment design. Notable abilities afforded by the Internet, such as the ability to recruit large numbers of individuals to join problem-solving efforts via crowdsourcing and social media, and the ability to engage in a data-driven iterative design process, are creating new opportunities and inspiring new methods for computational environment design. This dissertation focuses on these abilities and proposes an approach for arriving at effective designs by reasoning and learning about characteristics of participants and how these characteristics interact with a system’s design to influence behavior. The dissertation consists of two major components. The first component focuses on designing crowdsourcing and human computation systems that leverage a crowd to solve complex problems that require effective coordination among participants or the recruitment of individuals with relevant expertise. I show how reasoning about crowd abilities and limitations can lead to designs that make crowdsourcing complex tasks feasible, effective, and efficient. The solutions introduce new design patterns and methods for human computation and crowdsourcing; notable contributions include a crowdware design for tackling human computation tasks with global constraints, and incentive mechanisms for task routing that harness people’s expertise and social expertise by engaging them in both problem solving and routing. The second component focuses on understanding how to design effective environments automatically. 
I introduce a general active, indirect elicitation framework for automated environment design that learns relevant characteristics of participants based on observations of their behavior and optimizes designs based on the learned models. Theoretical contributions include an active, indirect elicitation algorithm for a sequential decision-making setting that is guaranteed to discover effective designs after a few interactions. Practical contributions include applications of the active, indirect elicitation framework to crowdsourcing. Specifically, I demonstrate how to automatically design tasks and synthesize workflows when optimizing for desired objectives under resource constraints. / Engineering and Applied Sciences
17. Human computation applied to algorithmic trading

Vincent, Arnaud 14 November 2013 (has links) (PDF)
Algorithmic trading used for speculative purposes has grown rapidly since the 2000s, first by optimizing the execution on markets of orders resulting from human arbitrage or investment decisions, then by executing pre-programmed or systematic investment strategies in which the human is confined to the role of designer and supervisor -- this despite the warnings of proponents of the Efficient Market Hypothesis (EMH), who argue that if the market is efficient, speculation is futile. Human Computation (HC) is a singular concept: it treats the human brain as the unit component of a larger machine, one that could address problems whose complexity is beyond the reach of today's computers. The concept lies at the crossroads of collective intelligence and crowdsourcing techniques for mobilizing humans (volunteer or not, aware or not, paid or not) to solve a problem or accomplish a complex task. The Fold-it project in biochemistry provided indisputable proof that human communities can form effective collective-intelligence systems, in the form of an online serious game. Algorithmic trading raises difficulties of the same order as those the creators of Fold-it faced, which led them to call on the human "CPU" to make significant progress. The question is then where and how to use HC, and how to measure its effectiveness, in a discipline that lends itself poorly to 3D modeling or to a game-based approach.
The first experiment of this thesis, founded on this HC principle, concerns the qualification and transmission of information through social networks to feed an algorithmic trading system. The experiment analyzes the Twitter buzz in real time using two different methods: an asemantic method that targets unexpected events surfaced by the Twitter network (such as the eruption of the Icelandic volcano in 2010), and a more classical semantic method that targets known themes that worry financial markets. A significant improvement in the performance of the trading algorithms is observed only for strategies using data from the asemantic method. The second HC experiment in algorithmic trading entrusts the optimization of trading strategy parameters to a community of players, in an approach inspired by Fold-it. In the online game, named Krabott, each candidate solution takes the form of a strand of DNA, and human players are called upon during the selection and reproduction phases of these solution-individuals. Krabott demonstrates the superiority of human users over the machine in exploration ability and average performance, however the results are compared: a crowd of several hundred players systematically outperformed the machine on Krabott V2 over the year 2012, results confirmed with other players on Krabott V3 in 2012-2013. Building on this, it becomes possible to construct a hybrid human-machine trading system based on an HC architecture in which each player is the CPU of a global trading system.
The thesis concludes on the competitive advantage that an HC architecture would offer, both for acquiring the data that feeds trading algorithms and for optimizing the parameters of existing strategies. In the longer term, it is reasonable to bet on the crowd's ability to autonomously design and maintain algorithmic trading strategies whose complexity would ultimately escape individual human understanding entirely.
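Krabott's design -- candidate solutions as DNA strands, with humans performing the selection and reproduction steps -- follows the shape of an interactive evolutionary algorithm. A minimal sketch under assumed details: the players' choices are stood in for by a programmatic `pick_parents` function, and the genome and fitness function are invented for illustration.

```python
import random

def crossover(a, b):
    # One-point crossover of two parameter "DNA strands".
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(population, pick_parents, fitness, generations=10):
    """Human-in-the-loop evolution in the spirit of Krabott: the machine
    handles bookkeeping, while `pick_parents` stands in for the players'
    selection step (simulated here; in the game, humans choose)."""
    for _ in range(generations):
        a, b = pick_parents(population)
        child = crossover(a, b)
        worst = min(population, key=fitness)
        population[population.index(worst)] = child  # replace the weakest
    return max(population, key=fitness)

random.seed(0)
target = [3, 1, 4, 1, 5]  # invented "ideal" strategy parameters
fitness = lambda s: -sum(abs(x - t) for x, t in zip(s, target))
pop = [[random.randint(0, 9) for _ in range(5)] for _ in range(8)]
greedy_pick = lambda p: sorted(p, key=fitness, reverse=True)[:2]
best = evolve(pop, greedy_pick, fitness, generations=50)
print(best, fitness(best))
```

In the actual game the selection pressure comes from many players' intuitions rather than a greedy rule, which is precisely where the thesis reports the crowd outperforming the machine.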
18. A mobile GWAP design for POI information system with confidence verification

Tang, Sheng Jie Unknown Date (has links)
Human computation is one of the hottest research topics in recent years. At the same time, smart phones have become indispensable in our daily life thanks to advances in hardware technology. Our goal in this work is therefore to design a mobile game on smart phones that helps us achieve our research purpose. In this thesis, we adopt the approach of "Games with a Purpose" (GWAP) to collect the data the system needs as users play a game; the data here is Point-of-Interest (POI) information. Users share their POI information on the mobile system, and other players can verify it, in order to produce reliable POI information for others to consult when visiting attractions. The research issue is how to produce reliable information as quickly as possible while ensuring the output is as correct as possible. The problem is how to assign appropriate information for a player to verify so as to improve system performance. We propose three different algorithms for assigning verification tasks to players and conduct a series of simulation experiments. The results show that the Modify Lowest Disagree First and Highest Confidence Assignment (MLDF-HCA) algorithm performs best. We also implemented the system as an Android app based on these algorithms, and the real-world experiments validated the simulation results.
19. Improving the Gameplay Experience and Guiding Bottom Players in an Interactive Mapping Game

Ambekar, Kiran 05 1900 (has links)
In game-based learning, motivating players to learn by providing a desirable gameplay experience is extremely important. However, this is not an easy task, considering the quality of today's commercial non-educational games. Throughout the gameplay, the player should be neither overwhelmed nor under-challenged. The best way to achieve this is to monitor the player's actions in the game, because these actions can reveal the reasons behind the player's performance as well as the competencies or knowledge the player lacks. Based on this information, in-game educational interventions in the form of hints can be provided to the player. The success of such games depends on their interactivity and motivational outlook, and thus on player retention. UNTANGLED is an online mapping game based on crowdsourcing, developed by the Reconfigurable Computing Lab at UNT for the mapping problem of coarse-grained reconfigurable architectures (CGRAs). It is also an educational game for teaching the concepts of reconfigurable computing. This thesis performs a qualitative comparative analysis of the gameplays of low-performing players of UNTANGLED, and the implications of this analysis are used to provide recommendations for improving the gameplay experience for these players by guiding them. The recommendations include strategies to reach a high score and a compact solution, hints in the form of preset patterns, and a clustering-based approach.
20. An Exploration of Bicyclist Comfort Levels Utilizing Crowdsourced Data

Blanc, Bryan Philip 24 September 2015 (has links)
Bicycle transportation has become a central priority of urban areas invested in improving sustainability, livability, and public health outcomes. Transportation agencies are striving to increase the comfort of their bicycle networks to improve the experience of existing cyclists and to attract new cyclists. The Oregon Department of Transportation sponsored the development of ORcycle, a smartphone application designed to collect cyclist travel, comfort, and safety information throughout Oregon. The sample resulting from the initial deployment of the application between November 2014 and March 2015 is described and analyzed within this thesis. 616 bicycle trips from 148 unique users were geo-matched to the Portland metropolitan area bicycle and street network, and the self-reported comfort level of these trips was modeled as a function of user supplied survey responses, temporal characteristics, bicycle facility/street typology, traffic volume, traffic speed, topography, and weather. Cumulative logistic regression models were utilized to quantify how these variables were related to route comfort level within separate variable groups, and then the variables were used in a pooled regression model specified by backwards stepwise selection. The results of these analyses indicated that many of the supplied predictors had significant relationships with route comfort. In particular, bicycle miles traveled on facilities with higher traffic volumes, higher posted speeds, steep grades, and less separation between bicycles and motor vehicles coincided with lower cyclist comfort ratings. User supplied survey responses were also significant, and had a greater overall model variance contribution than objectively measured facility variables. These results align with literature that indicates that built environment variables are important in predicting bicyclist comfort, but user variables may be more important in terms of the variance accounted for. 
This research outlines unique analysis methods by which future researchers and transportation planners may explore crowdsourced data, and presents the first exploration of bicyclist comfort perception data crowdsourced using a smartphone application.
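The cumulative logistic regression models used in this analysis relate a linear predictor to ordered comfort levels via P(Y ≤ j) = logistic(θ_j − xβ). A minimal numeric sketch follows; the thresholds and predictor values are made up, not the thesis's fitted coefficients.

```python
import math

def cumulative_logit_probs(x_beta, thresholds):
    """Category probabilities under a cumulative (proportional-odds)
    logistic model: P(Y <= j) = logistic(theta_j - x*beta).
    With k+1 ordered categories there are k thresholds."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - x_beta) for t in thresholds] + [1.0]
    # Per-category probabilities are successive differences of the cumulative curve.
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Four comfort levels -> three thresholds; a higher x*beta (e.g. more
# traffic, higher posted speed, steeper grade) shifts probability mass
# toward the high end of the latent scale.
thresholds = [-2.0, 0.0, 2.0]
low_stress = cumulative_logit_probs(-1.0, thresholds)
high_stress = cumulative_logit_probs(3.0, thresholds)
print([round(p, 3) for p in low_stress])
print([round(p, 3) for p in high_stress])
```

The proportional-odds assumption means a single coefficient per predictor shifts all the cumulative cut-points together, which is what makes the pooled model in the thesis interpretable across comfort levels.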
