291

Découverte des relations dans les réseaux sociaux / Relationship discovery in social networks

Raad, Elie 22 December 2011 (has links)
In recent years, social network sites have exploded in popularity and become an important part of online activity on the web. This success is related to the various services and functionalities provided by each site (ranging from media sharing, tagging, and blogging to online social networking), pushing users to subscribe to several sites and consequently to create several social networks for different purposes and contexts (professional, private, etc.).
Nevertheless, current tools and sites provide limited functionality for organizing and identifying relationship types within and across social networks, which is required in several scenarios, such as enforcing users' privacy and enhancing targeted social content sharing. In particular, none of the existing social network sites provides a way to automatically identify relationship types while considering users' personal information and published data. In this work, we propose a new approach to identify relationship types among users within either a single social network or several ones. We provide a user-oriented framework able to consider several features and shared data available in user profiles (e.g., name, age, interests, photos). This framework is built on a rule-based approach that operates at two levels of granularity: 1) within a single social network, to discover social relationships (colleagues, relatives, friends, etc.) by exploiting mainly photos' features and their embedded metadata, and 2) across different social networks, to discover co-referent relationships (the same real-world person) by considering all profile attributes, weighted according to the user profile and the social network contents. At each level of granularity, we generate a set of basic and derived rules that are both used to discover relationship types. To generate the basic rules, we propose two distinct methodologies. On one hand, social relationship basic rules are generated from a photo dataset constructed using crowdsourcing. On the other hand, using all weighted attributes, co-referent relationship basic rules are generated from the available pairs of profiles sharing the same unique identifier attribute values. To generate the derived rules, we use a mining technique that takes into account the context of each user, namely by identifying frequently used valid basic rules. We present our prototype, called RelTypeFinder, implemented to validate our approach. It discovers different relationship types, generates synthetic datasets, collects web data and photos, and generates mining rules. We also describe the sets of experiments conducted on real-world and synthetic datasets. The evaluation results demonstrate the efficiency of the proposed relationship discovery approach.
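The cross-network matching step lends itself to a small illustration. The sketch below shows one way a weighted-attribute co-referent rule could look in Python; the attribute names, weights, similarity function, and threshold are illustrative assumptions, not the thesis's actual rules.

```python
# Hedged sketch of weighted profile matching for co-referent discovery.
# All weights, attributes, and thresholds here are invented for illustration.
from difflib import SequenceMatcher

def text_sim(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    if not a or not b:
        return 0.0
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Per-attribute weights; in the thesis these depend on the user profile
# and on the content of each social network.
WEIGHTS = {"name": 0.4, "location": 0.2, "age": 0.1, "interests": 0.3}

def profile_similarity(p1: dict, p2: dict) -> float:
    """Weighted combination of attribute similarities between two profiles."""
    score, total = 0.0, 0.0
    for attr, w in WEIGHTS.items():
        if attr in p1 and attr in p2:
            score += w * text_sim(str(p1[attr]), str(p2[attr]))
            total += w
    return score / total if total else 0.0

def is_coreferent(p1: dict, p2: dict, threshold: float = 0.8) -> bool:
    """A basic rule: profiles above a threshold are co-referent candidates."""
    return profile_similarity(p1, p2) >= threshold

a = {"name": "Elie Raad", "location": "Dijon, France", "interests": "data mining"}
b = {"name": "E. Raad", "location": "Dijon", "interests": "data mining, social networks"}
print(is_coreferent(a, b, threshold=0.6))
```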
292

Crowdsourcing in pay-as-you-go data integration

Osorno Gutierrez, Fernando January 2016 (has links)
In pay-as-you-go data integration, feedback can inform the regeneration of different aspects of a data integration system and, as a result, helps to improve the system's quality. However, feedback can be expensive: the amount of feedback required to annotate all the possible integration artefacts is potentially large, while budgets may be limited. Feedback can also be used in different ways. Feedback of different types, collected in different orders, can have different effects on the quality of the integration, and some feedback types yield more benefit than others. There is therefore a need for techniques to collect feedback effectively. Previous efforts have explored the benefit of feedback on a single aspect of the integration, but have not considered the benefit of different feedback types within a single integration task. We have investigated the annotation of mapping results using crowdsourcing, implementing techniques to ensure reliability. The results indicate that precision estimates derived from crowdsourcing improve rapidly, suggesting that crowdsourcing can be used as a cost-effective source of feedback. We propose an approach to maximize the improvement of data integration systems given a budget for feedback. Our approach takes into account the annotation of schema matchings, mapping results, and pairs of candidate record duplicates. We define a feedback plan, which indicates the type of feedback to collect, the amount of feedback to collect, and the order in which different types of feedback are collected. We define a fitness function and a genetic algorithm to search for the most cost-effective feedback plans, and we implemented a framework to test the application of feedback plans and measure the improvement of different data integration systems. In the framework, we use a greedy algorithm for the selection of mappings, and we designed quality measures to estimate the quality of a dataspace after the application of a feedback plan. For the evaluation of our approach, we propose a method to generate synthetic data scenarios with different characteristics. The results show that the generated feedback plans achieve higher quality values than randomly generated feedback plans in several scenarios.
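A genetic search over feedback plans can be sketched compactly. The toy implementation below assumes a fixed budget, the three feedback types named in the abstract, and a made-up diminishing-returns benefit function standing in for the thesis's fitness function; none of these values come from the thesis itself.

```python
import random

# Feedback types from the abstract; the per-type benefit values are assumptions.
TYPES = ["matching", "mapping_result", "duplicate_pair"]
BENEFIT = {"matching": 3.0, "mapping_result": 2.0, "duplicate_pair": 1.5}
BUDGET = 20  # total feedback items we can afford

def random_plan():
    return [random.choice(TYPES) for _ in range(BUDGET)]

def fitness(plan):
    # Diminishing returns: each repeated annotation of the same type is worth less.
    seen, total = {}, 0.0
    for t in plan:
        seen[t] = seen.get(t, 0) + 1
        total += BENEFIT[t] / seen[t]
    return total

def crossover(a, b):
    cut = random.randrange(1, BUDGET)
    return a[:cut] + b[cut:]

def mutate(plan, rate=0.1):
    return [random.choice(TYPES) if random.random() < rate else t for t in plan]

def evolve(generations=50, pop_size=30):
    pop = [random_plan() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                      # keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), best)
```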
293

Combined decision making with multiple agents

Simpson, Edwin Daniel January 2014 (has links)
In a wide range of applications, decisions must be made by combining information from multiple agents with varying levels of trust and expertise. For example, citizen science involves large numbers of human volunteers with differing skills, while disaster management requires aggregating information from multiple people and devices to make timely decisions. This thesis introduces efficient and scalable Bayesian inference for decision combination, allowing us to fuse the responses of multiple agents in large, real-world problems and account for the agents' unreliability in a principled manner. As the behaviour of individual agents can change significantly, for example if agents move in a physical space or learn to perform an analysis task, this work proposes a novel combination method that accounts for these time variations in a fully Bayesian manner using a dynamic generalised linear model. This approach can also be used to augment agents' responses with continuous feature data, thus permitting decision-making when agents' responses are in limited supply. Working with information inferred using the proposed Bayesian techniques, an information-theoretic approach is developed for choosing optimal pairs of tasks and agents. This approach is demonstrated by an algorithm that maintains a trustworthy pool of workers and enables efficient learning by selecting informative tasks. The novel methods developed here are compared theoretically and empirically to a range of existing decision combination methods, using both simulated and real data. The results show that the methodology proposed in this thesis improves accuracy and computational efficiency over alternative approaches and allows insights to be drawn into the behavioural groupings of agents.
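As a rough illustration of this kind of decision combination, the sketch below fuses the labels of three agents with known confusion matrices in a simple Bayesian update. The reliability values and the uniform prior are assumptions, and this static toy is only a much-simplified flavour of the fully Bayesian, dynamic models developed in the thesis.

```python
import numpy as np

prior = np.array([0.5, 0.5])             # assumed P(true label)
# confusion[a][j, k] = assumed P(agent a reports k | true label j)
confusion = [
    np.array([[0.9, 0.1], [0.2, 0.8]]),  # a fairly reliable agent
    np.array([[0.6, 0.4], [0.5, 0.5]]),  # a near-random agent
    np.array([[0.8, 0.2], [0.1, 0.9]]),  # another reliable agent
]

def fuse(responses):
    """Posterior over the true label given one response per agent."""
    post = prior.copy()
    for a, r in enumerate(responses):
        post = post * confusion[a][:, r]  # multiply in each agent's likelihood
    return post / post.sum()

# Agents 0 and 2 say "1"; the unreliable agent 1 says "0".
print(fuse([1, 0, 1]))  # posterior mass favours label 1
```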
294

Harnessing Collective Intelligence for Translation: An Assessment of Crowdsourcing as a Means of Bridging the Canadian Linguistic Digital Divide

O'Brien, Steven January 2011 (has links)
This study attempts to shed light on the efficacy of crowdsourcing as a means of translating web content in Canada. In it, we explore whether a model can be created that estimates the effectiveness of crowdsourced translation as a means of bridging the Canadian Linguistic Digital Divide. To test our hypotheses and models, we use structural equation modeling techniques coupled with confidence intervals for comparing experimental crowdsourced translations to both professional and machine translation baselines. Furthermore, we explore a variety of factors that influence the quality of the experimental translations, how those translations performed in the context of their source text, and the ways in which views of the quality of the experimental translations were measured before and after participants were made aware of how those translations were created.
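A flavour of the interval-based comparison can be given with a bootstrap sketch in Python. The rating vectors below are fabricated placeholders, not data from the study, and the thesis's structural equation models are not reproduced here.

```python
import random

# Fabricated example ratings on a 1-5 quality scale (NOT data from the study).
crowd = [3.8, 4.1, 3.5, 4.0, 3.9, 3.6, 4.2, 3.7]
pro   = [4.3, 4.5, 4.1, 4.4, 4.2, 4.6, 4.0, 4.4]

def bootstrap_ci(data, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean rating."""
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2))]
    return lo, hi

print("crowd 95% CI:", bootstrap_ci(crowd))
print("pro   95% CI:", bootstrap_ci(pro))
# Overlapping intervals would suggest the quality gap is not clearly resolved.
```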
295

Dokumentation crowdgesourct: Social Tagging als Methode der Inhaltserschließung im Museum / Crowdsourced documentation: social tagging as a method of subject indexing in museums

Weinhold, Julia 30 May 2016 (has links)
Museum objects have always been documented and indexed by specialist staff at their respective institutions, that is, described and categorized by content using keywords. Above all, this serves to make the records quickly retrievable from the databases later. When, instead of specialist staff, an anonymous mass of internet users (the crowd) jointly and completely freely indexes, for example, object records made available online, this process is called social tagging. In the USA and Great Britain, larger projects such as the steve.museum study have already tested what effects social tagging can have in the museum sector. The Brooklyn Museum, with Tag You're It!, and the British Public Catalogue Foundation, in cooperation with the BBC, with BBC Your Paintings, are further important example projects in this field. In Germany, by contrast, crowdsourcing methods such as social tagging are still viewed with scepticism. This thesis examines whether, and which, reservations against social tagging in museums are actually justified. It shows what opportunities the method offers museums, but also what risks and problem areas come with it, and it addresses when its use makes sense and how it can be planned successfully. The focus lies on the only two German tagging projects in the museum and cultural sector, ARTigo of the Ludwig-Maximilians-Universität München and Tag.Check.Score of the Ethnologisches Museum Berlin: two expert interviews with senior staff provided detailed insights into the processes, planning, concepts, and results of these projects.
Table of contents:
1. Introduction
2. Social Tagging: Definition and Classification
2.1 Brief History
2.2 Social Tagging in the Context of Information Science
2.3 Social Tagging as a Subfield of Crowdsourcing
2.4 First Projects in the Museum Sector
2.5 German Projects in the Museum and Cultural Sector
2.5.1 ARTigo
2.5.2 Tag.Check.Score
3. Opportunities and Risks
3.1 Opportunities and Perspectives
3.1.1 Improved Retrieval
3.1.2 More Associative Access to the Collection
3.1.3 Visitor Loyalty and Participation
3.1.4 Resource Savings
3.1.5 A Future Basis of Legitimacy for the Humanities
3.2 Risks and Problem Areas
3.2.1 Risk of Misuse
3.2.2 Insufficient Quality
3.2.3 Image and Media Licensing
4. Basic Requirements
4.1 A New Self-Conception
4.2 Clear Responsibilities
4.3 Sustainable Use of Results and Long-Term Continuation
4.4 IT and Staff Resources
4.5 Heterogeneity and Quality of the Digitized Material
5. Summary
6. Literature
Appendix
296

Sequential Information Acquisition and Decision Making in Design Contests: Theoretical and Experimental Studies

Murtuza Shergadwala (9183527) 30 July 2020 (has links)
The primary research question of this dissertation is: How do contestants make sequential design decisions under the influence of competition? To address this question, I study the influence of three factors that can be controlled by the contest organizers on the contestants' sequential information acquisition and decision-making behaviors. These factors are (i) a contestant's domain knowledge, (ii) the framing of a design problem, and (iii) information about historical contests. The central hypothesis is that by conducting controlled behavioral experiments we can acquire data on contestant behaviors that can be used to calibrate computational models of contestants' sequential decision-making behaviors, thereby enabling predictions about the design outcomes. The behavioral results suggest that (i) contestants better understand problem constraints and generate more feasible design solutions when a design problem is framed in a domain-specific context rather than a domain-independent one, (ii) contestants' efforts to acquire information about a design artifact in order to make design improvements are significantly affected by the information provided to them about their opponent, who is competing to achieve the same objectives, and (iii) contestants make information acquisition decisions, such as when to stop acquiring information, based on various criteria, including the number of resources, the target objective value, and the observed improvement in their design quality; the threshold values of these criteria are influenced by the information the contestants have about their opponent. The results imply that (i) by understanding the influence of an individual's domain knowledge and the framing of a problem, we can provide decision-support tools that help contestants in engineering design contexts acquire problem-specific information, (ii) we can enable contest designers to decide what information to share to improve the quality of the design outcomes of a contest, and (iii) from an educational standpoint, we can enable instructors to provide students with accurate assessments of their domain knowledge by understanding students' information acquisition and decision-making behaviors in their design projects. The primary contribution of this dissertation is a set of computational models of an individual's sequential decision-making process that incorporate the behavioral results discussed above in competitive design scenarios. In addition, a framework for conducting factorial investigations of human decision making through a combination of theory and behavioral experimentation is illustrated.
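The stopping criteria described above can be made concrete with a toy simulation. The thresholds and the noisy, diminishing-improvement model in this sketch are assumptions for illustration, not the dissertation's calibrated model.

```python
import random

def run_contestant(budget=20, target=0.9, min_improvement=0.005):
    """Simulate one contestant who stops when resources run out, a target
    quality is met, or recent improvement stalls (all assumed criteria)."""
    quality, history = 0.0, []
    for trial in range(budget):
        # Each information-acquisition step yields a noisy, diminishing gain.
        gain = max(0.0, random.gauss(0.1 / (trial + 1), 0.02))
        quality = min(1.0, quality + gain)
        history.append(quality)
        if quality >= target:
            return trial + 1, quality, "target reached"
        if len(history) >= 3 and history[-1] - history[-3] < min_improvement:
            return trial + 1, quality, "improvement stalled"
    return budget, quality, "resources exhausted"

print(run_contestant())
```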
297

Data4City – A Hyperlocal Citizen App

Urban, Adam, Hick, David, Noennig, Jörg Rainer 29 April 2019 (has links)
Exploring the phenomenon of smart cities, this paper elaborates on the potential of crowdsourced data collection in small-scale urban quarters. The development of the Data4City (D4C) hyperlocal app, PinCity, is based on the idea of increasing the density of real-time information in urban neighborhoods in order to optimize or create innovative urban services (such as public transportation or garbage collection) and to support urban planning, with the long-term goal of improving the quality of life of the quarter's inhabitants. The app's main principle is small-scale implementation, as opposed to top-down smart city approaches worldwide: it starts in a single city quarter or community and can subsequently be scaled and interlaced with other parts of the city.
298

Online fundraising a mikrofinancování v sociální síti a na Webu 2.0 / Online fundraising and microfinance in the context of social networks and Web 2.0

Richterová, Daniela January 2011 (has links)
Identification record: Richterová, Daniela. Online fundraising a mikrofinancování v sociální síti a na Webu 2.0 [Online fundraising and microfinance in the context of social networks and Web 2.0]. Prague, 2011. 102 pp. Master's thesis. Charles University in Prague, Faculty of Arts, Institute of Information Studies and Librarianship, 2011. Supervisor: Mgr. Denisa Kera, Ph.D. Abstract: The main focus of this thesis is to characterize online fundraising and microfinance tools in the context of Web 2.0 and social media. The main goal is to analyze and assess existing philanthropic portals in the Czech Republic and abroad. The thesis includes a theoretical explanation of the terminology, principles, and technologies of Web 2.0 and social media platforms and their connection to online philanthropy. New phenomena such as crowdfunding, P2P lending, and crowdsourcing are clarified. The analytical part focuses on the evaluation of the fundraising portals Network for Good and GlobalGiving and the microfinance portals Kiva and LoanBack. Czech philanthropic online projects are represented by Šance pro draha, myELEN, Daruj správně, and Skutečný dárek. Particular emphasis is given to monitoring and evaluating the extent to which Web 2.0 principles and technologies are integrated. The conclusions contain suggestions for implementing the findings...
299

Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping

Kadlec, Jiri 01 March 2016 (has links) (PDF)
This dissertation presents the design, development, and validation of new data integration methods for mapping the extent of snow cover based on open-access ground station measurements, remote sensing images, volunteer observer snow reports, and cross-country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure covers data discovery, data retrieval, and quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from the multiple organizations listed in the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Use of the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI and by automating uploads of real-time sensor observations to a CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average time required to retrieve 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can serve as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third and final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports, and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous, daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing the estimated snow cover to the actual MODIS snow cover. The percent correctly classified indicator showed an accuracy between 80 and 90% using this method, and using crowdsourced data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7–1.2%. The output snow probability map datasets are published online using web applications and web services.
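The gap-filling step can be illustrated with a plain inverse distance weighting sketch. The thesis develops a custom IDW variant combining several observation sources; the coordinates, values, and power parameter below are made-up examples, not the thesis's actual formulation.

```python
import math

def idw(target, observations, power=2.0):
    """Estimate snow presence probability at `target` from (x, y, value) points
    by inverse distance weighting. Plain-vanilla IDW, shown for illustration."""
    num, den = 0.0, 0.0
    for x, y, value in observations:
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0:
            return value  # exact hit: use the observation directly
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# value = 1.0 for "snow present" (e.g., a ski track), 0.0 for "no snow".
obs = [(0.0, 0.0, 1.0), (1.0, 0.5, 1.0), (3.0, 3.0, 0.0)]
print(idw((0.5, 0.5), obs))  # closer to the snowy points, so near 1.0
```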
300

Efficient Disambiguation of Task Instructions in Crowdsourcing

Venkata Krishna Chaithanya Manam (15354805) 27 April 2023 (has links)
Crowdsourcing allows users to offload tedious work to an on-demand workforce. However, the time saved by the requesters is often offset by the time they must spend preparing instructions and refining them to address the ambiguities that typically arise. If crowdsourcing is to become viable and result in net gains for requesters, requesters must be able to obtain high-quality results with a low investment of time in writing instructions. That might mean finding ways to accommodate hastily written instructions. Instruction quality could be improved by resolving ambiguities either with the help of crowd workers or by using NLP-based tools.
In this dissertation, I present 1) a taxonomy of ambiguities that can occur in task instructions, 2) a workflow that enables requesters to resolve ambiguities before posting tasks to workers, 3) a set of methods to improve the quality of instructions while workers are working on the task, and, finally, 4) a system that leverages current NLP technologies to detect ambiguities automatically before they are posted to the workers.
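As a hint of what automatic ambiguity detection might look like, the toy sketch below flags vague terms in task instructions using a hand-made lexicon. Both the term list and the sentence-splitting rule are invented for illustration; the dissertation's NLP-based system is more sophisticated than this.

```python
import re

# Invented lexicon of vague terms; a real detector would use NLP models.
VAGUE_TERMS = {"some", "several", "appropriate", "etc", "relevant", "good", "bad"}

def flag_ambiguities(instructions: str):
    """Return (sentence index, sentence, vague terms found) for each hit."""
    findings = []
    for i, sentence in enumerate(re.split(r"(?<=[.!?])\s+", instructions)):
        hits = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w in VAGUE_TERMS]
        if hits:
            findings.append((i, sentence.strip(), hits))
    return findings

task = "Label several relevant objects in each image. Skip bad photos, etc."
for idx, sent, hits in flag_ambiguities(task):
    print(f"sentence {idx}: {hits} -> {sent}")
```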
