11

Wikipedia-Based Semantic Enhancements for Information Nugget Retrieval

MacKinnon, Ian January 2008 (has links)
When the objective of an information retrieval task is to return a nugget rather than a document, query terms that exist in a document often will not be used in the most relevant nugget in the document for the query. In this thesis a new method of query expansion is proposed based on the Wikipedia link structure surrounding the most relevant articles selected either automatically or by human assessors for the query. Evaluated with the Nuggeteer automatic scoring software, which we show to have a high correlation with human assessor scores for the ciQA 2006 topics, an increase in the F-scores is found from the TREC Complex Interactive Question Answering task when integrating this expansion into an already high-performing baseline system. In addition, the method for finding synonyms using Wikipedia is evaluated using more common synonym detection tasks.
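The core idea of the abstract above — expanding a query with terms drawn from the Wikipedia link structure around articles judged relevant — can be sketched as follows. This is a minimal illustration, not the thesis's actual method: the link graph is assumed to be precomputed (e.g. from a Wikipedia dump), and the scoring (count how many relevant seed articles link to each candidate title) is an invented simplification.

```python
def expand_query(query_terms, link_graph, seed_articles, max_terms=5):
    """Expand a query with titles linked from relevant seed articles.

    link_graph maps an article title to the list of titles it links to
    (assumed precomputed); seed_articles are the articles selected as
    relevant, automatically or by assessors.
    """
    candidates = {}
    for seed in seed_articles:
        for target in link_graph.get(seed, []):
            # Score a candidate by how many seed articles link to it.
            candidates[target] = candidates.get(target, 0) + 1
    # Rank by score (descending), breaking ties alphabetically.
    ranked = sorted(candidates, key=lambda t: (-candidates[t], t))
    existing = {q.lower() for q in query_terms}
    expansion = [t for t in ranked if t.lower() not in existing][:max_terms]
    return list(query_terms) + expansion
```

For instance, seeding with two influenza-related articles that both link to "Pandemic" and "Virus" would add those shared neighbours to the query before the singly-linked titles.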
13

IntelWiki - Recommending Reference Materials in Context to Facilitate Editing Wikipedia

Chowdhury, Mohammad Noor Nawaz January 2014 (has links)
Participation in contributing content to online communities remains heavily skewed. Yet little research has focused on lowering the contribution effort. I describe a general approach to facilitating user-generated content within the context of Wikipedia. I also present the IntelWiki prototype, a design and implementation of this approach, which aims to make it easier for users to create or enhance the free-form text in Wikipedia articles. The IntelWiki system i) recommends article-relevant reference materials, ii) draws the users' attention to key aspects of the recommendations, and iii) allows users to consult the recommended materials in context. A laboratory evaluation with 16 novice Wikipedia editors revealed that, in comparison to the default Wikipedia design, IntelWiki's approach has positive impacts on editing quantity and quality. Participants also reported experiencing significantly lower mental workload while editing with IntelWiki and preferred the new design.
14

Wikipedia : Diskussionsraum und Informationsspeicher im neuen Netz /

Pentzold, Christian. January 2007 (has links) (PDF)
Partly also presented as: Chemnitz, Technical University, master's thesis, 2006.
15

A Strategy Oriented, Machine Learning Approach to Automatic Quality Assessment of Wikipedia Articles

De La Calzada, Gabriel 01 April 2009 (has links) (PDF)
This work discusses an approach to modeling and measuring information quality of Wikipedia articles. The approach is based on the idea that the quality of Wikipedia articles with distinctly different profiles needs to be measured using different information quality models. To implement this approach, a software framework written in the Java language was developed to collect and analyze information of Wikipedia articles. We report on our initial study, which involved two categories of Wikipedia articles: "stabilized" (those whose content has not undergone major changes for a significant period of time) and "controversial" (articles that have undergone vandalism, revert wars, or whose content is subject to internal discussions between Wikipedia editors). In addition, we present simple information quality models and compare their performance on a subset of Wikipedia articles with the information quality evaluations provided by human users. Our experiment shows that using special-purpose models for information quality captures user sentiment about Wikipedia articles better than using a single model for both categories of articles.
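The two-model idea above amounts to classifying an article's profile first and then applying a category-specific quality model. The sketch below is a toy rendering of that dispatch: the revert threshold, feature names, and the two linear models are all invented for illustration, not taken from the thesis.

```python
def stabilized_model(article):
    # Hypothetical model: quality grows with the number of references.
    return min(1.0, article["references"] / 50)

def controversial_model(article):
    # Hypothetical model: quality grows with the breadth of editorship.
    return min(1.0, article["editors"] / 100)

def assess_quality(article, models):
    """Dispatch an article to a category-specific quality model.

    Articles with many reverts are treated as "controversial"; the
    threshold of 10 is an arbitrary placeholder.
    """
    category = "controversial" if article["reverts"] > 10 else "stabilized"
    return category, models[category](article)
```

The design point the study makes is precisely this dispatch step: a single model fitted across both profiles matched user judgments less well than routing each article to a model tuned for its category.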
16

Free riding or just surfing : applied ethics.

Aboobaker, Yusuf 08 January 2014 (has links)
In the broadest sense, the paper examines the use of the internet and what obligations, if any, we have while using it, taking Wikipedia as a reference site. We drew on the literature on the free rider problem, deconstructed it into its relevant elements, and then built a framework to which the case of Wikipedia can be applied. The results of applying the framework show that, at times, users who browse the internet are not merely surfing but free riding, and as such may be morally liable to those internet sites.
17

Construcción automática de cajas de información para Wikipedia

Sáez Binelli, Tomás Andrés January 2018 (has links)
Civil Engineer in Computing / Infoboxes are summary tables that aim to briefly describe an entity by presenting its main characteristics clearly and in an established format. Unfortunately, these infoboxes are built manually by Wikipedia editors, which means that many articles in less common languages lack infoboxes or have ones of low quality. Using Wikidata as the information source, the challenge of this work is to order and select properties and values by importance, in order to produce a concise infobox with the information ranked by relevance. With this goal in mind, this work proposes one control strategy and four experimental strategies for building infoboxes automatically. In the course of this work, an API is implemented in Django that receives a request specifying the entity, the language, and the strategy to use to generate the infobox; the response is a JSON representation of the generated infobox. A graphical interface is also built to allow quick use of this API and to facilitate a comparative evaluation of the different strategies. The comparative evaluation presents respondents with a list of 15 entities whose 5 infoboxes (one per strategy) have been precomputed and displayed side by side. Assigning a grade from 1 (lowest rating) to 7, 12 users evaluated each infobox, yielding a total of 728 ratings. The results indicate that the best-rated strategy combines the frequency of a property and the PageRank of its value as indicators of importance.
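The best-rated strategy in the abstract above orders infobox rows by combining a property's frequency with the PageRank of its value. A minimal sketch of that ranking step is below; the multiplicative combination and all example figures are assumptions for illustration, and the frequency and PageRank tables would in practice be precomputed from Wikidata and the Wikipedia link graph.

```python
def rank_infobox_properties(entity_props, prop_frequency, value_pagerank):
    """Order candidate (property, value) pairs for an infobox.

    prop_frequency: how often the property appears across entities of
    the same class (0..1, assumed precomputed from Wikidata).
    value_pagerank: PageRank of the value's own article (assumed
    precomputed from the Wikipedia link graph; 0 for literals).
    """
    def score(pair):
        prop, value = pair
        # Combine the two signals; a product is one plausible choice.
        return prop_frequency.get(prop, 0.0) * value_pagerank.get(value, 0.0)
    return sorted(entity_props, key=score, reverse=True)
```

With this scheme, a common property whose value is a prominent article (e.g. a birthplace that is a capital city) floats to the top, while rare properties or literal values sink toward the bottom of the infobox.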
18

Attityder till Wikipedia : En undersökning bland lärare och studenter på läkar- och psykologprogrammen vid Umeå universitet

Zjajo, Mirza January 2008 (has links)
No description available.
20

Värdeskapande och värderingar på ett online community : En studie om Wikipedia

Kaluza, Johan January 2012 (has links)
Background - With the growth of the internet, new channels of communication have opened up, enabling co-creation between users independent of time and place. Wikipedia is a community that has benefited from the new technology the internet has brought, and the site is today the sixth largest on the internet. Like all communities, Wikipedia is governed by norms and values (Schouten & McAlexander 1995). These values enable users to create articles and thereby the value of Wikipedia. Vargo and Lusch (2004), however, argue that organizations can only make service offerings and that value is then created together with the user. The study maps how Wikipedia's values steer users to create articles on the site. This is then related to the service offering Wikipedia makes to its users, to see how the values affect value creation. Problem statement - How do shared values govern value creation on Wikipedia? Frame of reference - The theory in the frame of reference covers the governance of communities (Schouten & McAlexander 1995) and how value is co-created in processes (Vargo & Lusch 2004). Since Wikipedia is an online community, there is also a section on how value is created within so-called brand communities (Schau et al. 2009). Method - The study was carried out using a qualitative method called netnography. The method is based on Kozinets's (2002, 2006) four steps for conducting a netnographic study, modified in line with Langer and Beckman (2005). Discussion and conclusion - Depending on how the values govern the creation of articles, Wikipedia's service offering will correspond to what the community creates. The values can steer article creation so that the service offering follows five different paths, from fundamentally following Wikipedia's official principles to neglecting them.
Based on Echeverri and Skålén (2011), the preconditions for Wikipedia's value creation are then derived.
