101 |
"Det första jag gör när jag vaknar är att logga in på Facebook" : En studie om hur användare av de sociala nätverken upplever den användargenererade produktionen / Magnusson, Per, January 2011 (has links)
Abstract
Title: The first thing I do when I wake up is to log on to Facebook. A study of how users of social networks experience User Generated production. (Det första jag gör när jag vaknar är att logga in på Facebook. En studie om hur användare av de sociala nätverken upplever den användargenererande produktionen.)
Number of pages: 37 (39 including enclosures)
Author: Per Magnusson
Tutor: Christian Christensen
Course: Media and Communication Studies C
Period: Fall semester 2010
University: Division of Media and Communication, Department of Information Science, Uppsala University
Aim: The ambition of this essay is to examine how online users of social networks experience the development of the Internet towards Web 2.0, and the enormous amount of User Generated Content emerging from it. How do we receive User Generated Content from other people, and in particular, what do we think about our own production?
Material/Method: In order to obtain a deeper understanding of people's attitudes and perceptions regarding this subject, qualitative interviews were conducted with nine people. To achieve breadth in the representation, three categories of people were used: active bloggers as the first category; people who frequently and actively use social networks as the second; and, as the third and last category, people who actively log on to at least one social network without contributing to the user generated content. The interviewees are between 20 and 30 years old.
Main results: The essay shows that the development of the Internet towards Web 2.0 has a positive impact on people's lives. The participants in this study produce and participate in social networks mainly because it is social, fun, educational and a great pastime. Social networking has become ingrained in people's lives, and they all like to observe what other people do.
It turns out that the general attitude towards one's own User Generated production is that you do not even think about it. The participants do not see User Generated Content as a production; rather, they see it as participation in a social context.
Keywords: User Generated Content, Production, Produsage, Web 2.0, Social Networks, Internet, Facebook, Wikipedia.
|
102 |
Cross-lingual Information Retrieval On Turkish And English Texts / Boynuegri, Akif, 01 April 2010 (has links) (PDF)
In this thesis, cross-lingual information retrieval (CLIR) approaches are comparatively evaluated for Turkish and English texts. As a complementary study, knowledge-based methods for word sense disambiguation (WSD), one of the most important parts of CLIR studies, are compared for Turkish words.
Query translation and sense-indexing-based CLIR approaches are used in this study. In the query translation approach, we use automatic and manual word sense disambiguation methods and the Google translation service during translation of queries. In the sense-indexing-based approach, documents are indexed according to the meanings of words instead of the words themselves, and retrieval of documents is likewise performed according to the meanings of the query words. During the identification of the intended meaning of query terms, manual and automatic word sense disambiguation methods are used and compared to each other.
Knowledge-based WSD methods that use different gloss enrichment techniques are compared for Turkish words. Turkish WordNet is used as the primary knowledge base, and English WordNet and Turkish Wikipedia are employed as enrichment resources. Meanings of words are more clearly identified by using the semantic relations defined in the WordNets and Turkish Wikipedia. Also, during the calculation of the semantic relatedness of senses, the cosine similarity metric is used as an alternative to the word overlap count. The effects of using the cosine similarity metric are observed for each WSD method that uses a different knowledge base.
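The two relatedness scores the abstract compares, word overlap count and cosine similarity between glosses, can be sketched as follows. This is a minimal illustration over bag-of-words gloss vectors; the glosses and the selection rule are illustrative assumptions, not taken from the thesis.

```python
from collections import Counter
from math import sqrt

def overlap_score(gloss_a, gloss_b):
    """Word overlap count: total number of shared word occurrences."""
    a, b = Counter(gloss_a.lower().split()), Counter(gloss_b.lower().split())
    return sum((a & b).values())  # multiset intersection

def cosine_score(gloss_a, gloss_b):
    """Cosine similarity between bag-of-words term-frequency vectors."""
    a, b = Counter(gloss_a.lower().split()), Counter(gloss_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def disambiguate(context, sense_glosses, score=cosine_score):
    """Pick the sense whose (possibly enriched) gloss is closest to the context."""
    return max(sense_glosses, key=lambda s: score(context, sense_glosses[s]))
```

Enriching a gloss with related WordNet or Wikipedia text simply lengthens the gloss string before scoring, which is where the two metrics start to diverge in behavior.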
|
103 |
Wikipedia : auslösende und aufrechterhaltende Faktoren der freiwilligen Mitarbeit an einem Web-2.0-Projekt (Wikipedia: triggering and sustaining factors of voluntary contribution to a Web 2.0 project) / Schroer, Joachim, January 1900 (has links)
Originally presented as the author's thesis (doctoral)--Universität Würzburg, 2008. / Includes bibliographical references.
|
104 |
Folk Wiki: The shared traditions of folk music and the Wiki way / Chamberlin, Phillip Mark, 01 June 2006 (has links)
Wiki is often perceived as representing a revolutionary break from conventional notions of authorship, writing, and textual history. Dialogues concerning Wiki tend to ignore the characteristics that Wiki shares with earlier forms of collaboration, particularly folk music. First, in both Wiki and folk music, content is often collectively shared and authored, even if specific individuals create and change the content; many collaborators are anonymous, quasi-anonymous, or pseudo-anonymous, but the perception of this anonymity is, in both genres, problematic. Second, both Wiki documents and folk songs exist in the "Eternal Now," a seemingly perpetual state that makes these texts available for addition, division, or deletion; both forms of text resist finality. Third, both forms of text can involve complicated textual histories as they split and merge into versions and variants, and the geographical spaces involved in this process influence the ultimate outcomes of each version and variant. Finally, much of the language used to describe Wiki can also be used to describe folk music.
|
105 |
Rich Linguistic Structure from Large-Scale Web Data / Yamangil, Elif, 18 October 2013 (has links)
The past two decades have shown an unexpected effectiveness of Web-scale data in natural language processing. Even the simplest models, when paired with unprecedented amounts of unstructured and unlabeled Web data, have been shown to outperform sophisticated ones. It has been argued that the effectiveness of Web-scale data has undermined the necessity of sophisticated modeling or laborious data set curation. In this thesis, we argue for and illustrate an alternative view, that Web-scale data not only serves to improve the performance of simple models, but also can allow the use of qualitatively more sophisticated models that would not be deployable otherwise, leading to even further performance gains. / Engineering and Applied Sciences
|
106 |
Data-rich document geotagging using geodesic grids / Wing, Benjamin Patai, 07 July 2011 (has links)
This thesis investigates automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document’s raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset. / text
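The grid-based approach described above can be sketched as follows, under the assumption of a simple per-cell unigram language model with add-alpha smoothing (the thesis's actual models, grid resolutions, and parameters may differ); the haversine formula gives the great-circle prediction error in kilometers.

```python
import math
from collections import Counter, defaultdict

def cell_of(lat, lon, deg=1.0):
    """Assign a coordinate to a grid cell of `deg` x `deg` degrees."""
    return (math.floor(lat / deg), math.floor(lon / deg))

def train(docs):
    """docs: list of (text, lat, lon). Returns per-cell word counts and cell centroids."""
    counts, coords = defaultdict(Counter), defaultdict(list)
    for text, lat, lon in docs:
        c = cell_of(lat, lon)
        counts[c].update(text.lower().split())
        coords[c].append((lat, lon))
    centers = {c: (sum(p[0] for p in ps) / len(ps), sum(p[1] for p in ps) / len(ps))
               for c, ps in coords.items()}
    return counts, centers

def predict(text, counts, centers, alpha=0.1):
    """Return the centroid of the cell with the highest smoothed log-likelihood."""
    words = text.lower().split()
    def loglik(c):
        total, vocab = sum(counts[c].values()), len(counts[c]) + 1
        return sum(math.log((counts[c][w] + alpha) / (total + alpha * vocab)) for w in words)
    return centers[max(counts, key=loglik)]

def km_error(p, q):
    """Great-circle distance in kilometers (haversine, Earth radius 6371 km)."""
    (la1, lo1), (la2, lo2) = ((math.radians(a), math.radians(b)) for a, b in (p, q))
    h = math.sin((la2 - la1) / 2) ** 2 + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))
```

The reported median errors (11.8 km for Wikipedia, 479 km for Twitter) would correspond to the median of `km_error` over a held-out set of geotagged documents.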
|
107 |
Wikipedia som källa? : Är det accepterat vid studier i ämnet medie- och kommunikationsvetenskap vid Uppsala universitet? / Salomon, Susanna, January 2008 (has links)
Abstract
Title: Wikipedia as a source? Is it accepted in the studies of Media and Communications at Uppsala University? (Wikipedia som källa? Är det accepterat vid studier i ämnet medie- och kommunikationsvetenskap vid Uppsala universitet?)
Number of pages: 38 (39 including enclosures)
Author: Susanna Salomon
Tutor: Else Nygren
Course: Media and Communication Studies C
Period: Autumn 2007
University: Division of Media and Communication, Department of Information Science, Uppsala University
Purpose/Aim: The purpose is to study whether or not the Internet encyclopedia Wikipedia is an accepted source when a student writes a paper in Media and Communications at Uppsala University.
Material/Method: A qualitative research method based on interviews with teachers and on literature.
Main results: The study shows that there is no common view within the faculty on whether or not Wikipedia may be used as a source when writing a paper in Media and Communications. Some accept it, others do not. The results show that the teachers of this subject at Uppsala University have not yet decided how to adjust to the large new information bank which is Wikipedia.
Keywords: Wikipedia, Uppsala University, sources, reliability, objectivity, collective intelligence, the Internet
|
108 |
Indexation et recherche conceptuelles de documents pédagogiques guidées par la structure de Wikipédia (Concept-based indexing and retrieval of educational documents guided by the structure of Wikipédia) / Abi Chahine, Carlo, 14 October 2011 (has links) (PDF)
This thesis proposes a system to assist the indexing and retrieval of educational documents, based on the use of Wikipédia. The indexing-assistance tool supports documentalists in the validation, filtering and selection of the themes, concepts and keywords produced by automatic extraction from a document. By analysing the textual data of a document, we propose to the documentalist a list of descriptors that represent and discriminate the document. The documentalist's work is then limited to a quick reading of the document and to selecting or removing the descriptors suggested by the system, in order to make the indexing homogeneous, discriminating and exhaustive. For this, we use Wikipédia as a knowledge base. The model used for descriptor extraction also makes it possible to perform information retrieval over a corpus of already indexed documents.
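One way the descriptor-suggestion step could be sketched is a TF-IDF ranking restricted to terms that are also Wikipédia titles; this is an illustrative assumption, as the thesis's actual model exploits Wikipédia's structure far more deeply.

```python
import math
from collections import Counter

def suggest_descriptors(doc, corpus, wiki_titles, k=5):
    """Rank candidate descriptors (document terms that are also Wikipédia titles)
    by TF-IDF, so the documentalist only has to accept or reject the top-k list."""
    tf = Counter(w for w in doc.lower().split() if w in wiki_titles)
    n = len(corpus)
    def idf(w):
        df = sum(1 for d in corpus if w in d.lower().split())
        return math.log((n + 1) / (df + 1)) + 1  # smoothed IDF
    return sorted(tf, key=lambda w: tf[w] * idf(w), reverse=True)[:k]
```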
|
109 |
Semantic Analysis of Wikipedia's Linked Data Graph for Entity Detection and Topic Identification Applications / AlemZadeh, Milad, January 2012 (has links)
The Semantic Web and Linked Data community is now making the future of the Web a reality. The standards and technologies defined in this field have opened a strong pathway towards a new era of knowledge management and representation for the computing world. The data structures and semantic formats introduced by the Semantic Web standards offer a platform for all the data and knowledge providers in the world to present their information in a free, publicly available, semantically tagged, inter-linked, and machine-readable structure. As a result, the adoption of the Semantic Web standards by data providers creates numerous opportunities for the development of new applications which were not possible or, at best, hardly achievable using the current state of the Web, which mostly consists of unstructured or semi-structured data with minimal semantic metadata attached, tailored mainly for human readability.
This dissertation introduces a framework for effective analysis of Semantic Web data towards the development of solutions for a series of related applications. To build such a framework, Wikipedia is chosen as the main knowledge resource, largely because it is the central dataset in the Linked Data community. In this work, Wikipedia and its Semantic Web version, DBpedia, are used to create a semantic graph which constitutes the knowledge base and back-end foundation of the framework. The semantic graph introduced in this research consists of two main concepts: entities and topics. The entities act as the knowledge items, while the topics form the class hierarchy of the knowledge items. Therefore, by assigning entities to various topics, the semantic graph presents all the knowledge items in a categorized hierarchy ready for further processing.
Furthermore, this dissertation introduces various analysis algorithms over the entity and topic graphs which can be used in a variety of applications, especially in the natural language understanding and knowledge management fields. After explaining the details of the analysis algorithms, a number of possible applications are presented and potential solutions to them are provided. The main themes of these applications are entity detection, topic identification, and context acquisition. To demonstrate the efficiency of the framework's algorithms, some of the applications are developed and comprehensively studied, with detailed experimental results compared against appropriate benchmarks. These results show how the framework can be used in different configurations and how different parameters affect the performance of the algorithms.
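A toy sketch of the entity-topic graph idea follows, with hand-written entity-to-topic assignments and a child-to-parent topic hierarchy standing in for the DBpedia-derived graph; the simple voting scheme for topic identification is an illustrative assumption, not the dissertation's algorithm.

```python
from collections import Counter

# Toy stand-ins for the DBpedia-derived graph: entities mapped to topics,
# and a child -> parent hierarchy over the topics.
ENTITY_TOPICS = {
    "Python": ["Programming languages"],
    "Java": ["Programming languages"],
    "Linux": ["Operating systems"],
}
TOPIC_PARENT = {
    "Programming languages": "Computing",
    "Operating systems": "Computing",
}

def topics_with_ancestors(entity):
    """All topics of an entity, including their ancestors in the hierarchy."""
    seen = []
    for t in ENTITY_TOPICS.get(entity, []):
        while t is not None and t not in seen:
            seen.append(t)
            t = TOPIC_PARENT.get(t)  # walk up the class hierarchy
    return seen

def identify_topic(entities):
    """Topic identification: the most-voted (possibly ancestral) topic
    among the entities detected in a text."""
    votes = Counter(t for e in entities for t in topics_with_ancestors(e))
    return votes.most_common(1)[0][0] if votes else None
```

With entities from different leaf topics, the votes accumulate at their common ancestor, which is how a shared context can emerge from heterogeneous entities.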
|