191.
Learning in public: information literacy and participatory media
Forte, Andrea 06 July 2009 (has links)
This research examines new systems of information production that are made possible by participatory media. Such systems bring about two critical information literacy needs for the general public: to understand new systems in order to assess their products and to become adept participants in the construction of public information spaces. In this dissertation, I address both of these needs and propose a view of information literacy that situates the information literate as both consumer and producer. First, I examine a popular example of a new publishing system, Wikipedia, and present research that explains how the site is organized and maintained. I then turn my attention to the classroom and describe three iterations of design-based research in which I built new wiki tools to support publication activities and information literacy learning in formal educational contexts. I use the rhetorical notion of genre as an analytic lens for studying the use and impact of these new media in schools. Classroom findings suggest that the affordances of a wiki as an open, transparent publishing medium can support groups of writers in building a shared understanding of genre as they struggle with an unfamiliar rhetorical situation. I also demonstrate how writing on a public wiki for a broad audience was a particularly useful writing experience that brought about opportunities for reflection and learning. These opportunities include transforming the value of citation, creating a need to engage deeply with content, and providing both a need and a foundation for assessing information resources.
192.
An investigation of Wikipedia translation as an additive pedagogy for Oshikwanyama first language learning
Hautemo, Aletta Mweneni January 2014 (has links)
The integration of Information and Communication Technology in the indigenous language classroom lags behind that in other subjects. In many ways, indigenous language teachers find it difficult, and to some extent impossible, to integrate ICT into their classroom activities. The focus of this study is to explore the ways in which ICT could be used as a learning tool in an Oshikwanyama First Language classroom. I investigated the use of Wikipedia translation as an additional teaching and learning tool, concentrating on the impact that ICT tools have on learning and on the motivation they give learners to learn Oshikwanyama. This qualitative case study was conducted in an urban school in northern Namibia. ICT adoption at the school is good, as there is a full-fledged computer lab with unlimited wireless internet access; this was a requirement for the project, to enable the participants to work online. I purposefully chose higher-level learners (Secondary phase) for this study. I conducted a survey with them on their access to and use of ICT devices in their daily lives, and thereafter conducted a basic computer workshop and a Wikipedia translation project with them. My research findings show that although the use of ICT is part of the learners’ lives, most of the communication through ICT devices is done in English rather than in Oshikwanyama. Wikipedia translation offers a stimulating learning platform for learners to learn Oshikwanyama and English at the same time, and this improved their performance in both languages. Furthermore, the Wikipedia translation, which was done collaboratively, gave learners the confidence to work with other learners to create knowledge. Lastly, Wikipedia translation motivates learners to learn Oshikwanyama and to use it in their daily ICT interaction.
193.
Entrepôt de textes : de l'intégration à la modélisation multidimensionnelle de données textuelles / Text Warehouses: from integration to the multidimensional modeling of textual data
Aknouche, Rachid 26 April 2014 (has links)
Le travail présenté dans ce mémoire vise à proposer des solutions aux problèmes d'entreposage des données textuelles. L'intérêt porté à ce type de données est motivé par le fait qu'elles ne peuvent être intégrées et entreposées par l'application de simples techniques employées dans les systèmes décisionnels actuels. Pour aborder cette problématique, nous avons proposé une démarche pour la construction d'entrepôts de textes. Elle couvre les principales phases d'un processus classique d'entreposage des données et utilise de nouvelles méthodes adaptées aux données textuelles. Dans ces travaux de thèse, nous nous sommes focalisés sur les deux premières phases qui sont l'intégration des données textuelles et leur modélisation multidimensionnelle. Pour mettre en place une solution d'intégration de ce type de données, nous avons eu recours aux techniques de recherche d'information (RI) et du traitement automatique du langage naturel (TALN). Pour cela, nous avons conçu un processus d'ETL (Extract-Transform-Load) adapté aux données textuelles. Il s'agit d'un framework d'intégration, nommé ETL-Text, qui permet de déployer différentes tâches d'extraction, de filtrage et de transformation des données textuelles originelles sous une forme leur permettant d'être entreposées. Certaines de ces tâches sont réalisées dans une approche, baptisée RICSH (Recherche d'information contextuelle par segmentation thématique de documents), de prétraitement et de recherche de données textuelles. D'autre part, l'organisation des données textuelles à des fins d'analyse est effectuée selon TWM (Text Warehouse Modelling), un nouveau modèle multidimensionnel adapté à ce type de données. Celui-ci étend le modèle en constellation classique pour prendre en charge la représentation des textes dans un environnement multidimensionnel. Dans TWM, il est défini une dimension sémantique conçue pour structurer les thèmes des documents et pour hiérarchiser les concepts sémantiques. 
Pour cela, TWM est adossé à une source sémantique externe, Wikipédia, en l'occurrence, pour traiter la partie sémantique du modèle. De plus, nous avons développé WikiCat, un outil pour alimenter la dimension sémantique de TWM avec des descripteurs sémantiques issus de Wikipédia. Ces deux dernières contributions complètent le framework ETL-Text pour constituer le dispositif d'entreposage des données textuelles. Pour valider nos différentes contributions, nous avons réalisé, en plus des travaux d'implémentation, une étude expérimentale pour chacune de nos propositions. Face au phénomène des données massives, nous avons développé dans le cadre d'une étude de cas des algorithmes de parallélisation des traitements en utilisant le paradigme MapReduce que nous avons testés dans l'environnement Hadoop. / The work presented in this thesis aims to propose solutions to the problems of warehousing textual data. The interest in textual data is motivated by the fact that it cannot be integrated and warehoused with the techniques currently employed in decision-support systems. To overcome this problem, we proposed a text-warehousing approach that covers the main phases of a data warehousing process adapted to textual data. We focused specifically on the integration of textual data and on its multidimensional modeling. For textual data integration, we used information retrieval (IR) and natural language processing (NLP) techniques. We thus proposed an integration framework, called ETL-Text, an ETL (Extract-Transform-Load) process suited to textual data. ETL-Text performs the extraction, filtering and transformation of the original textual data into a form that allows it to be warehoused. Some of these tasks are carried out by our RICSH approach (contextual information retrieval by topic segmentation of documents) for pre-processing and searching textual data.
The organization of textual data for analysis, on the other hand, is carried out according to TWM (Text Warehouse Modelling), a new multidimensional model suited to textual data. It extends the classical constellation model to support the representation of texts in a multidimensional environment. TWM defines a semantic dimension designed to structure document topics and to organize semantic concepts into a hierarchy. TWM relies on an external semantic source, Wikipedia in this case, to handle the semantic part of the model. Furthermore, we developed WikiCat, a tool that feeds the TWM semantic dimension with semantic descriptors drawn from Wikipedia. These last two contributions complement the ETL-Text framework to constitute the textual data warehousing system. To validate our contributions, we carried out, in addition to the implementation work, an experimental study for each of our proposals. To address massive data, we developed, as part of a case study, parallel processing algorithms using the MapReduce paradigm, which we tested in the Hadoop environment.
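The MapReduce-style parallelisation mentioned in this abstract can be sketched in miniature. The following is a single-machine simulation of the map, shuffle and reduce phases over toy documents, counting term occurrences; it is an illustrative assumption about the general technique, not the thesis's actual Hadoop jobs, and all names are invented for the example:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(doc_id, text):
    """Map step: emit a (term, 1) pair for every token in one document."""
    for token in text.lower().split():
        yield (token, 1)

def reduce_phase(term, counts):
    """Reduce step: sum the partial counts collected for one term."""
    return term, sum(counts)

def run_job(documents):
    """Single-machine simulation of the map / shuffle / reduce phases."""
    pairs = []
    for doc_id, text in documents.items():        # map over all documents
        pairs.extend(map_phase(doc_id, text))
    pairs.sort(key=itemgetter(0))                 # shuffle: group pairs by key
    results = {}
    for term, group in groupby(pairs, key=itemgetter(0)):
        key, total = reduce_phase(term, (c for _, c in group))
        results[key] = total
    return results

docs = {
    "d1": "text warehouse for textual data",
    "d2": "textual data in a warehouse",
}
counts = run_job(docs)
print(counts["textual"], counts["warehouse"])  # 2 2
```

In a real Hadoop deployment the shuffle is performed by the framework across machines; the sort-then-group step above only stands in for it on one machine.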
194.
Rozhraní pro aspektové vyhledávání v indexu Wikipedie / Interfaces for Faceted Search in Indexed Wikipedia
Cilip, Peter January 2018 (has links)
The main aim of this thesis is to study existing systems of faceted search and to design our own system based on faceted search over the index of Wikipedia. The thesis surveys existing faceted-search solutions, and our own system, the output of this thesis, was designed to avoid their mistakes and shortcomings. The designed system is described in terms of both design and implementation. The product of the thesis is an application interface and a graphical interface. The application interface can be integrated into an existing information system, where it can be used as a multidimensional filter. The graphical interface demonstrates how the application interface can be used in a real system. The system was created with a focus on usefulness and simplicity, for use in existing information systems.
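The idea of an application interface acting as a multidimensional filter can be illustrated with a minimal sketch: facet value counts are computed over a collection, and a set of selected facet values narrows the result list. This is a hypothetical in-memory illustration of the general technique, not the thesis's actual implementation:

```python
from collections import Counter

def facet_counts(items, facet_fields):
    """Count the values of each facet field across the collection."""
    counts = {field: Counter() for field in facet_fields}
    for item in items:
        for field in facet_fields:
            counts[field][item[field]] += 1
    return counts

def apply_facets(items, selections):
    """Keep only the items matching every currently selected facet value."""
    return [item for item in items
            if all(item[field] == value for field, value in selections.items())]

articles = [
    {"title": "Prague", "category": "city",  "language": "cs"},
    {"title": "Brno",   "category": "city",  "language": "cs"},
    {"title": "Vltava", "category": "river", "language": "cs"},
]
print(facet_counts(articles, ["category"])["category"])  # Counter({'city': 2, 'river': 1})
print([a["title"] for a in apply_facets(articles, {"category": "river"})])  # ['Vltava']
```

A production system would compute the counts inside the search index rather than in application memory, but the contract between filter and result list is the same.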
195.
Die aktuelle Enzyklopädie
Pentzold, Christian; Seidenglanz, Sebastian 06 August 2008 (has links)
Der Beitrag vergleicht die klassische Lexikonproduktion mit den Funktionen und Eigenschaften der Online-Enzyklopädie Wikipedia. Dabei erläutert er zunächst die historische Dimension des enzyklopädischen Prinzips. Daran anschließend beschreibt er den Aufbau und die Arbeitsprozesse einer Lexikonredaktion, wobei besonders auf die Selektionslogik und die Kriterien der Lexikonwürdigkeit eingegangen wird. Neben Fragen der Informationsqualität werden diese Punkte in einem weiteren Schritt auch bei der Erläuterung der Arbeitsweise und Struktur der Wikipedia angesprochen. Abschließend skizziert der Beitrag an einem Beispiel die Prozesse und Dynamiken der zeitlich nur geringfügig versetzten Artikelproduktion im Anschluss eines aktuellen Ereignisses. / The paper compares the classical production of printed encyclopedias with the functions and features of their online counterpart, Wikipedia. It first addresses the historical dimension of the encyclopedic principle. On that basis, it describes the organisation and work processes of an editorial department, highlighting the logic of selection and the criteria of ‘encyclopedianess’. In a third step, these focal points and questions of information quality are addressed in a discussion of the structure and processes of Wikipedia. Finally, the paper uses an example to outline the dynamics of the almost simultaneous production of an article following a current event.
196.
Genderový pohled na prezentaci žen na Wikipedii / Gender View on Female Presentation on Wikipedia
Stančíková, Ľubica January 2019 (has links)
This thesis examines, characterizes and quantifies how women and men are presented on the Czech and Slovak Wikipedias and elaborates on their possible differences. The theoretical part of the thesis describes the current state of Wikipedia and of media in general and draws attention to the gender inequality within its editorial base, which is linked to a general trend in free-culture communities, where a similar inequality is visible. Furthermore, the theoretical part also deals with the issue of gender in general and summarizes the current state of knowledge and the methods used to study gender on Wikipedia. The practical part of this thesis partly replicates the study First Women, Second Sex: Gender Bias in Wikipedia (Graells-Garrido, Lalmas and Menczer, 2015), using RStudio to perform basic text mining and a quantitative analysis of the biographical texts of men and women in both languages, tracking word frequencies in selected word categories, namely Gender, Reference to the opposite sex, Family and family status, and Career. Overall, we analysed 24 510 Slovak and 110 866 Czech biographical articles, and our findings confirmed an imbalance and stereotyping in the presentation of women on Wikipedia in both languages.
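The word-frequency tracking described above can be sketched as follows: occurrences of category terms are counted across a set of biography texts and normalised by the total token count. The term lists and the per-1 000-tokens normalisation are illustrative assumptions, not the study's actual word categories or method (the study used RStudio; Python is used here only for the sketch):

```python
from collections import Counter

# Illustrative term lists -- not the study's actual word categories
FAMILY_TERMS = {"married", "wife", "husband", "children", "family"}
CAREER_TERMS = {"career", "worked", "published", "elected", "founded"}

def category_frequencies(biographies, term_sets):
    """Occurrences of each category's terms, normalised per 1000 tokens."""
    totals = {name: 0 for name in term_sets}
    token_count = 0
    for text in biographies:
        tokens = text.lower().split()
        token_count += len(tokens)
        counts = Counter(tokens)
        for name, terms in term_sets.items():
            totals[name] += sum(counts[t] for t in terms)
    return {name: 1000 * n / token_count for name, n in totals.items()}

bios = ["she married and raised children", "he worked and published papers"]
freqs = category_frequencies(bios, {"family": FAMILY_TERMS, "career": CAREER_TERMS})
print(freqs)  # {'family': 200.0, 'career': 200.0}
```

Comparing these per-category rates between the male and female biography corpora is what surfaces the kind of imbalance the thesis reports.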
197.
From Diderot to Software Bot: The Evolution of Encyclopedias in Historical Study
Chamberlain, Ryan 26 May 2023 (has links)
No description available.
198.
Frihetens rike : Wikipedianer om sin praktik, sitt produktionssätt och kapitalismen
Lund, Arwid January 2015 (has links)
This study is about voluntary productive activities in digital networks and on digital platforms that are often described as pleasurable. The aim of the study is to relate the peer producers’ perceptions of their activities on a micro level, in terms of play, game, work and labour, to their views on Wikipedia’s relation to capitalism on a macro level; to compare the identified ideological formations on both levels and how they relate to each other; and finally to compare the identified ideological formations with contemporary Marxist theory on cognitive capitalism. The intention is to perform a critical evaluation of the economic role of peer production in society.

Qualitative, semi-structured interviews with eight Wikipedians active within the Swedish language version of Wikipedia constitute the empirical base of the study, together with one public lecture by a Wikipedian on the encyclopaedia and a selection of pages in the encyclopaedia that are text analysed. The transcribed interviews have been analysed using a version of ideological analysis as developed by the Gothenburg School. The views on the peer-producing activities on the micro level have been analysed in a dialectical way but are also grounded in a specific field model.

Six ideological formations are identified in the empirical material. On the micro level: the peripheral, bottom-up and top-down formations; on the macro level: the Californian alikeness ideology, communism of capital and capitalism of communism. Communism of capital has two sides: one stresses the synergies and the other the conflicts between the two phenomena. The formations on the macro level conform broadly to contemporary Marxist theory, but there are important differences as well.

The study results in the hypothesis that the critical side of communism of capital, together with the peripheral and bottom-up formations, could help to further a more sustainable capitalism of communism and counteract a deeper integration of the top-down formation with the Californian alikeness ideology. The latter is the main risk of the capitalist co-optation of peer production that is underway, as the manifestly dominant formations on the macro level are the Californian alikeness ideology and communism of capital.

© 2015 Arwid Lund, used under a Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 license: http://creativecommons.org/licenses/by-nc-nd/3.0/
199.
Information triage: dual-process theory in credibility judgments of web-based resources
Aumer-Ryan, Paul R. 29 September 2010 (has links)
This dissertation describes the credibility judgment process using social psychological theories of dual-processing, which state that information processing outcomes are the result of an interaction “between a fast, associative information-processing mode based on low-effort heuristics, and a slow, rule-based information processing mode based on high-effort systematic reasoning” (Chaiken & Trope, 1999, p. ix). Further, this interaction is illustrated by describing credibility judgments as a choice between examining easily identified peripheral cues (the messenger) and content (the message), leading to different evaluations in different settings.
The focus here is on the domain of the Web, where ambiguous authorship, peer-produced content, and the lack of gatekeepers create an environment where credibility judgments are a necessary routine in triaging information. It reviews the relevant literature on existing credibility frameworks and the component factors that affect credibility judgments. The online encyclopedia (instantiated as Wikipedia and Encyclopedia Britannica) is then proposed as a canonical form to examine the credibility judgment process.
The two main claims advanced here are (1) that information sources are composed of both message (the content) and messenger (the way the message is delivered), and that the messenger impacts perceived credibility; and (2) that perceived credibility is tempered by information need (individual engagement). These claims were framed by the models proposed by Wathen & Burkell (2002) and Chaiken (1980) to forward a composite dual process theory of credibility judgments, which was tested by two experimental studies. The independent variables of interest were: media format (print or electronic); reputation of source (Wikipedia or Britannica); and the participant’s individual involvement in the research task (high or low).
The results of these studies encourage a more nuanced understanding of the credibility judgment process by framing it as a dual-process model, and showing that certain mediating variables can affect the relative use of low-effort evaluation and high-effort reasoning when forming a perception of credibility. Finally, the results support the importance of messenger effects on perceived credibility, implying that credibility judgments, especially in the online environment, and especially in cases of low individual engagement, are based on peripheral cues rather than an informed evaluation of content.
200.
Vyhledávání informací v české Wikipedii / Information Retrieval in Czech Wikipedia
Balgar, Marek January 2011 (has links)
The main task of this Master's thesis is to understand questions of information retrieval and text classification. The main research is focused on text data, semantic dictionaries and especially the knowledge inferred from Wikipedia. The thesis also describes the implementation of a querying system based on this knowledge. Finally, properties and possible improvements of the system are discussed.
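A querying system of the kind described here can be sketched with classic TF-IDF ranking over an inverted term-count index. This is an illustrative assumption about the general approach, not the thesis's actual implementation:

```python
import math
from collections import Counter

def tfidf_index(documents):
    """Build per-document term frequencies and corpus-wide IDF weights."""
    tokenized = {doc: text.lower().split() for doc, text in documents.items()}
    df = Counter()                       # document frequency of each term
    for tokens in tokenized.values():
        df.update(set(tokens))
    n = len(documents)
    idf = {term: math.log(n / df[term]) for term in df}
    tf = {doc: Counter(tokens) for doc, tokens in tokenized.items()}
    return tf, idf

def search(query, tf, idf):
    """Rank documents by the summed TF-IDF weight of the query terms."""
    terms = query.lower().split()
    scores = {}
    for doc, counts in tf.items():
        score = sum(counts[t] * idf.get(t, 0.0) for t in terms)
        if score > 0:
            scores[doc] = score
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "praha": "praha is the capital of the czech republic",
    "brno": "brno is a city in the czech republic",
}
ranking = search("capital", *tfidf_index(docs))
print(ranking)  # ['praha']
```

Note that terms occurring in every document (here "czech", "republic") receive an IDF of zero and therefore contribute nothing to the ranking, which is the behaviour that makes TF-IDF useful for distinguishing documents.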