351

RSS Feeds, Browsing and End-User Engagement

West, Mary Beth 01 April 2011 (has links)
Despite the vast amount of research devoted separately to the topics of browsing and Really Simple Syndication (RSS) aggregation architecture, little is known about how end-users engage with RSS feeds and how they browse while using a feed aggregator. This study explores the browsing behaviors end-users exhibit when using RSS and Atom feeds. The researcher analyzed end-users’ browsing experiences and discusses variations in browsing. The researcher observed, tested, and interviewed eighteen (N=18) undergraduate students at the University of Tennessee to determine how end-users engage with RSS feeds. The study evaluates browsing using two task variations: (1) an implicit task with no final goal and (2) an explicit task with a final goal. The researcher observed the participants complete the two tasks and conducted exit interviews, which addressed the end-users’ experiences with Google Reader and provided further explanation of browsing behaviors. Browsing behaviors were analyzed against Bates’ (2007) definitions and characteristics of browsing. The results of this exploratory research provide insights into end-user interaction with RSS feeds.
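For readers unfamiliar with the feed formats the study examines, an RSS 2.0 document is plain XML and can be read with a standard library alone. The sketch below is purely illustrative (the feed content is invented, and this is not the study's instrument); it extracts the item titles an aggregator such as Google Reader would display:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed used only for illustration.
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Library News</title>
    <item><title>New database trials</title><link>http://example.org/1</link></item>
    <item><title>Extended weekend hours</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def item_titles(rss_text):
    """Return the titles of all <item> elements in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(RSS))  # ['New database trials', 'Extended weekend hours']
```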
352

Where Google Scholar Stands on Art: An Evaluation of Content Coverage in Online Databases

Noll, Hannah M. April 2008 (has links)
This study evaluates the content coverage of Google Scholar and three commercial databases (Arts & Humanities Citation Index, Bibliography of the History of Art, and Art Full Text/Art Index Retrospective) on the subject of art history. Each database is tested using a bibliography method and evaluated against Péter Jacsó’s scope criteria for online databases. Of the 472 articles tested, Google Scholar indexed the smallest number of citations (35%), outperformed by the Arts & Humanities Citation Index, which covered 73% of the test set. This content evaluation also examines specific aspects of coverage, finding that, in comparison to the other databases, Google Scholar provides consistent coverage over the time range tested (1975-2008) and considerable access to article abstracts (56%). Google Scholar failed, however, to fully index the most frequently cited art periodical in the test set, Artforum International. Finally, Google Scholar’s total citation count is inflated by a significant percentage (23%) of articles that include duplicate or multiple versions of the same record.
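The bibliography method described above reduces to simple set arithmetic: take a test set of known articles and count how many of them each database indexes. A minimal sketch, with invented article identifiers rather than the study's data:

```python
def coverage(test_set, indexed):
    """Percentage of a test bibliography that a database indexes."""
    hits = sum(1 for article in test_set if article in indexed)
    return 100.0 * hits / len(test_set)

# Invented miniature test bibliography and database index, for illustration.
test_set = {"a1", "a2", "a3", "a4"}
indexed_by_db = {"a1", "a3", "a4"}
print(f"{coverage(test_set, indexed_by_db):.0f}%")  # 75%
```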
353

(Media) Made in China : En komparativ studie om regimkritik och regimsupport i kinesiska Internetbaserade nyhetstidningar / (Media) Made in China : A comparative study of regime criticism and regime support in Chinese Internet-based newspapers

Andersson, Jenny, Österberg, Malin January 2010 (has links)
In recent years, China has undergone extensive development in information technology, and the country's economic growth has increased explosively. As a result, the media climate and the Chinese state's control over the Internet are undergoing major changes. Although the Internet creates a seemingly greater freedom and offers new possibilities, the Chinese state uses the Internet to further tighten its control over the flow of information in the country. Cyberattacks from Chinese IP addresses against the Internet giant Google in early 2010 brought China's Internet policy to the attention of international media, including Swedish ones. By examining English-language Chinese online news articles about Google and Internet censorship in China, we aimed to study how regime support and regime criticism manifest themselves in two Chinese newspapers. Our research questions also included: How is the state's position on the Google and censorship issue visible in the news articles? Does the reporting differ between the privatized, politically independent daily South China Morning Post (SCMP) from Hong Kong and the state's own newspaper People’s Daily (PD) on the mainland? And if so, how? We approached these questions from three theoretical perspectives: an economic one; a political/democratic one; and a technical/practical one. These concern how the Chinese state exercises and maintains its control over the flow of news on the Internet. Using critical discourse analysis and Fairclough's three-dimensional model, we studied the articles at three levels: text, discursive practice, and social practice. The focus is on the first level, the text, which we examined thoroughly through a critical linguistic analysis. Our study showed that explicit regime criticism occurs in neither PD nor SCMP.
The difference we found between the newspapers mainly concerns who is quoted in the articles, and the fact that PD features a present writer who evaluates and comments on the information reported in the article. In PD, representatives of Google are rarely given a voice, whereas they are in SCMP, but responsibility for what they say rests with the source rather than the writer. SCMP thus takes no position on the issue, while PD strongly displays the Communist Party's profile. From our results we conclude that the Chinese state still influences news content in various ways, among them the self-censorship the newspapers practice. We see this in the fact that SCMP is not critical of the state even though it is a privately owned and politically independent company. From a broader perspective, the results show that the media climate in China is not uniform; there are certain geographical differences, for example between the Chinese mainland and Hong Kong.
354

Talents : The Key for Successful Organizations

Ballesteros Rodriguez, Sara, de la Fuente Escobar, Inmaculada January 2010 (has links)
Given today's rapidly changing environment and the need for organizations to differentiate themselves, this paper shows how companies can achieve a sustainable competitive advantage through talented people, using talent management strategies. The theoretical framework explains our understanding of talent management, talented people, and creativity as a talent, and provides the tools needed to analyse a real talent management strategy. The analysis shows that a talent management strategy has to fit both the corporate strategy and the corporate culture, and that there are countless ways to develop talent management activities, depending on the organization that implements them. As an illustration, we study two companies, Zerogrey and Google, which differ greatly from each other but both have a talent management strategy.
355

Informationsöverflöd på den svenska fondmarknaden? / Information overload on the Swedish investment fund market?

Bank, Joakim, Jansson, Eric January 2009 (has links)
No description available.
356

An Unsupervised Approach to Detecting and Correcting Errors in Text

Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches for text error detection and correction are based on a conventional domain-dependent background dictionary that represents a fixed and static collection of correct words of a given language; as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction handle only one, or at best a very few, types of errors. The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches, and to answer the following questions: Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature? What is the magnitude of error coverage, in terms of the number of errors that can be corrected? We conclude that (1) an unsupervised approach can efficiently detect and correct a text containing multiple errors of both syntactic and semantic nature. Error types include: real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many of the grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words of the text. Although this is not the upper limit, it is what is practically feasible. We use engineering approaches to answer the first question and theoretical approaches to answer and support the second. We show that finding inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than other approaches to detecting and correcting errors.
Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct them. We test the robustness of the proposed approach separately for some individual error types, and then for all types of errors. The approach is language-independent and can be applied to other languages, as long as n-gram data are available. The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in the context of many Natural Language Processing (NLP) related tasks, may present an interesting array of NLP-related problem-solving strengths.
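As a hedged illustration of the general n-gram idea (a toy sketch, not the thesis's actual algorithm), a real-word spelling error can be corrected by choosing, from a confusion set, the candidate whose trigram with the surrounding words is most frequent in a corpus:

```python
# A toy trigram frequency table standing in for a large n-gram data set;
# the counts are invented for illustration.
TRIGRAM_COUNTS = {
    ("a", "piece", "of"): 900,
    ("a", "peace", "of"): 12,
}

def pick_candidate(left, candidates, right, counts):
    """Choose the candidate whose (left, candidate, right) trigram is most frequent."""
    return max(candidates, key=lambda w: counts.get((left, w, right), 0))

# Correct the real-word error in "a peace of cake".
best = pick_candidate("a", {"peace", "piece"}, "of", TRIGRAM_COUNTS)
print(best)  # piece
```

The same lookup generalizes to the other error types the thesis lists, since detection (an unexpectedly rare n-gram) and correction (the most frequent alternative) use the same frequency table.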
357

A critical evaluation of two on-line machine translation systems : Google & Systran / Google and Systran

Wang, Shuang January 2010 (has links)
University of Macau / Faculty of Social Sciences and Humanities / Department of English
358

Development of a riverbank asset management system for the city of Winnipeg

James, Alena 07 April 2009 (has links)
The City of Winnipeg, located at the confluence of the Red and Assiniboine Rivers, has over 240 km of natural riverbank property. The increased frequency and magnitude of flooding along the Red and Assiniboine Rivers over the past decade appears to have influenced the number of slope failures along riverbank property, resulting in the loss of both public green space and privately owned land. The loss of private and public property adjacent to the river has led to the loss of valuable real estate and public parkland amenities. To ensure that riverbank property is preserved for future generations, the City of Winnipeg wants to increase the stability of certain reaches of publicly owned riverbank property along the Red and Assiniboine Rivers that are prone to slope movements. Extensive research has been conducted on slope stability problems in the Winnipeg area, but a transparent prioritization procedure for the remediation of riverbank stability problems has not existed. Therefore, a Riverbank Asset Management System (RAMS) was developed for publicly owned riverbank property to prioritize riverbank slope stability problems along the Red and Assiniboine Rivers. The RAMS provides the City of Winnipeg with a rational approach for determining risk levels for specific reaches of the Red and Assiniboine Rivers. The calculated risk levels allow the City to develop recommended response levels for slope stability remediation projects in a fiscally responsible manner with minimal personal and political influences. This system permits the City to facilitate timely and periodic reviews of priority sites as riverbank conditions and input parameters change. / May 2009
360

“Översätt den här sidan” : The advancement of Google Translate and how it performs in the online translation of compound and proper nouns from Swedish into English

Stefansson, Ida January 2011 (has links)
The English translation of the Swedish compound fönsterbräda into windowsill, or the proper noun Danmark into Denmark, makes perfect sense. But how about the compound fossilbränslefri as simply fossil fuel, or the name Mälaren as Lake? All four of these translations have been produced with the help of automatic machine translation. The aim of this paper is to present the expanding field of application of machine translation and some issues related to this type of translation. More specifically, the study looked at Google Translate, one of the most commonly used machine translation systems online, and how it responds to the two linguistic categories selected for this small study: compound nouns and proper nouns. Besides these categories, two different text types were chosen: general information articles from a local authority website (Stockholm City) and patent texts, both of which belong to the expanding field of application of Google Translate. The results of the study show that, in terms of compound nouns, neither text type proved significantly better suited for machine translation than the other, and neither had an error rate below 20%. Most errors involved words erroneously omitted from the English output or words translated incorrectly for the context. As for proper nouns, the patent texts contained none, so no error analysis could be made, whereas the general information articles included 76 proper nouns (out of a total word count of 810). The most prominent error was the Swedish form not being preserved in the English output where it should have been, e.g. translating Abrahamsberg as Abraham rock. The errors in both linguistic categories had varying impact on the meaning of the texts: some distorted the meaning of the word completely, while others were of minor importance.
This factor, along with the fact that a reader's language and subject knowledge influence how comprehensible the text is perceived to be, makes it difficult to evaluate the full impact of the various errors. It can, however, be said that patent texts could be a better option for machine translation than general information articles with respect to proper nouns, as this text type is likely to contain no or very few proper nouns.
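The per-category error rates reported above come down to the same simple ratio for each linguistic category. A minimal sketch with invented annotations (not the study's data), where each word is paired with a flag indicating whether its machine translation was judged erroneous:

```python
def error_rate(items):
    """Percentage of annotated items marked as mistranslated."""
    errors = sum(1 for _, is_error in items if is_error)
    return 100.0 * errors / len(items)

# Invented (compound, was_mistranslated) annotations, for illustration only.
compounds = [("fönsterbräda", False), ("fossilbränslefri", True),
             ("nyhetstidning", False), ("järnvägsstation", False)]
print(f"{error_rate(compounds):.0f}%")  # 25%
```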
