About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
61

Tracing a Technological God: A Psychoanalytic Study of Google and the Global Ramifications of its Media Proliferation

Unknown Date (has links)
This dissertation draws a connection between the human drive, as described by psychoanalysis, to construct God and the construction of the technological entity Google. Google constitutes the extension of the God of the early Christian period into the twenty-first century. Through an examination of significant religious and theological texts by major theologians (Augustine, Aquinas, Luther, Calvin, etc.) that explain the nature of God, the analogous relationship of God to Google opens a psychoanalytic discourse that answers questions on the current state of human mediation with the world. The work of Freud and, more significantly, Lacan connects the human creation of God, ex nihilo, to Google's godly qualities and behaviors (omniscience, omnipotence, omnipresence, and omnibenevolence). This illustrates the powerful motivation behind the creation of an all-encompassing physical/earthly entity that includes the immaterial properties of God. Essentially, Google operates as the extension or replacement of the long-reigning God in Western culture. Furthermore, the advent of science and technology through rationalism (as outlined by Nietzsche) results in the death of the metaphysical God and the ascension of the technological God, of which Google offers an apt example for study. Moreover, the work of Jean Baudrillard and Marshall McLuhan further comments on Google as the technological manifestation of God, particularly in its media formulations. Finally, the dissertation concludes with a review that highlights future research, foreseeing the death of Google by the same rational method of inquiry by which the death of God occurred at the end of the nineteenth century. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2017. / FAU Electronic Theses and Dissertations Collection
62

The mass collaboration of human flesh search in China

Ge, Shuai January 2011 (has links)
University of Macau / Faculty of Social Sciences and Humanities / Department of Communication
63

An investigation into the web searching strategies used by postgraduate students at the University of KwaZulu-Natal, Pietermaritzburg campus.

Civilcharran, Surika. 01 November 2013 (has links)
The purpose of this mixed methods study was to investigate the Web search strategies used by postgraduate students at the University of KwaZulu-Natal, Pietermaritzburg campus, to retrieve information from the Web, in order to address the weaknesses of undergraduate students with regard to their Web searching strategies. The study attempted to determine the Web search tactics used by postgraduate students, the Web search strategies (i.e., combinations of tactics) they used, how they determined whether their searches were successful, and the search tool they preferred. In addition, the study attempted to contribute toward building a set of best practices for searching the Web. The sample consisted of 331 postgraduate students, yielding a response rate of 95%. The study involved a two-phased approach, adopting a survey in Phase 1 and interviews in Phase 2. Proportionate stratified random sampling was used, with the population divided into five mutually exclusive groups (postgraduate diploma, postgraduate certificate, Honours, Master's and PhD). A pre-test was conducted with ten postgraduate students from the Pietermaritzburg campus. The study revealed that the majority of postgraduate students had been searching the Web for six years or longer and that most searched the Web for information for five to less than ten hours a week. Most respondents gained their knowledge of Web searching through experience, and only a quarter had been given formal training on Web searching. The Web search strategies explored contribute to a set of best practices, since interviewees were selected on the basis of the highest number of search tactics used and several years of searching experience. The study was also able to identify the most preferred Web search tool. It is envisaged that undergraduate students can follow these search strategies to improve their information retrieval. This finding could also benefit librarians in developing training modules that help undergraduate students use these Web search tools more efficiently. The final outcome of the study was an adaptation of Bates' (1979) model of Information Search Tactics to suit information searching on the Web. / Thesis (M.Com.)-University of KwaZulu-Natal, Pietermaritzburg, 2012.
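The proportionate stratified random sampling described in this abstract can be illustrated with a short sketch. The Python snippet below allocates an overall sample of 331 across the five strata in proportion to hypothetical stratum sizes; the enrolment figures are invented for illustration and are not taken from the study.

    import math

    # Hypothetical stratum sizes; the study's actual enrolment figures are not given here.
    strata = {
        "postgraduate diploma": 420,
        "postgraduate certificate": 180,
        "Honours": 900,
        "Master's": 610,
        "PhD": 240,
    }

    def proportionate_allocation(strata, total_sample):
        """Allocate total_sample across strata in proportion to stratum size."""
        population = sum(strata.values())
        # Largest-remainder rounding so the allocations sum exactly to total_sample.
        raw = {name: total_sample * size / population for name, size in strata.items()}
        alloc = {name: math.floor(x) for name, x in raw.items()}
        shortfall = total_sample - sum(alloc.values())
        for name in sorted(raw, key=lambda n: raw[n] - alloc[n], reverse=True)[:shortfall]:
            alloc[name] += 1
        return alloc

    allocation = proportionate_allocation(strata, total_sample=331)
    print(allocation)
    # Members of each stratum are then drawn at random, e.g. with
    # random.sample(stratum_members[name], allocation[name]).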
64

Dealing with Geographic Information in Location-Based Search Engines

Mr Saeid Asadi Unknown Date (has links)
No description available.
65

Assigning related categories to user queries

He, Miao. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Department of Computer Science, Thomas J. Watson School of Engineering and Applied Science, 2006. / Includes bibliographical references.
66

Evaluating and comparing search engines in retrieving text information from the web

Weldeghebriel, Zemichael Fesahatsion 03 1900 (has links)
Thesis (MPhil)--Stellenbosch University, 2004 / ENGLISH ABSTRACT: With the introduction of the Internet and the World Wide Web (www), information can be easily accessed and retrieved from the Web using information retrieval systems such as web search engines, or simply search engines. A number of search engines have been developed to provide access to the resources available on the Web and to help users retrieve relevant information from it. In particular, they are essential for finding text information on the Web for academic purposes. But how effective and efficient are those search engines in retrieving the most relevant text information from the Web? Which of them are the most effective and efficient? This study was conducted to answer these questions: to see how effective and efficient search engines are, and which are the most effective and efficient in retrieving the required text information from the Web. Knowing the most effective and efficient search engines is important, because such engines can be used to retrieve a higher number of the most relevant text Web pages with minimum time and effort. The study was based on nine major search engines, four search queries, and relevancy judgments recorded as relevant/partly relevant/non-relevant. Precision and recall were calculated from the experimental or test results and used as the basis for the statistical evaluation and comparison of the retrieval effectiveness of the nine search engines. Duplicated items and broken links were also recorded, examined separately, and used as an additional measure of search engine effectiveness. Response time was also recorded and used as the basis for the statistical evaluation and comparison of the retrieval efficiency of the nine search engines. Additionally, since search engines involve indexing and searching in the information retrieval process, this study first discusses, from a theoretical point of view, how the indexing and searching processes are performed in an information retrieval environment. It also discusses the influence of indexing and searching processes on the effectiveness and efficiency of information retrieval systems in general, and of search engines in particular, in retrieving the most relevant text information from the Web. / AFRIKAANS ABSTRACT: With the advent of the Internet and the World Wide Web (www), information is easily obtainable. It can be retrieved using information retrieval systems such as search engines. A whole range of such search engines has been developed to provide access to the resources available on the Web and to help users retrieve relevant information from the Web. This is particularly essential for obtaining text information for academic purposes. But how effective and efficient are the search engines in retrieving the most relevant text information from the Web? Which of the search engines is the most effective? This study was undertaken to see which search engines are the most effective and efficient in retrieving the required text information. It is important to know which search engine is the most effective, because such an engine can be used to retrieve a higher number of the most relevant text Web pages with a minimum of time and effort. This study is based on the nine major search engines, four search queries, and relevance judgments such as relevant/partly relevant/non-relevant. Precision and recall were calculated on the basis of the experiments and test results, and these were used as the basis for the statistical evaluation and comparison of the retrieval effectiveness of the nine search engines. Duplicated items and broken links were also recorded and examined separately, and were used as an additional measure of effectiveness. Response time was also recorded and used as the basis for the statistical evaluation and comparison of the retrieval efficiency of the nine search engines. Since search engines are involved in indexing and searching processes, this study first discusses, from a theoretical point of view, how indexing and searching processes are carried out in an information retrieval environment. The influence of indexing and searching processes on the efficacy of retrieval systems in general, and of search engines in particular, in retrieving the most relevant text information from the Web is also discussed.
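Precision and recall, the measures this study computes per engine and query, can be sketched briefly. The Python snippet below is our own minimal illustration, assuming graded judgments of the kind described (relevant / partly relevant / non-relevant) and a half-credit weight for partly relevant pages; the weighting is an assumption, not the study's scoring.

    # Relevance judgments for the first n results returned by one engine for one query.
    # Grades follow the study's scheme; the numeric weights are our assumption.
    WEIGHTS = {"relevant": 1.0, "partly": 0.5, "non-relevant": 0.0}

    def precision_recall(judged_results, total_relevant_in_pool):
        """Precision over the retrieved set; recall against a pooled relevant total."""
        retrieved = len(judged_results)
        relevant_retrieved = sum(WEIGHTS[grade] for grade in judged_results)
        precision = relevant_retrieved / retrieved if retrieved else 0.0
        recall = relevant_retrieved / total_relevant_in_pool if total_relevant_in_pool else 0.0
        return precision, recall

    # Example: judgments for the top 10 results of one query (invented data).
    results = ["relevant", "partly", "non-relevant", "relevant", "relevant",
               "partly", "non-relevant", "relevant", "non-relevant", "relevant"]
    p, r = precision_recall(results, total_relevant_in_pool=12)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.60 recall=0.50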
67

Entertainics

Garza, Jesus Mario Torres 01 January 2003 (has links)
Entertainics is a web-based software application that gathers information about DVD players from several websites on the Internet. The purpose of the software is to help users search for DVD players faster and more easily, sparing them from navigating every website that carries this product.
68

The use of browser based resources for literature searches in the postgraduate cohort of the Faculty of Humanities, Development and Social Sciences (HDSS) at the Howard College Campus of the University of KwaZulu-Natal.

Woodcock-Reynolds, Hilary Julian. January 2011 (has links)
The research reflected here examined in depth how one cohort of learners viewed and engaged in literature searches using web-browser-based resources. Action research was employed using a mixed methods approach. The research started with a survey, followed by interviews and a screencast examining practice based on a series of search-related exercises. These were analysed and used as data to establish what deficits existed in the target group's use of the Web to search for literature. Based on the analysis of these instruments, the problem was redefined and a workshop was run to help remediate the deficiencies uncovered. On this basis, a recommendation is made that a credit-bearing course teaching digital research literacy, with information literacy as a component, be made available. / Thesis (M.A.)-University of KwaZulu-Natal, Durban, 2011.
69

Inducing Conceptual User Models

Müller, Martin Eric 29 April 2002 (has links)
User modeling and machine learning for user modeling have both become important research topics and key techniques in recent adaptive systems. One of the most intriguing problems in the 'information age' is how to filter relevant information from the huge amount of available data. This problem is tackled by using models of the user's interest in order to increase precision and discriminate interesting information from uninteresting data. However, any user modeling approach suffers from several major drawbacks. First, user models built by the system need to be inspectable and understandable by the user himself. Second, users in general are not willing to give feedback concerning their satisfaction with the delivered results, and without any evidence of the user's interest it is hard to induce a hypothetical user model at all. Finally, most current systems do not draw a line of distinction between domain knowledge and the user model, which makes the adequacy of a user model hard to determine. This thesis presents the novel approach of conceptual user models. Conceptual user models are easy to inspect and understand, and they allow the system to explain its actions to the user. It is shown that ILP (inductive logic programming) can be applied to the task of inducing user models from feedback, and a method for using mutual feedback for sample enlargement is introduced. Results are evaluated independently of domain knowledge within a clear machine learning problem definition. The whole concept presented is realized in a meta web search engine called OySTER.
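As a loose, toy-scale illustration of inducing an inspectable user-interest model from feedback (not the thesis's actual ILP machinery), the following Python sketch generalizes a conjunctive concept description from rated documents: it intersects the concepts of all positively rated documents and checks that the resulting rule excludes the negatives. The documents, concepts, and labels are invented.

    # Toy induction of a conjunctive user-interest rule from rated documents.
    # Each document is described by a set of concepts; labels come from user feedback.
    rated_docs = [
        ({"machine-learning", "user-modeling", "tutorial"}, True),
        ({"machine-learning", "user-modeling", "survey"}, True),
        ({"sports", "survey"}, False),
        ({"machine-learning", "hardware"}, False),
    ]

    def induce_rule(rated_docs):
        """Most specific conjunction covering all positives; report covered negatives."""
        positives = [concepts for concepts, liked in rated_docs if liked]
        negatives = [concepts for concepts, liked in rated_docs if not liked]
        rule = set.intersection(*positives) if positives else set()
        false_positives = [n for n in negatives if rule <= n]
        return rule, false_positives

    rule, mistakes = induce_rule(rated_docs)
    print("interest rule:", rule)                 # {'machine-learning', 'user-modeling'}
    print("negatives still covered:", mistakes)   # [] -> rule is consistent

The induced rule is itself readable, which mirrors the inspectability requirement the abstract raises: the model can be shown to, and corrected by, the user.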
70

Search Interaction Optimization: A Human-Centered Design Approach

Speicher, Maximilian 20 September 2016 (has links)
Over the past 25 years, search engines have become one of the most important entry points to the World Wide Web, if not the most important one. This development has been primarily due to the continuously increasing number of available documents, which are highly unstructured. Moreover, the general trend is towards classifying search results into categories and presenting them as semantic information that answers users' queries without their having to leave the search engine. With the growing number of documents and with technological enhancements, the needs of users as well as search engines are continuously evolving. Users want to be presented with increasingly sophisticated results and interfaces, while companies have to place advertisements and generate revenue to be able to offer their services for free. To address these needs, it is increasingly important to provide highly usable and optimized search engine results pages (SERPs). Yet existing approaches to usability evaluation are often costly or time-consuming and mostly rely on explicit feedback; they are either not efficient or not effective, and SERP interfaces are commonly optimized primarily from a company's point of view. Moreover, existing approaches to predicting search result relevance, being mostly based on clicks, are not tailored to the evolving kinds of SERPs; for instance, they fail if queries are answered directly on a SERP and no clicks need to happen. Applying human-centered design principles, we propose a solution in the form of a holistic approach that intends to satisfy both searchers and developers. It provides novel means to counteract exclusively company-centric design and to make use of implicit user feedback for efficient and effective evaluation and optimization of usability and, in particular, relevance. We define personas and scenarios from which we infer unsolved problems and a set of well-defined requirements. Based on these requirements, we design and develop the Search Interaction Optimization toolkit. Using a bottom-up approach, we moreover define an eponymous, higher-level methodology. The Search Interaction Optimization toolkit comprises a total of six components. We start with INUIT [1], a novel minimal usability instrument specifically aiming at meaningful correlations with implicit user feedback in the form of client-side interactions; hence, it serves as a basis for deriving usability scores directly from user behavior. INUIT was designed based on reviews of established usability standards and guidelines as well as interviews with nine dedicated usability experts. Its feasibility and effectiveness have been investigated in a user study, and a confirmatory factor analysis shows that the instrument can describe real-world perceptions of usability reasonably well. Subsequently, we introduce WaPPU [2], a context-aware A/B testing tool based on INUIT. WaPPU implements the novel concept of usability-based split testing and enables automatic usability evaluation of arbitrary SERP interfaces based on a quantitative score derived directly from user interactions; for this, usability models are automatically trained and applied using machine learning techniques. In particular, the tool is not restricted to evaluating SERPs but can be used with any web interface. Building on the above, we introduce S.O.S., the SERP Optimization Suite [3], which comprises WaPPU as well as a catalog of best practices [4].
Once an investigated SERP's usability has been found suboptimal based on the scores delivered by WaPPU, corresponding optimizations are automatically proposed from the catalog of best practices. This catalog was compiled in a three-step process involving reviews of existing SERP interfaces and contributions by 20 dedicated usability experts. While the above components address the general usability of SERPs, presenting the most relevant results is specifically important for search engines. Hence, our toolkit contains TellMyRelevance! (TMR) [5], the first end-to-end pipeline for predicting search result relevance based on users' interactions beyond clicks. TMR is a fully automatic approach that collects the necessary information on the client, processes it on the server side, and trains corresponding relevance models using machine learning techniques. Predictions made by these models can then be fed back into the search engine's ranking process, which improves result quality and hence also usability. StreamMyRelevance! (SMR) [6] takes the concept of TMR one step further by providing a streaming-based version: SMR collects and processes interaction data and trains relevance models in near real time. Based on a user study and a large-scale log analysis involving real-world search engines, we have evaluated the components of the Search Interaction Optimization toolkit as a whole, also to demonstrate their interplay. S.O.S., WaPPU, and INUIT were engaged in the evaluation and optimization of a real-world SERP interface; results show that our tools are able to correctly identify even subtle differences in usability, and optimizations proposed by S.O.S. significantly improved the usability of the investigated and redesigned SERP. TMR and SMR were evaluated in a GB-scale interaction log analysis, likewise using data from real-world search engines; our findings indicate that they yield predictions better than those of competing state-of-the-art systems that consider clicks only. Also, a comparison of SMR with existing solutions shows its superiority in terms of efficiency, robustness, and scalability. The thesis concludes with a discussion of the potential and limitations of the above contributions and provides an overview of potential future work.
/ Over the course of the past 25 years, search engines have developed into one of the most important, if not the most important, entry points to the World Wide Web (WWW). This development results above all from the continuously growing number of documents that are available on the WWW yet organized in a highly unstructured way. Moreover, search results are increasingly classified into categories and provided in the form of semantic information that can be consumed directly within the search engine, reflecting a general trend. With the growing number of documents and with technological innovations, the needs of users and search engines alike are constantly changing. Users want to be served ever better search results and interfaces, while search engine companies must place advertisements and make a profit in order to offer their services free of charge. This brings with it the necessity of providing users with highly usable and optimized search engine results pages (SERPs). Common methods for evaluating and optimizing usability, however, are largely costly or time-consuming and are mostly based on explicit feedback. They are thus either not efficient or not effective, which is why optimizations of search engine interfaces are frequently carried out primarily from the company's point of view. Furthermore, existing methods for predicting the relevance of search results, which are largely based on the evaluation of clicks, are not tailored to novel kinds of SERPs; they fail, for example, when queries are answered directly on the results page and the user does not need to click. Based on the principles of user-centered design, we develop a solution to the problems described above in the form of a holistic approach oriented toward users as well as developers. Our solution provides automatic methods to counteract company-centric design and to exploit implicit user feedback for the efficient and effective evaluation and optimization of usability and, in particular, result relevance. We define personas and scenarios from which we derive unsolved problems and concrete requirements. Based on these requirements, we develop a corresponding toolkit, the Search Interaction Optimization toolkit. Using a bottom-up approach, we additionally define an eponymous methodology at a higher level of abstraction. The Search Interaction Optimization toolkit consists of six components in total. First we present INUIT [1], a novel, minimal instrument for determining usability that aims specifically at meaningful correlations with implicit user feedback in the form of client-side interactions. For this reason it serves as the basis for deriving quantitative usability scores directly from user behavior. The instrument was designed on the basis of reviews of established usability standards and guidelines as well as expert interviews. The feasibility and effectiveness of using INUIT were investigated in a user study and further confirmed by a confirmatory factor analysis. Next we describe WaPPU [2], a context-aware tool for conducting A/B tests based on INUIT. It implements the novel concept of usability-based split testing and enables the automatic usability evaluation of arbitrary SERPs based on the quantitative scores mentioned above, which are derived directly from user interactions. To this end, machine learning techniques are applied in order to generate and apply the corresponding usability models automatically. In particular, WaPPU is not restricted to the evaluation of search results pages, but can be applied to any web interface in the form of a web page. Building on this, we describe S.O.S., the SERP Optimization Suite [3], which comprises the WaPPU tool as well as a novel catalog of best practices [4]. As soon as a suboptimal usability score measured by WaPPU is detected, corresponding countermeasures and optimizations for the investigated results page are automatically proposed on the basis of the catalog of best practices.
The catalog was developed in a three-step process comprising a review of existing search results pages as well as adaptation and verification by 20 usability experts. The tools discussed so far focus on the general usability of SERPs, but presenting the results most relevant to the user is especially vital for a search engine. Since relevance is a subset of usability, our toolkit therefore includes the tool TellMyRelevance! (TMR) [5], the first end-to-end solution for predicting search result relevance based on client-side user interactions. TMR is a fully automatic approach that captures the required data on the client, processes it on the server, and provides the corresponding relevance models. The predictions made by these models can in turn be fed into the search engine's ranking process, ultimately leading to an improvement in usability. StreamMyRelevance! (SMR) [6] extends the concept of TMR by providing a streaming-based approach in which the collection and processing of the data, as well as the provision of the relevance models, take place in near real time. Based on extensive user studies with real search engines, we evaluated the toolkit as a whole, also in order to demonstrate the interplay of its individual components. S.O.S., WaPPU, and INUIT were employed to evaluate and optimize a real search results page. The results show that our tools are able to identify even small differences in usability correctly. Moreover, the optimizations proposed by S.O.S. led to a significant improvement in the usability of the investigated and redesigned results page. TMR and SMR were evaluated on data volumes in the double-digit gigabyte range originating from two real hotel booking portals. Both show the potential to deliver better predictions than competing systems that consider only clicks on results. SMR moreover shows clear advantages over all other systems investigated in terms of efficiency, robustness, and scalability. The dissertation concludes with a discussion of the potential and limitations of the research contributions presented and gives an overview of potential future work.
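The interactions-beyond-clicks idea at the heart of TMR and SMR can be made concrete with a small sketch. The following Python example trains a relevance model from client-side interaction features; it is a minimal illustration under our own assumptions (invented feature names, toy data, and scikit-learn's logistic regression), not the actual TMR pipeline.

    # Sketch: predict search-result relevance from client-side interactions beyond clicks.
    # Feature names and data are illustrative assumptions, not TMR's actual features.
    from sklearn.linear_model import LogisticRegression

    # One row per (query, result): [hover_time_s, scroll_depth, clicked, dwell_time_s]
    X = [
        [2.5, 0.9, 1, 45.0],   # hovered, scrolled far, clicked, long dwell
        [0.3, 0.2, 0, 0.0],    # barely noticed
        [4.0, 1.0, 0, 0.0],    # read the snippet; answer on the SERP, no click needed
        [1.0, 0.5, 1, 3.0],    # clicked but bounced quickly
    ]
    y = [1, 0, 1, 0]           # relevance labels from judgments or proxies

    model = LogisticRegression().fit(X, y)

    # Predicted relevance probabilities can be fed back into the ranking process.
    new_interactions = [[3.2, 0.8, 0, 0.0]]
    print(model.predict_proba(new_interactions)[0][1])

Note how the third training row carries a positive label despite having no click, which is exactly the case the abstract says click-only relevance predictors miss.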
