171

Multi-Agent User-Centric Specialization and Collaboration for Information Retrieval

Mooman, Abdelniser January 2012 (has links)
The amount of information on the World Wide Web (WWW) is growing rapidly in both pace and topic diversity. This has made it increasingly difficult, and often frustrating, for information seekers to retrieve the content they are looking for, because information retrieval systems (e.g., search engines) are unable to decipher the relevance of retrieved information to the seeker's actual need. This issue can be decomposed into two aspects: 1) Variability of information relevance across information seekers. Different information seekers may enter the same search text, or keywords, but expect completely different results. It is therefore imperative that information retrieval systems incorporate a model of the information seeker in order to estimate the relevance and context of use of information before presenting results. In this context, a model means the capture of trends in the information seeker's search behaviour; this is what many researchers refer to as personalized search. 2) Information diversity. Information available on the World Wide Web today spans a multitude of inherently overlapping topics, and it is difficult for any information retrieval system to decide effectively on the relevance of the information retrieved in response to a seeker's query. For example, an information seeker who wishes to use the WWW to learn about a cure for a certain illness would receive a more relevant answer if the search engine were specialized in such topic domains; this is what is referred to in WWW nomenclature as 'specialized search'. This thesis maintains that an information seeker's search is not completely random and therefore tends to exhibit consistent patterns of behaviour. Nonetheless, this behaviour, despite being consistent, can be quite complex to capture. To capture it, the thesis proposes a Multi-Agent Personalized Information Retrieval with Specialization Ontology (MAPIRSO). MAPIRSO offers a complete learning framework that models the end user's search behaviour and interests and organizes information into categorized domains so as to ensure maximum relevance of its responses to the end user's queries. Specialization and personalization are accomplished using a group of collaborative agents. Each agent employs a Reinforcement Learning (RL) strategy to capture the end user's behaviour and interests. Reinforcement learning allows the agents to evolve their knowledge of the end user's behaviour and interests as they serve him or her; furthermore, RL allows each agent to adapt to changes in an end user's behaviour and interests. Specialization is the process by which new information domains are created based on existing information topics, allowing new kinds of content to be built exclusively for information seekers. A key characteristic of specialization domains is that they are seeker-centric: intelligent agents create new information based on information seekers' feedback and behaviour. Specialized domains are created by intelligent agents that collect information from a specific domain topic. The task of these specialized agents is to map the user's query to a repository of specific domains in order to present users with relevant information.
Mapping users' queries to only relevant information is thus one of the fundamental challenges in Artificial Intelligence (AI) and machine learning research. Our approach employs intelligent cooperative agents that specialize in building personalized ontology information domains pertaining to each information seeker's specific needs. Specializing and categorizing information into unique domains is a challenge that has been addressed before, and various proposed solutions have been evaluated and adopted to cope with growing information. However, categorizing information into unique domains does not satisfy each individual information seeker: seekers might search for similar topics, but each has different interests. For example, medical information from a specific medical domain has different importance for a doctor and for a patient. The thesis presents a novel solution to growing and diverse information by building seeker-centric specialized information domains that are personalized through information seekers' feedback and behaviour. To address this challenge, the research examines the fundamental components that constitute the specialized agent: an intelligent machine learning system, user input queries, an intelligent agent, and information resources constructed through specialized domains. Experimental work is reported to demonstrate the efficiency of the proposed solution in addressing overlapping information growth. The experiments use extensive user-centric specialized domain topics and employ personalized, collaborative multi-learning agents and ontology techniques, thereby enriching the user's queries and domains. The results show that building specialized ontology domains pertinent to information seekers' needs yields more precise and efficient retrieval than other information retrieval applications and existing search engines.
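The abstract stays at the architectural level, so as a purely illustrative sketch of the kind of reinforcement-learning update it describes - an agent that maintains per-topic interest estimates for one seeker and reinforces them from click feedback - consider the following Python fragment. The class name, topics, epsilon-greedy policy and update rule are assumptions for illustration, not details taken from the thesis.

import random

class InterestAgent:
    # Illustrative only: epsilon-greedy choice over topic domains,
    # with an incremental value update driven by click feedback.
    def __init__(self, topics, alpha=0.1, epsilon=0.2):
        self.q = {t: 0.0 for t in topics}  # estimated interest per topic
        self.alpha = alpha                 # learning rate
        self.epsilon = epsilon             # exploration rate

    def pick_topic(self):
        # Explore occasionally so newly emerging interests can be discovered.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, topic, clicked):
        # Reward 1 if the seeker engaged with a result from this topic, else 0.
        reward = 1.0 if clicked else 0.0
        self.q[topic] += self.alpha * (reward - self.q[topic])

agent = InterestAgent(["medicine", "sports", "finance"])
for clicked in (True, True, False, True):
    topic = agent.pick_topic()
    agent.update(topic, clicked)  # feedback observed for the served topic
print(agent.q)

In MAPIRSO's terms, each collaborating agent would run some such update loop for its own specialization domain, so the profile adapts as the seeker's interests drift.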
172

The use of browser based resources for literature searches in the postgraduate cohort of the Faculty of Humanities, Development and Social Sciences (HDSS) at the Howard College Campus of the University of KwaZulu-Natal.

Woodcock-Reynolds, Hilary Julian. January 2011 (has links)
The research reflected here examined in depth how one cohort of learners viewed and engaged in literature searches using web-browser-based resources. Action research was employed, using a mixed-methods approach. The research started with a survey, followed by interviews and a screencast examining practice based on a series of search-related exercises. These were analysed and used as data to establish what deficits existed in the target group's use of the web to search for literature. Based on the analysis of these instruments, the problem was redefined and a workshop intended to help remediate the deficiencies uncovered was run. On this basis, a recommendation is made that a credit-bearing course teaching digital research literacy, including information literacy as a component, be made available. / Thesis (M.A.)-University of KwaZulu-Natal, Durban, 2011.
174

Návrh obchodně úspěšného webu a trendy v online marketingu / Design of a commercially successful website and online marketing trends

JONÁŠOVÁ, Jana January 2016 (has links)
The aim of this thesis is to show how to design a commercially successful website and what options exist for its subsequent promotion. First, user research and a competitor analysis are carried out, and on this basis a new website is designed and created. After its implementation, the site is promoted with several online marketing tools.
175

Improving the Visibility and the Accessibility of Web Services. A User-Centric Approach.

Drivas, Ioannis C. January 2017 (has links)
The World Wide Web provides organizations of every kind with an established environment for exposing products and services online. However, nothing ensures that the web products or services an organization or enterprise provides will receive proper visibility and accessibility among internet users. Search Engine Optimization (SEO) examines an internet-based system's usability in design, architecture and content in order to improve its visibility and accessibility on the web. A successful SEO process for an internet-based system operated by an organization ensures higher recognition, visibility and accessibility for the web services that the system provides to internet users. The aim of this study is characterized by three axes. On the first axis, an internet-based system and the web services it provides are examined in order to understand their initial visibility and accessibility on the web. On the second axis, the study follows a user-centric approach to how, and in what way, the examined system could be improved based on its users' needs and desires. After the needs and desires that users expressed regarding the system's usability in design, architecture and content have been captured, the third axis follows: the extracted needs and desires are implemented in the system under examination in order to determine whether its visibility and accessibility on the World Wide Web have improved. For the completion of these three axes, the Soft Systems Methodology (SSM) approach is adopted. SSM is an action-oriented process of inquiry that deals with a problematic situation, from finding out about the situation through taking action to improve it. Following an interpretative research approach, ten semi-structured interviews were conducted in order to capture the participants' perceptions and different worldviews regarding the changes they need and desire from the examined system. Moreover, three workshops constitute a cornerstone of this study for implementing systemically desirable and culturally feasible changes that all participants can live with, in order to improve the system's visibility and accessibility in the internet world. The results indicate that adopting the participants' needs and desires improved the usability, visibility and accessibility of the examined internet-based system. Overall, this study first expands knowledge of the process of improving the visibility and accessibility of internet-based systems and their web services, based on a user-centric approach. Second, it serves as a practical toolbox for any kind of organization that intends to improve the visibility and accessibility of its current or potential web services on the World Wide Web.
176

Právo na zapomnění v prostředí internetu / The right to be forgotten on the internet

Jůzová, Jana January 2016 (has links)
The diploma thesis The Right to Be Forgotten on the Internet deals with the functioning of internet search engines, search algorithms, and the impact of the digital footprint that an internet user inevitably leaves. Inseparably linked with this issue are, on the one hand, the protection of personal data in the online environment and, on the other, the constitutionally enshrined right to information and other fundamental rights. The risk of censorship of the internet should not be ignored either. The application of the right to be forgotten adds a whole new dimension to these problems. The right to be forgotten is inferred from the judgment of the European Court of Justice of 13 May 2014 in the case Costeja versus Google Spain, in which an internet user named Mario Costeja González first succeeded with a request for the removal of unflattering information about himself from the results of the search engine Google. This precedent will thus have a big impact on seeking information on the internet in the future, since after the pronouncement of the judgment any European internet user may request the removal of his or her personal data. The thesis aims to analyze the issue of the right to be forgotten in the context of searching for information on the internet in the European internet environment - it means not to be searched on the...
177

Optimizing Search Engine Field Weights with Limited Data: Offline exploration of optimal field weight combinations through regression analysis / Optimering av sökmotorers fältvikter med begränsad data: Offline-utforskning av optimala fältviktskombinationer genom regressionsanalys

Kader, Zino January 2023 (has links)
Modern search engines, particularly those utilizing the BM25 ranking algorithm, offer a multitude of tunable parameters designed to refine search results. Among these parameters, the weight of each searchable field plays a crucial role in enhancing search outcomes. Traditional methods of discovering optimal weight combinations, however, are often exploratory, demanding substantial time and risking the delivery of substandard results during testing. This thesis proposes a streamlined solution: an ordinal-regression-based model specifically engineered to identify optimal weight combinations with minimal data input, within an offline testing environment. The evaluation corpus comprises a comprehensive snapshot of a product search database from Tradera. The top 100 search queries and corresponding search results pages on the Tradera platform were divided into a training set and an evaluation set. The model underwent iterative training on the training set, and subsequent testing on the evaluation set, with progressively increasing amounts of labeled data. This methodological approach allowed examining the model's proficiency in deriving high-performance weight combinations from limited data. The empirical experiments conducted confirmed that the proposed model successfully generated promising weight combinations, even with restricted data, and exhibited robust generalization to the evaluation dataset. In conclusion, this research substantiates the significant potential for enhancing search results by tuning searchable field weights using a regression-based model, even in data-scarce scenarios. / Moderna sökmotorer, i synnerhet sådana som använder rankningsalgoritmen BM25, erbjuder en mängd justerbara parametrar utformade för att förbättra sökresultat. Bland dessa parametrar spelar vikten av varje sökbart fält en avgörande roll för att förbättra sökresultaten. Traditionella metoder för att hitta optimala viktkombinationer är dock ofta utforskande, kräver mycket tid och riskerar att ge undermåliga sökresultat under testningsperioden. Denna avhandling föreslår en strömlinjeformad lösning: en ordinal-regressionsbaserad modell specifikt utvecklad för att identifiera optimala viktkombinationer med minimal träningsdata, inom en offline testmiljö. Utvärderingskorpus består av en omfattande ögonblicksbild av en produktsökdatabas från Tradera. De 100 vanligaste sökfrågorna och motsvarande sökresultatssidor på Traderas plattform delades in i en träningsuppsättning och en utvärderingsuppsättning. Modellen genomgick iterativ träning på träningsuppsättningen, och därefter testning på utvärderingsuppsättningen, med successivt ökande mängder av kategoriserad data. Denna metodologiska strategi möjliggjorde undersökning av modellens förmåga att härleda högpresterande viktkombinationer från begränsad data. De empiriska experimenten som genomfördes bekräftade att den föreslagna modellen framgångsrikt genererade lovande viktkombinationer, även med begränsad data, och uppvisade robust generalisering till utvärderingsdatamängden. Sammanfattningsvis bekräftar denna forskning den betydande potentialen för förbättring av sökresultat genom att justera sökbara fältvikter med hjälp av en regressionsbaserad modell, även i datasnåla scenarion.
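The underlying idea lends itself to a concrete illustration: score each document as a weighted sum of per-field BM25 scores, then compare weight combinations offline against labeled relevance judgments. The sketch below substitutes a brute-force grid for the thesis's ordinal-regression model, and the corpus, field names and weight grid are invented for illustration (this is not Tradera's or the thesis's actual setup).

import math
from itertools import product

def bm25_scores(query, docs, k1=1.2, b=0.75):
    # Plain BM25 over one field of a small in-memory corpus;
    # docs is a list of token lists, one per document.
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    scores = [0.0] * n
    for term in query:
        df = sum(1 for d in docs if term in d)
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        for i, d in enumerate(docs):
            tf = d.count(term)
            scores[i] += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
    return scores

def ranking_quality(weights, field_scores, labels):
    # Fraction of document pairs ordered consistently with the relevance labels.
    combined = [sum(w * s[i] for w, s in zip(weights, field_scores))
                for i in range(len(labels))]
    pairs = [(i, j) for i in range(len(labels))
             for j in range(len(labels)) if labels[i] > labels[j]]
    good = sum(1 for i, j in pairs if combined[i] > combined[j])
    return good / len(pairs) if pairs else 1.0

# Invented two-field corpus and graded relevance labels for one query.
titles = [["red", "bike"], ["used", "bike", "helmet"], ["red", "car"]]
bodies = [["fast", "red", "bike"], ["bike", "accessory", "sale"], ["family", "car"]]
labels = [1.0, 0.3, 0.0]
query = ["red", "bike"]

field_scores = [bm25_scores(query, titles), bm25_scores(query, bodies)]
best = max(product([0.5, 1.0, 2.0, 4.0], repeat=2),
           key=lambda w: ranking_quality(w, field_scores, labels))
print("best (title_weight, body_weight):", best)

A real system would replace the exhaustive grid with the regression model the thesis proposes, which matters once there are many fields and little labeled data.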
178

Search Interaction Optimization / Search Interaction Optimization: Ein nutzerzentrierter Design-Ansatz

Speicher, Maximilian 20 September 2016 (has links) (PDF)
Over the past 25 years, search engines have become one of the most important - if not the most important - entry points to the World Wide Web. This development has been primarily due to the continuously increasing amount of available documents, which are highly unstructured. Moreover, the general trend is towards classifying search results into categories and presenting them in terms of semantic information that answers users' queries without having to leave the search engine. With the growing amount of documents and technological enhancements, the needs of users as well as search engines are continuously evolving. Users want to be presented with increasingly sophisticated results and interfaces while companies have to place advertisements and make revenue to be able to offer their services for free. To address the above needs, it is more and more important to provide highly usable and optimized search engine results pages (SERPs). Yet, existing approaches to usability evaluation are often costly or time-consuming and mostly rely on explicit feedback. They are either not efficient or not effective, while SERP interfaces are commonly optimized primarily from a company's point of view. Moreover, existing approaches to predicting search result relevance, which are mostly based on clicks, are not tailored to the evolving kinds of SERPs. For instance, they fail if queries are answered directly on a SERP and no clicks need to happen. Applying Human-Centered Design principles, we propose a solution to the above in terms of a holistic approach that intends to satisfy both searchers and developers. It provides novel means to counteract exclusively company-centric design and to make use of implicit user feedback for efficient and effective evaluation and optimization of usability and, in particular, relevance. We define personas and scenarios from which we infer unsolved problems and a set of well-defined requirements. Based on these requirements, we design and develop the Search Interaction Optimization toolkit. Using a bottom-up approach, we moreover define an eponymous, higher-level methodology. The Search Interaction Optimization toolkit comprises a total of six components. We start with INUIT [1], a novel minimal usability instrument specifically aiming at meaningful correlations with implicit user feedback in terms of client-side interactions. Hence, it serves as a basis for deriving usability scores directly from user behavior. INUIT has been designed based on reviews of established usability standards and guidelines as well as interviews with nine dedicated usability experts. Its feasibility and effectiveness have been investigated in a user study. Also, a confirmatory factor analysis shows that the instrument can reasonably well describe real-world perceptions of usability. Subsequently, we introduce WaPPU [2], a context-aware A/B testing tool based on INUIT. WaPPU implements the novel concept of Usability-based Split Testing and enables automatic usability evaluation of arbitrary SERP interfaces based on a quantitative score that is derived directly from user interactions. For this, usability models are automatically trained and applied based on machine learning techniques. In particular, the tool is not restricted to evaluating SERPs, but can be used with any web interface. Building on the above, we introduce S.O.S., the SERP Optimization Suite [3], which comprises WaPPU as well as a catalog of best practices [4].
Once it has been detected that an investigated SERP's usability is suboptimal based on scores delivered by WaPPU, corresponding optimizations are automatically proposed based on the catalog of best practices. This catalog has been compiled in a three-step process involving reviews of existing SERP interfaces and contributions by 20 dedicated usability experts. While the above focus on the general usability of SERPs, presenting the most relevant results is specifically important for search engines. Hence, our toolkit contains TellMyRelevance! (TMR) [5] — the first end-to-end pipeline for predicting search result relevance based on users' interactions beyond clicks. TMR is a fully automatic approach that collects necessary information on the client, processes it on the server side and trains corresponding relevance models based on machine learning techniques. Predictions made by these models can then be fed back into the ranking process of the search engine, which improves result quality and hence also usability. StreamMyRelevance! (SMR) [6] takes the concept of TMR one step further by providing a streaming-based version. That is, SMR collects and processes interaction data and trains relevance models in near real-time. Based on a user study and large-scale log analysis involving real-world search engines, we have evaluated the components of the Search Interaction Optimization toolkit as a whole—also to demonstrate the interplay of the different components. S.O.S., WaPPU and INUIT have been engaged in the evaluation and optimization of a real-world SERP interface. Results show that our tools are able to correctly identify even subtle differences in usability. Moreover, optimizations proposed by S.O.S. significantly improved the usability of the investigated and redesigned SERP. TMR and SMR have been evaluated in a GB-scale interaction log analysis as well, using data from real-world search engines. Our findings indicate that they are able to yield predictions that are better than those of competing state-of-the-art systems considering clicks only. Also, a comparison of SMR to existing solutions shows its superiority in terms of efficiency, robustness and scalability. The thesis concludes with a discussion of the potential and limitations of the above contributions and provides an overview of potential future work. / Im Laufe der vergangenen 25 Jahre haben sich Suchmaschinen zu einem der wichtigsten, wenn nicht gar dem wichtigsten Zugangspunkt zum World Wide Web (WWW) entwickelt. Diese Entwicklung resultiert vor allem aus der kontinuierlich steigenden Zahl an Dokumenten, welche im WWW verfügbar, jedoch sehr unstrukturiert organisiert sind. Überdies werden Suchergebnisse immer häufiger in Kategorien klassifiziert und in Form semantischer Informationen bereitgestellt, die direkt in der Suchmaschine konsumiert werden können. Dies spiegelt einen allgemeinen Trend wider. Durch die wachsende Zahl an Dokumenten und technologischen Neuerungen wandeln sich die Bedürfnisse von sowohl Nutzern als auch Suchmaschinen ständig. Nutzer wollen mit immer besseren Suchergebnissen und Interfaces versorgt werden, während Suchmaschinen-Unternehmen Werbung platzieren und Gewinn machen müssen, um ihre Dienste kostenlos anbieten zu können. Damit geht die Notwendigkeit einher, in hohem Maße benutzbare und optimierte Suchergebnisseiten – sogenannte SERPs (search engine results pages) – für Nutzer bereitzustellen.
Gängige Methoden zur Evaluierung und Optimierung von Usability sind jedoch größtenteils kostspielig oder zeitaufwändig und basieren meist auf explizitem Feedback. Sie sind somit entweder nicht effizient oder nicht effektiv, weshalb Optimierungen an Suchmaschinen-Schnittstellen häufig primär aus dem Unternehmensblickwinkel heraus durchgeführt werden. Des Weiteren sind bestehende Methoden zur Vorhersage der Relevanz von Suchergebnissen, welche größtenteils auf der Auswertung von Klicks basieren, nicht auf neuartige SERPs zugeschnitten. Zum Beispiel versagen diese, wenn Suchanfragen direkt auf der Suchergebnisseite beantwortet werden und der Nutzer nicht klicken muss. Basierend auf den Prinzipien des nutzerzentrierten Designs entwickeln wir eine Lösung in Form eines ganzheitlichen Ansatzes für die oben beschriebenen Probleme. Dieser Ansatz orientiert sich sowohl an Nutzern als auch an Entwicklern. Unsere Lösung stellt automatische Methoden bereit, um unternehmenszentriertem Design entgegenzuwirken und implizites Nutzerfeedback für die effiziente und effektive Evaluierung und Optimierung von Usability und insbesondere Ergebnisrelevanz nutzen zu können. Wir definieren Personas und Szenarien, aus denen wir ungelöste Probleme und konkrete Anforderungen ableiten. Basierend auf diesen Anforderungen entwickeln wir einen entsprechenden Werkzeugkasten, das Search Interaction Optimization Toolkit. Mittels eines Bottom-up-Ansatzes definieren wir zudem eine gleichnamige Methodik auf einem höheren Abstraktionsniveau. Das Search Interaction Optimization Toolkit besteht aus insgesamt sechs Komponenten. Zunächst präsentieren wir INUIT [1], ein neuartiges, minimales Instrument zur Bestimmung von Usability, welches speziell auf sinnvolle Korrelationen mit implizitem Nutzerfeedback in Form Client-seitiger Interaktionen abzielt. Aus diesem Grund dient es als Basis für die direkte Herleitung quantitativer Usability-Bewertungen aus dem Verhalten von Nutzern. Das Instrument wurde basierend auf Untersuchungen etablierter Usability-Standards und -Richtlinien sowie Experteninterviews entworfen. Die Machbarkeit und Effektivität der Benutzung von INUIT wurden in einer Nutzerstudie untersucht und darüber hinaus durch eine konfirmatorische Faktorenanalyse bestätigt. Im Anschluss beschreiben wir WaPPU [2], welches ein kontextsensitives, auf INUIT basierendes Tool zur Durchführung von A/B-Tests ist. Es implementiert das neuartige Konzept des Usability-based Split Testing und ermöglicht die automatische Evaluierung der Usability beliebiger SERPs basierend auf den bereits zuvor angesprochenen quantitativen Bewertungen, welche direkt aus Nutzerinteraktionen abgeleitet werden. Hierzu werden Techniken des maschinellen Lernens angewendet, um automatisch entsprechende Usability-Modelle generieren und anwenden zu können. WaPPU ist insbesondere nicht auf die Evaluierung von Suchergebnisseiten beschränkt, sondern kann auf jede beliebige Web-Schnittstelle in Form einer Webseite angewendet werden. Darauf aufbauend beschreiben wir S.O.S., die SERP Optimization Suite [3], welche das Tool WaPPU sowie einen neuartigen Katalog von „Best Practices“ [4] umfasst. Sobald eine durch WaPPU gemessene, suboptimale Usability-Bewertung festgestellt wird, werden – basierend auf dem Katalog von „Best Practices“ – automatisch entsprechende Gegenmaßnahmen und Optimierungen für die untersuchte Suchergebnisseite vorgeschlagen.
Der Katalog wurde in einem dreistufigen Prozess erarbeitet, welcher die Untersuchung bestehender Suchergebnisseiten sowie eine Anpassung und Verifikation durch 20 Usability-Experten beinhaltete. Die bisher angesprochenen Tools fokussieren auf die generelle Usability von SERPs, jedoch ist insbesondere die Darstellung der für den Nutzer relevantesten Ergebnisse eminent wichtig für eine Suchmaschine. Da Relevanz eine Untermenge von Usability ist, beinhaltet unser Werkzeugkasten daher das Tool TellMyRelevance! (TMR) [5], die erste End-to-End-Lösung zur Vorhersage von Suchergebnisrelevanz basierend auf Client-seitigen Nutzerinteraktionen. TMR ist ein vollautomatischer Ansatz, welcher die benötigten Daten auf dem Client abgreift, sie auf dem Server verarbeitet und entsprechende Relevanzmodelle bereitstellt. Die von diesen Modellen getroffenen Vorhersagen können wiederum in den Ranking-Prozess der Suchmaschine eingepflegt werden, was schlussendlich zu einer Verbesserung der Usability führt. StreamMyRelevance! (SMR) [6] erweitert das Konzept von TMR, indem es einen Streaming-basierten Ansatz bereitstellt. Hierbei geschieht die Sammlung und Verarbeitung der Daten sowie die Bereitstellung der Relevanzmodelle in Nahe-Echtzeit. Basierend auf umfangreichen Nutzerstudien mit echten Suchmaschinen haben wir den entwickelten Werkzeugkasten als Ganzes evaluiert, auch, um das Zusammenspiel der einzelnen Komponenten zu demonstrieren. S.O.S., WaPPU und INUIT wurden zur Evaluierung und Optimierung einer realen Suchergebnisseite herangezogen. Die Ergebnisse zeigen, dass unsere Tools in der Lage sind, auch kleine Abweichungen in der Usability korrekt zu identifizieren. Zudem haben die von S.O.S. vorgeschlagenen Optimierungen zu einer signifikanten Verbesserung der Usability der untersuchten und überarbeiteten Suchergebnisseite geführt. TMR und SMR wurden mit Datenmengen im zweistelligen Gigabyte-Bereich evaluiert, welche von zwei realen Hotelbuchungsportalen stammen. Beide zeigen das Potential, bessere Vorhersagen zu liefern als konkurrierende Systeme, welche lediglich Klicks auf Ergebnissen betrachten. SMR zeigt gegenüber allen anderen untersuchten Systemen zudem deutliche Vorteile bei Effizienz, Robustheit und Skalierbarkeit. Die Dissertation schließt mit einer Diskussion des Potentials und der Limitierungen der erarbeiteten Forschungsbeiträge und gibt einen Überblick über potentielle weiterführende und zukünftige Forschungsarbeiten.
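The abstracts above do not spell out TMR's pipeline, but its core idea - learning a relevance model from client-side interaction features that go beyond clicks - can be sketched in a few lines. The feature set, toy data and choice of logistic regression below are assumptions for illustration, not the thesis's implementation.

from sklearn.linear_model import LogisticRegression

# Toy interaction log: one row per (query, result) pair.
# Features: dwell time (s), scroll depth (0-1), hover count, clicked (0/1).
X = [
    [35.0, 0.9, 4, 1],
    [ 2.0, 0.1, 0, 1],  # clicked but bounced quickly
    [20.0, 0.7, 2, 0],  # answered directly on the SERP, no click needed
    [ 1.0, 0.0, 0, 0],
    [40.0, 1.0, 5, 1],
    [ 3.0, 0.2, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]  # relevance labels used for training

model = LogisticRegression().fit(X, y)

# The predicted probability can be fed back into the ranker,
# analogous to how TMR feeds model output into the ranking process.
print(model.predict_proba([[25.0, 0.8, 3, 0]])[0][1])

Note how the third row is labeled relevant despite having no click: this is exactly the case where click-only relevance prediction fails and interaction features beyond clicks help.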
179

Análise do uso de periódicos científicos na transição do meio impresso ao eletrônico em dissertações e teses: o impacto do portal de Periódicos/CAPES na produção do conhecimento / Analysis of the use of scientific journals in the transition from print to electronic media in dissertations and theses: the impact of the Periódicos/CAPES portal on knowledge production

Costa, Rubenildo Oliveira da 28 February 2007 (has links)
It discusses the influence of electronic communication (including open access) and commercial publishers' control over scientific knowledge production. It starts from the proposition that the advantage the advent of the electronic journal brings to scientific knowledge production, relative to the journal in traditional form, is particularly related to the speed and dynamization of access (time x space), not to contributing to paradigm changes within the core of a domain. Exploratory in character, with a case-study design, the work aims at analysing the use and the level of influence that electronic journals have been exerting on scientific knowledge production in the University since the 1980s, as well as presenting the level of influence of commercial publishers by studying the reproducibility of Bradford's Law (80/20), in order to contribute to the discussion and reflection on formulating and establishing indicators and control devices for managing title collections of electronic journals (consortia and portals). Through collecting and analysing the journal titles and years cited in dissertations and theses from three decades (1980s, 1990s and 2000s) within a given domain, we conclude that the hypothesis stated herein is valid. It seems that, owing to the phenomenon of electronic communication, researchers have been citing more titles, and faster; scientific progress therefore seems to advance quickly. While we identified a large influence of commercial publishers, we also noticed the growth of free access to scientific information. / Versa sobre a influência da comunicação eletrônica (também de acesso livre) e o controle das editoras comerciais na produção do conhecimento científico. A proposição é de que a vantagem que o advento do periódico em meio eletrônico proporciona para a produção do conhecimento científico, relativamente ao periódico em suporte tradicional, está particularmente relacionada à velocidade e dinamização do acesso (tempo x espaço) e não a de contribuir para mudanças paradigmáticas do núcleo de um domínio. De caráter exploratório com delineamento de estudo de caso, visa-se analisar o uso e o grau de influência que os periódicos eletrônicos vêm exercendo na produção de conhecimento científico na Universidade desde a década de 1980, como também apresentar o grau de influência das editoras comerciais, por meio do estudo da reprodutividade da lei de Bradford (80/20), a fim de contribuir para discussão / reflexão sobre formulação e estabelecimento de indicadores e dispositivos de controle para a gestão das coleções de títulos de periódicos eletrônicos disponibilizadas pelos estoques de periódicos eletrônicos (Consórcios e Portais). Por meio da coleta e análise dos títulos e anos de periódicos citados em dissertações e teses de três décadas (1980, 1990 e 2000) de um determinado domínio, conclui-se que a hipótese colocada é válida. Parece que, por conta do fenômeno da comunicação eletrônica, os pesquisadores estão citando mais títulos e mais rapidamente. Com isso, o progresso científico parece avançar velozmente. Ao passo que se identifica grande influência das editoras comerciais, percebe-se também um crescimento do movimento de acesso livre à informação científica.
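Bradford's Law in the 80/20 form tested above says, roughly, that a small core of journal titles concentrates most of the citations in a domain. A worked sketch of that check on invented citation counts (the numbers are illustrative, not the thesis's data):

# Citation counts per journal title in one domain (invented numbers).
citations = {
    "Journal A": 120, "Journal B": 95, "Journal C": 60, "Journal D": 20,
    "Journal E": 12, "Journal F": 8, "Journal G": 5, "Journal H": 4,
    "Journal I": 3, "Journal J": 2,
}

counts = sorted(citations.values(), reverse=True)
total = sum(counts)
core = max(1, round(0.2 * len(counts)))   # the top 20% of titles
share = sum(counts[:core]) / total        # their share of all citations

# The 80/20 reading of Bradford's Law holds (roughly) when this
# share approaches 0.8.
print(f"top {core} of {len(counts)} titles account for {share:.0%} of citations")

Repeating the same tally per decade, as the study does across theses from the 1980s through the 2000s, shows whether the concentration strengthens or weakens over time.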
180

Semantiska webben och sökmotorer / Semantic web and search engines

Haj-Bolouri, Amir January 2010 (has links)
Den här rapporten behandlar definitioner av begrepp som är kopplade till den semantiska webben. Syftet är att undersöka hur den semantiska webben påverkar sökmotorer på webben. Detta sker genom en undersökning av tio olika sökmotorer där nio är semantiskt sådana och den tionde är den mest använda sökmotorn idag. Studien är genomförd som både en deskriptiv och kvantitativ studie. En litteraturundersökning har också genomförts om den semantiska webben och sökmotorer. Slutsatserna av den här studien är att den semantiska webben är mångfacetterad med dess definitioner, och att resultatet kring hur konkreta sökmotorer tillämpar semantiska webbprinciper kan variera beroende på vilken sökmotor man interagerar med. Nyckelord: Semantic web, Semantiska webben, Semantik, Informatik, Web 2.0, Internet, Search engines, Sökmotorer / This report deals with the definitions and terms that relate to the semantic web. The main purpose has been to investigate how the semantic web affects search engines on the web. This has been done through an investigation consisting of ten different search engines. Nine of these search engines are considered to be semantic search engines, and the last one is the most used search engine on the web today. The study is conducted as a descriptive and quantitative study. A literature review of relevant sources about the semantic web and search engines has also been carried out. The conclusions drawn were that the semantic web is multifaceted in its definitions and that how concrete search engines implement semantic web principles can vary depending on which search engine one interacts with.
