About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Google Ads: Understanding millennials' search behavior on mobile devices

Claesson, Jennifer, Gedda, Henrik January 2018 (has links)
Purpose: The purpose of this study is to understand millennials' search behavior on mobile devices. Research Questions: How do millennials value organic and sponsored search results on mobile devices? Which Web advertising variables affect millennials' attitudes towards sponsored search ads on mobile devices? Methodology: Data were collected from 103 Swedish millennials through an experiment and a survey. Conclusion: The findings support entertainment and incentives as variables positively associated with millennials' attitudes towards mobile search ads, while irritation, informativeness and credibility were only partially supported when tested independently against attitudes. Participants expressed an overall negative attitude towards sponsored links when asked to motivate their clicks. Moreover, the results indicate that a more positive attitude towards mobile search ads corresponds to increased clicking on sponsored search results.
2

On-line marketing so zameraním na kampane prostredníctvom Google / Online marketing with focus on Google as a marketing tool

Bokaová, Katarína January 2012 (has links)
Internet advertising is becoming more and more important, and media planners' budgets are moving from TV and print to the internet. Given the young age of the Internet, and of online marketing in particular, we can assume that this trend will grow stronger. This work addresses one of the biggest players on the Internet - Google - and especially its tool for creating PPC ads. The first part covers the most important aspects of the current internet marketing sphere, and the second part is devoted to the specific topic of creating campaigns through Google AdWords and analysing their subsequent success using established metrics.
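The "established metrics" used for PPC campaign analysis typically include click-through rate, cost per click, conversion rate and cost per acquisition. A minimal sketch of that arithmetic (the figures are invented; this is generic campaign math, not the thesis's actual analysis):

```python
def campaign_metrics(impressions, clicks, cost, conversions):
    """Compute standard PPC campaign metrics from raw campaign counts."""
    return {
        "CTR": clicks / impressions,         # click-through rate
        "CPC": cost / clicks,                # average cost per click
        "conversion_rate": conversions / clicks,
        "CPA": cost / conversions,           # cost per acquisition
    }

# hypothetical campaign figures
m = campaign_metrics(impressions=10_000, clicks=250, cost=125.0, conversions=10)
print(m["CTR"])  # 0.025
print(m["CPC"])  # 0.5
```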
3

Fashioning the Future: Creating More Effectively Informed Clinicians via the Implementation of an Electronic Morning Report Search Results Form

Wallace, Rick L., Woodward, Nakia J. 14 November 2007 (has links)
No description available.
4

CONTEXT-BASED PUBLICATION SEARCH PARADIGM IN LITERATURE DIGITAL LIBRARIES

Ratprasartporn, Nattakarn January 2008 (has links)
No description available.
5

Improving the relevance of search results via search-term disambiguation and ontological filtering

Zhu, Dengya January 2007 (has links)
With the exponential growth of the Web and the inherent polysemy and synonymy of natural languages, search engines face many challenges, such as information overload, mismatched search results, missing relevant documents, poorly organized search results, and a mismatch with the human mental model of clustering engines. To address these issues, much effort, including employing different information retrieval (IR) models, information categorization/clustering, personalization, the semantic Web, ontology-based IR, and so on, has been devoted to improving the relevance of search results. The major focus of this study is to dynamically re-organize Web search results under a socially constructed hierarchical knowledge structure, to make it easier for information seekers to access and manipulate the retrieved search results, and consequently to improve the relevance of search results. / To achieve the above research goal, a special search-browser was developed and its retrieval effectiveness evaluated. The hierarchical structure of the Open Directory Project (ODP) is employed as the socially constructed knowledge structure, represented by Java's Tree component. The Yahoo! Search Web Services API is utilized to obtain search results directly from Yahoo!'s search engine databases. The Lucene text search engine calculates similarities between each returned search result and the semantic characteristics of each category in the ODP, and thus assigns the search results to the corresponding ODP categories by a majority-voting algorithm. When a user selects a category of interest, only the search results categorized under that category are presented, and the quality of the search results is consequently improved. / Experiments demonstrate that the proposed approach can improve the precision of Yahoo! search results at the 11 standard recall levels from an average of 41.7 per cent to 65.2 per cent, an improvement of 23.5 percentage points.
This conclusion is verified by comparing the improvements in P@5 and P@10 between Yahoo!'s search results and the categorized search results of the special search-browser. The improvements in P@5 and P@10 are 38.3 percentage points (85 per cent vs. 46.7 per cent) and 28 percentage points (70 per cent vs. 42 per cent) respectively. The experiment is carefully designed and controlled. To minimize the subjectivity of relevance judgments, five judges (experts) were asked to make their relevance judgments independently, and the final relevance judgment is a combination of the five judges' judgments. The judges were presented with only the search-terms, the information needs, and the 50 search results from the Yahoo! Search Web Services API, and were asked to make relevance judgments based on this information alone; no categorization information was provided. / The first contribution of this research is the use of an extracted category-document to represent the semantic characteristics of each ODP category. A category-document is composed of the topic of the category, the description of the category, and the titles and brief descriptions of the Web pages submitted under the category. Experimental results demonstrate that the category-documents can represent the semantic characteristics of the ODP categories in most cases. Furthermore, for machine learning algorithms, the extracted category-documents can be utilized as training data, which would otherwise demand much human labor to create to ensure the learning algorithm is properly trained. The second contribution is the introduction of the new concepts of relevance judgment convergent degree and relevance judgment divergent degree, which measure how well different judges agree with each other when asked to judge the relevance of a list of search results. When the relevance judgment convergent degree of a search-term is high, an IR algorithm should obtain higher precision as well.
On the other hand, when the relevance judgment convergent degree is low, or the relevance judgment divergent degree is high, it is questionable to use the data to evaluate the IR algorithm. This intuition is borne out by the experiment of this research. The last contribution is that the developed search-browser is, to the best of my knowledge, the first IR system (IRS) to utilize the ODP hierarchical structure to categorize and filter search results.
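The categorization step described in this abstract, scoring each search result against the ODP category-documents and assigning it to the best-matching category, can be sketched as below. Plain term overlap stands in for the Lucene TF-IDF scoring, and the two category-documents are toy examples, not real ODP data:

```python
from collections import Counter

def tokens(text):
    """Crude tokenizer: lowercase, whitespace split, drop very short words."""
    return [w for w in text.lower().split() if len(w) > 2]

def categorize(result_text, category_docs):
    """Assign a search result to the category whose category-document it
    overlaps most. Term overlap is a stand-in for Lucene similarity; the
    thesis additionally applies majority voting over the top categories."""
    scores = {}
    for cat, doc in category_docs.items():
        doc_terms = Counter(tokens(doc))
        scores[cat] = sum(doc_terms[t] for t in tokens(result_text))
    return max(scores, key=scores.get)

category_docs = {  # toy category-documents: topic + description + page titles
    "Computers/Internet": "internet web browser search engine online network",
    "Science/Biology": "biology cell genome species evolution organism",
}
print(categorize("a faster web search engine for online queries", category_docs))
# -> Computers/Internet
```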
6

Improving search results with machine learning : Classifying multi-source data with supervised machine learning to improve search results

Stakovska, Meri January 2018 (has links)
Sony’s Support Application team wanted an experiment conducted to determine whether Machine Learning was suitable for improving the quantity and quality of the search results of the in-application search tool, and thereby the customer’s journey. A supervised machine learning model was created to classify articles into four categories: Wi-Fi & Connectivity, Apps & Settings, System & Performance, and Battery Power & Charging. The same model was used to create a service that categorized search terms into one of the four categories. The classified articles and the classified search terms were used to complement the existing search tool. The baseline for the experiment was the result of the search tool without classification. The results of the experiment show that the number of returned articles did indeed increase, but, mainly due to the broadness of the categories, the search results were of low quality.
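A supervised text classifier of the kind described here could, for instance, be a multinomial Naive Bayes model over article text. The sketch below is a minimal pure-Python illustration with invented training snippets and two of the four categories; it is not Sony's actual model:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-label word frequencies
        self.label_counts = Counter(labels)      # label priors
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label, n in self.label_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:           # smoothed log likelihoods
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayesClassifier().fit(
    ["wifi keeps dropping", "cannot connect to network",
     "battery drains fast", "phone will not charge"],
    ["Wi-Fi & Connectivity", "Wi-Fi & Connectivity",
     "Battery Power & Charging", "Battery Power & Charging"],
)
print(clf.predict("slow charge and battery issues"))
# -> Battery Power & Charging
```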
7

Clustering the Web : Comparing Clustering Methods in Swedish / Webbklustring : En jämförelse av klustringsmetoder på svenska

Hinz, Joel January 2013 (has links)
Clustering -- automatically sorting -- web search results has been the focus of much attention but is by no means a solved problem, and there is little previous work in Swedish. This thesis studies the performance of three clustering algorithms -- k-means, agglomerative hierarchical clustering, and bisecting k-means -- on a total of 32 corpora, as well as whether clustering web search previews, called snippets, instead of full texts can achieve reasonably good results. Four internal evaluation metrics are used to assess the data. Results indicate that k-means performs worse than the other two algorithms, and that snippets may be good enough to use in an actual product, although there is ample opportunity for further research on both issues; however, results are inconclusive regarding bisecting k-means vis-à-vis agglomerative hierarchical clustering. Stop-word and stemmer usage results are not significant, and appear not to affect the clustering to any considerable degree.
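Of the three algorithms compared, plain k-means is the simplest to sketch. The toy below runs it on 2-D points with fixed initial centroids and reports the within-cluster sum of squared errors, one common internal evaluation metric; the thesis's own choice of metrics and of text-vector representation is not reproduced here:

```python
def kmeans(points, centroids, iters=20):
    """Plain k-means on 2-D points with fixed initial centroids
    (deterministic toy version; real use would vectorize text first)."""
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in centroids]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        # update step: move each centroid to its cluster mean
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    # within-cluster sum of squared errors (an internal evaluation metric)
    sse = sum(min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids)
              for p in points)
    return centroids, sse

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, sse = kmeans(points, centroids=[(0, 0), (10, 10)])
print(centroids)  # two well-separated cluster centres
```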
8

[pt] EXPLORANDO INFORMAÇÕES BASEADAS EM ONTOLOGIA ATRAVÉS DA REVELAÇÃO PROGRESSIVA DE RESPOSTAS VISUAIS PARA CONSULTAS RELACIONADAS / [en] EXPLORING ONTOLOGY-BASED INFORMATION THROUGH THE PROGRESSIVE DISCLOSURE OF VISUAL ANSWER TO RELATED QUERIES

DALAI DOS SANTOS RIBEIRO 28 April 2020 (has links)
[pt] A busca na Web se tornou o método predominante para as pessoas suprirem suas necessidades de informação. Embora seja difundido, o modelo tradicional de páginas de resultados de pesquisa só é satisfatório se o usuário souber, com bastante precisão, como elaborar sua consulta para corresponder à busca das informações desejada. Propomos um novo modelo para páginas de resultados de pesquisa, que vai além de fornecer uma lista navegável de resultados em forma de visualizações, através da geração implícita de consultas relacionadas para expandir o espaço de busca, revelando progressivamente os resultados correspondentes. / [en] Web search has become the predominant method for people to fulfill their information needs. Although widespread, the traditional model for search result pages is only satisfactory if the user knows quite precisely how to phrase their query to match their intended information. We propose a new model for search page results, which goes beyond providing a navigable list of visualization search results, by implicitly generating related queries to expand the search space and progressively disclosing the corresponding results.
9

Visualisation des résultats de recherche classifiés en contexte de recherche d’information exploratoire : une évaluation d’utilisabilité

Crédeville, Aline 10 1900 (has links)
La recherche d’information exploratoire sur le Web présente des défis cognitifs en termes de stratégies cognitives et de tactiques de recherche. Le modèle « question-réponse » des moteurs de recherche actuels est inadéquat pour faciliter les stratégies de recherche d’information exploratoire, assimilables aux stratégies cognitives de l’apprentissage. La visualisation des résultats de recherche est un dispositif qui possède des propriétés graphiques et interactives pertinentes pour le traitement de l’information et l’utilisation de la mémoire et, plus largement de la cognition humaine. Plusieurs recherches ont été menées dans ce contexte de recherche d’information exploratoire, mais aucune n’a distinctement isolé le facteur graphique et interactif de la « visualisation » au sein de son évaluation. L’objectif principal de cette thèse est de vérifier si la visualisation des résultats en contexte de recherche d’information exploratoire témoigne des avantages cognitifs et interactifs pressentis selon ses présupposés théoriques. Pour décrire et déterminer la valeur ajoutée de la visualisation des résultats de recherche dans un contexte de recherche d’information exploratoire sur le Web, cette recherche propose de mesurer son utilisabilité. En la comparant selon les mêmes critères et indicateurs à une interface homologue textuelle, nous postulons que l’interface visuelle atteindra une efficacité, efficience et satisfaction supérieure à l’interface textuelle, dans un contexte de recherche d’information exploratoire. Les mesures objectives de l’efficacité et de l’efficience reposent principalement sur l’analyse des traces de l’interaction des utilisateurs, leur nombre et leur durée. 
Les mesures subjectives attestant de la satisfaction procurée par l’usage du système dans ce contexte reposent sur la perception des utilisateurs par rapport à des critères de perception de la facilité d’utilisation et de l’utilité de l’interface testée et par rapport à des questions plus larges sur l’expérience de recherche vécue. Un questionnaire et un entretien ont été passés auprès de chacun des vingt-trois répondants. Leur session de recherche a aussi été enregistrée par un logiciel de capture vidéo d’écran. Sur les données des vingt-trois utilisateurs divisés en deux groupes, l’analyse statistique a révélé de faibles différences significatives entre les deux interfaces. Selon les mesures effectuées, l’interface textuelle s’est révélée plus efficace en termes de rappel et de pertinence ; et plus efficiente pour les durées de la recherche d’information. Sur le plan de la satisfaction, les interfaces ont été appréciées toutes deux positivement, ne permettant pas de les distinguer pour la grande majorité des métriques. Par contre, au niveau du comportement interactif, des différences notables ont montré que les utilisateurs de l’interface visuelle ont réalisé davantage d’interactions de type exploratoire, et ont procédé à une collecte sélective des résultats de recherche. L’analyse statistique et de contenu sur le critère de l’expérience vécue a permis de démontrer que la visualisation offre l’occasion à l’utilisateur de s’engager davantage dans le processus de recherche d’information en raison de l’impact positif de l’esthétique de l’interface visuelle. De plus, la fonctionnalité de classification a été perçue de manière ambivalente, divisant les candidats peu importe l’interface testée.
Enfin, l’analyse des verbatims des participants « visuels » a permis d’identifier le besoin de fonctionnalités de rétroaction de l’utilisateur afin de pouvoir communiquer le besoin d’information ou sa pondération des résultats ou des classes, grâce à des modalités interactives de manipulation directe des classes sur un espace graphique. / Conducting exploratory searches on the web presents a number of cognitive difficulties as regards search strategies and tactics. The “question-response” model used by the available search engines does not respond adequately to exploratory searches, which are akin to cognitive learning strategies. Visualising search results involves graphic and interactive properties for presenting information that are pertinent for processing and using information, as well as for remembering and, more broadly, for human cognition. Many studies have been conducted in the area of exploratory searches, but none have focussed specifically on the graphic and interactive features of visualisation in their analysis. The principal objective of this thesis is to confirm whether the visualisation of results in the context of exploratory searches offers the cognitive and interactive advantages predicted by its theoretical underpinnings. In order to describe and to determine the added value of visualising search results in the context of exploratory web searches, the study proposes to measure its usability. By comparing it to a parallel text interface, using the same criteria and indicators, the likelihood of better efficiency, efficacy, and satisfaction when using a visual interface can be established. The objective measures of efficiency and efficacy are based mainly on the analysis of user interactions, including the number of these interactions and the time they take.
Subjective measures of satisfaction in using the system in this context are based on user perception regarding ease of use and the usefulness of the interface tested, and on broader questions concerning the experience of using the search interface. These data were obtained using a questionnaire and a discussion with each participant. Statistical analysis of the data from twenty-three participants divided into two groups showed few significant differences between the two interfaces. Analysis of the metrics used showed that the textual interface is more effective in terms of recall and pertinence, and more efficient concerning the time needed to search for information. Regarding user satisfaction, both interfaces were seen positively, so that no differences emerged for the great majority of metrics used. However, as regards interactive behaviour, notable differences emerged. Participants using the visual interface had more exploratory interaction, and went on to select and collect pertinent search results. Statistical and content analysis of the experience itself showed that visualisation invites the user to become more involved in the search process, because of the positive effect of a pleasing visual interface. In addition, the classification function was perceived as ambivalent, dividing the participants no matter which interface was used. Finally, analysis of the verbatim reports of participants classed as “visual” indicated the need for a user feedback mechanism in order to communicate information needs or for weighting results or classes, using the interactive function for manipulating classes within a graphic space.
10

Search Interaction Optimization / Search Interaction Optimization : Ein nutzerzentrierter Design-Ansatz

Speicher, Maximilian 20 September 2016 (has links) (PDF)
Over the past 25 years, search engines have become one of the most important entry points to the World Wide Web, if not the most important one. This development has been primarily due to the continuously increasing amount of available documents, which are highly unstructured. Moreover, the general trend is towards classifying search results into categories and presenting them in terms of semantic information that answers users' queries without their having to leave the search engine. With the growing amount of documents and technological enhancements, the needs of users as well as search engines are continuously evolving. Users want to be presented with increasingly sophisticated results and interfaces while companies have to place advertisements and make revenue to be able to offer their services for free. To address the above needs, it is more and more important to provide highly usable and optimized search engine results pages (SERPs). Yet, existing approaches to usability evaluation are often costly or time-consuming and mostly rely on explicit feedback. They are either not efficient or not effective while SERP interfaces are commonly optimized primarily from a company's point of view. Moreover, existing approaches to predicting search result relevance, which are mostly based on clicks, are not tailored to the evolving kinds of SERPs. For instance, they fail if queries are answered directly on a SERP and no clicks need to happen. Applying Human-Centered Design principles, we propose a solution to the above in terms of a holistic approach that intends to satisfy both searchers and developers. It provides novel means to counteract exclusively company-centric design and to make use of implicit user feedback for efficient and effective evaluation and optimization of usability and, in particular, relevance. We define personas and scenarios from which we infer unsolved problems and a set of well-defined requirements.
Based on these requirements, we design and develop the Search Interaction Optimization toolkit. Using a bottom-up approach, we moreover define an eponymous, higher-level methodology. The Search Interaction Optimization toolkit comprises a total of six components. We start with INUIT [1], which is a novel minimal usability instrument specifically aiming at meaningful correlations with implicit user feedback in terms of client-side interactions. Hence, it serves as a basis for deriving usability scores directly from user behavior. INUIT has been designed based on reviews of established usability standards and guidelines as well as interviews with nine dedicated usability experts. Its feasibility and effectiveness have been investigated in a user study. Also, a confirmatory factor analysis shows that the instrument can reasonably well describe real-world perceptions of usability. Subsequently, we introduce WaPPU [2], which is a context-aware A/B testing tool based on INUIT. WaPPU implements the novel concept of Usability-based Split Testing and enables automatic usability evaluation of arbitrary SERP interfaces based on a quantitative score that is derived directly from user interactions. For this, usability models are automatically trained and applied based on machine learning techniques. In particular, the tool is not restricted to evaluating SERPs, but can be used with any web interface. Building on the above, we introduce S.O.S., the SERP Optimization Suite [3], which comprises WaPPU as well as a catalog of best practices [4]. Once it has been detected that an investigated SERP's usability is suboptimal based on scores delivered by WaPPU, corresponding optimizations are automatically proposed based on the catalog of best practices. This catalog has been compiled in a three-step process involving reviews of existing SERP interfaces and contributions by 20 dedicated usability experts. 
While the above focus on the general usability of SERPs, presenting the most relevant results is specifically important for search engines. Hence, our toolkit contains TellMyRelevance! (TMR) [5] — the first end-to-end pipeline for predicting search result relevance based on users’ interactions beyond clicks. TMR is a fully automatic approach that collects necessary information on the client, processes it on the server side and trains corresponding relevance models based on machine learning techniques. Predictions made by these models can then be fed back into the ranking process of the search engine, which improves result quality and hence also usability. StreamMyRelevance! (SMR) [6] takes the concept of TMR one step further by providing a streaming-based version. That is, SMR collects and processes interaction data and trains relevance models in near real-time. Based on a user study and large-scale log analysis involving real-world search engines, we have evaluated the components of the Search Interaction Optimization toolkit as a whole—also to demonstrate the interplay of the different components. S.O.S., WaPPU and INUIT have been engaged in the evaluation and optimization of a real-world SERP interface. Results show that our tools are able to correctly identify even subtle differences in usability. Moreover, optimizations proposed by S.O.S. significantly improved the usability of the investigated and redesigned SERP. TMR and SMR have been evaluated in a GB-scale interaction log analysis as well using data from real-world search engines. Our findings indicate that they are able to yield predictions that are better than those of competing state-of-the-art systems considering clicks only. Also, a comparison of SMR to existing solutions shows its superiority in terms of efficiency, robustness and scalability. The thesis concludes with a discussion of the potential and limitations of the above contributions and provides an overview of potential future work. 
/ Im Laufe der vergangenen 25 Jahre haben sich Suchmaschinen zu einem der wichtigsten, wenn nicht gar dem wichtigsten Zugangspunkt zum World Wide Web (WWW) entwickelt. Diese Entwicklung resultiert vor allem aus der kontinuierlich steigenden Zahl an Dokumenten, welche im WWW verfügbar, jedoch sehr unstrukturiert organisiert sind. Überdies werden Suchergebnisse immer häufiger in Kategorien klassifiziert und in Form semantischer Informationen bereitgestellt, die direkt in der Suchmaschine konsumiert werden können. Dies spiegelt einen allgemeinen Trend wider. Durch die wachsende Zahl an Dokumenten und technologischen Neuerungen wandeln sich die Bedürfnisse von sowohl Nutzern als auch Suchmaschinen ständig. Nutzer wollen mit immer besseren Suchergebnissen und Interfaces versorgt werden, während Suchmaschinen-Unternehmen Werbung platzieren und Gewinn machen müssen, um ihre Dienste kostenlos anbieten zu können. Damit geht die Notwendigkeit einher, in hohem Maße benutzbare und optimierte Suchergebnisseiten – sogenannte SERPs (search engine results pages) – für Nutzer bereitzustellen. Gängige Methoden zur Evaluierung und Optimierung von Usability sind jedoch größtenteils kostspielig oder zeitaufwändig und basieren meist auf explizitem Feedback. Sie sind somit entweder nicht effizient oder nicht effektiv, weshalb Optimierungen an Suchmaschinen-Schnittstellen häufig primär aus dem Unternehmensblickwinkel heraus durchgeführt werden. Des Weiteren sind bestehende Methoden zur Vorhersage der Relevanz von Suchergebnissen, welche größtenteils auf der Auswertung von Klicks basieren, nicht auf neuartige SERPs zugeschnitten. Zum Beispiel versagen diese, wenn Suchanfragen direkt auf der Suchergebnisseite beantwortet werden und der Nutzer nicht klicken muss. Basierend auf den Prinzipien des nutzerzentrierten Designs entwickeln wir eine Lösung in Form eines ganzheitlichen Ansatzes für die oben beschriebenen Probleme. 
Dieser Ansatz orientiert sich sowohl an Nutzern als auch an Entwicklern. Unsere Lösung stellt automatische Methoden bereit, um unternehmenszentriertem Design entgegenzuwirken und implizites Nutzerfeedback für die effiziente und effektive Evaluierung und Optimierung von Usability und insbesondere Ergebnisrelevanz nutzen zu können. Wir definieren Personas und Szenarien, aus denen wir ungelöste Probleme und konkrete Anforderungen ableiten. Basierend auf diesen Anforderungen entwickeln wir einen entsprechenden Werkzeugkasten, das Search Interaction Optimization Toolkit. Mittels eines Bottom-up-Ansatzes definieren wir zudem eine gleichnamige Methodik auf einem höheren Abstraktionsniveau. Das Search Interaction Optimization Toolkit besteht aus insgesamt sechs Komponenten. Zunächst präsentieren wir INUIT [1], ein neuartiges, minimales Instrument zur Bestimmung von Usability, welches speziell auf sinnvolle Korrelationen mit implizitem Nutzerfeedback in Form Client-seitiger Interaktionen abzielt. Aus diesem Grund dient es als Basis für die direkte Herleitung quantitativer Usability-Bewertungen aus dem Verhalten von Nutzern. Das Instrument wurde basierend auf Untersuchungen etablierter Usability-Standards und -Richtlinien sowie Experteninterviews entworfen. Die Machbarkeit und Effektivität der Benutzung von INUIT wurden in einer Nutzerstudie untersucht und darüber hinaus durch eine konfirmatorische Faktorenanalyse bestätigt. Im Anschluss beschreiben wir WaPPU [2], welches ein kontextsensitives, auf INUIT basierendes Tool zur Durchführung von A/B-Tests ist. Es implementiert das neuartige Konzept des Usability-based Split Testing und ermöglicht die automatische Evaluierung der Usability beliebiger SERPs basierend auf den bereits zuvor angesprochenen quantitativen Bewertungen, welche direkt aus Nutzerinteraktionen abgeleitet werden. Hierzu werden Techniken des maschinellen Lernens angewendet, um automatisch entsprechende Usability-Modelle generieren und anwenden zu können.
WaPPU ist insbesondere nicht auf die Evaluierung von Suchergebnisseiten beschränkt, sondern kann auf jede beliebige Web-Schnittstelle in Form einer Webseite angewendet werden. Darauf aufbauend beschreiben wir S.O.S., die SERP Optimization Suite [3], welche das Tool WaPPU sowie einen neuartigen Katalog von „Best Practices“ [4] umfasst. Sobald eine durch WaPPU gemessene, suboptimale Usability-Bewertung festgestellt wird, werden – basierend auf dem Katalog von „Best Practices“ – automatisch entsprechende Gegenmaßnahmen und Optimierungen für die untersuchte Suchergebnisseite vorgeschlagen. Der Katalog wurde in einem dreistufigen Prozess erarbeitet, welcher die Untersuchung bestehender Suchergebnisseiten sowie eine Anpassung und Verifikation durch 20 Usability-Experten beinhaltete. Die bisher angesprochenen Tools fokussieren auf die generelle Usability von SERPs, jedoch ist insbesondere die Darstellung der für den Nutzer relevantesten Ergebnisse eminent wichtig für eine Suchmaschine. Da Relevanz eine Untermenge von Usability ist, beinhaltet unser Werkzeugkasten daher das Tool TellMyRelevance! (TMR) [5], die erste End-to-End-Lösung zur Vorhersage von Suchergebnisrelevanz basierend auf Client-seitigen Nutzerinteraktionen. TMR ist ein vollautomatischer Ansatz, welcher die benötigten Daten auf dem Client abgreift, sie auf dem Server verarbeitet und entsprechende Relevanzmodelle bereitstellt. Die von diesen Modellen getroffenen Vorhersagen können wiederum in den Ranking-Prozess der Suchmaschine eingepflegt werden, was schlussendlich zu einer Verbesserung der Usability führt. StreamMyRelevance! (SMR) [6] erweitert das Konzept von TMR, indem es einen Streaming-basierten Ansatz bereitstellt. Hierbei geschieht die Sammlung und Verarbeitung der Daten sowie die Bereitstellung der Relevanzmodelle in Nahe-Echtzeit.
Basierend auf umfangreichen Nutzerstudien mit echten Suchmaschinen haben wir den entwickelten Werkzeugkasten als Ganzes evaluiert, auch, um das Zusammenspiel der einzelnen Komponenten zu demonstrieren. S.O.S., WaPPU und INUIT wurden zur Evaluierung und Optimierung einer realen Suchergebnisseite herangezogen. Die Ergebnisse zeigen, dass unsere Tools in der Lage sind, auch kleine Abweichungen in der Usability korrekt zu identifizieren. Zudem haben die von S.O.S. vorgeschlagenen Optimierungen zu einer signifikanten Verbesserung der Usability der untersuchten und überarbeiteten Suchergebnisseite geführt. TMR und SMR wurden mit Datenmengen im zweistelligen Gigabyte-Bereich evaluiert, welche von zwei realen Hotelbuchungsportalen stammen. Beide zeigen das Potential, bessere Vorhersagen zu liefern als konkurrierende Systeme, welche lediglich Klicks auf Ergebnissen betrachten. SMR zeigt gegenüber allen anderen untersuchten Systemen zudem deutliche Vorteile bei Effizienz, Robustheit und Skalierbarkeit. Die Dissertation schließt mit einer Diskussion des Potentials und der Limitierungen der erarbeiteten Forschungsbeiträge und gibt einen Überblick über potentielle weiterführende und zukünftige Forschungsarbeiten.
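The core idea behind TMR and SMR, training a relevance model on client-side interaction signals beyond clicks, can be sketched with a tiny logistic regression. The feature set (normalised dwell time, scroll depth, click) and the training data below are illustrative assumptions, not the toolkit's actual schema or pipeline:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Tiny SGD-trained logistic regression over interaction features."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted relevance probability
            g = p - yi                        # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# features per result view: [normalised dwell time, scroll depth, clicked]
X = [[0.9, 0.8, 1], [0.7, 0.9, 0], [0.1, 0.2, 0], [0.2, 0.1, 1]]
y = [1, 1, 0, 0]  # 1 = judged relevant
w, b = train_logistic(X, y)
# a result read thoroughly but never clicked can still score as relevant
print(predict(w, b, [0.8, 0.85, 0]) > 0.5)
```

Note how the second positive example has no click at all; this is exactly the case where click-only relevance prediction fails, as the abstract points out for direct answers on the SERP.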
