101 |
Changing a user’s search experience by incorporating preferences of metadata / Andra en användares sökupplevelse genom att inkorporera metadatapreferenser
Ali, Miran January 2014 (has links)
Implicit feedback is usually data that comes from users’ clicks, search queries and text highlights. It exists in abundance, but it is riddled with noise and requires advanced algorithms to be put to good use. Several findings suggest that factors such as click-through data and reading time could be used to create user behaviour models in order to predict users’ information needs. This Master’s thesis aims to use click-through data and search queries together with heuristics to create a model that prioritises the metadata fields of documents in order to predict the information need of a user. Simply put, implicit feedback will be used to improve the precision of a search engine. The Master’s thesis was carried out at Findwise AB, a search engine consultancy firm. Documents from the benchmark dataset INEX were indexed into a search engine. Two different heuristics were proposed that increment the priority of different metadata fields based on the users’ search queries and clicks. It was assumed that the heuristics would be able to change the listing order of the search results. Evaluations were carried out for the two heuristics, with the unmodified search engine as the baseline for the experiment. The evaluations were based on simulating a user who issues queries and clicks on documents. The queries and documents used in the evaluation, with manually tagged relevance, came from a data set provided by INEX. It was expected that the listing order would change in a way that was favourable for the user: the top-ranking results would be documents that truly were in the interest of the user. The evaluations revealed that both the heuristics and the baseline behave erratically, and the metrics never converged to any specific mean relevance. A statistical test revealed that there is no difference in accuracy between the heuristics and the baseline.
These results mean that the proposed heuristics do not improve the precision of the search engine; several factors, such as the indexing of overly redundant metadata, could have been responsible for this outcome. / Implicit feedback is usually data that comes from users’ clicks, search queries and text highlights. This data exists in abundance, but it contains too much noise and requires advanced algorithms to be exploited. Several findings suggest that factors such as click data and reading time can be used to create behaviour models in order to predict the user’s information need. This degree project aims to use click data and search queries together with heuristics to create a model that prioritises metadata fields in documents so that the user’s information need can be predicted. In other words, implicit feedback is to be used to improve the precision of a search engine. The degree project was carried out at Findwise AB, a consultancy firm specialising in search solutions. Documents from the INEX evaluation data set were indexed in a search engine. Two different heuristics were created to change the priority of the metadata fields based on the users’ search and click data. It was assumed that the heuristics would be able to change the ordering of the search results. Evaluations were carried out for both heuristics, with the unmodified search engine as the baseline of the experiment. The evaluations consisted of simulating a user who issues queries and clicks on documents. These queries and documents, with manually tagged relevance data, came from a data set provided by INEX. The evaluations showed that the behaviour of both the heuristics and the baseline is random and erratic. Neither heuristic converges towards any specific mean relevance. A statistical test shows that there is no significant difference in measured accuracy between the heuristics and the baseline. These results mean that the heuristics do not improve the precision of the search engine. This outcome may be due to several factors, such as the indexing of redundant metadata.
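The click-driven field-prioritisation heuristic described in this abstract could look roughly like the following sketch. The field names, the fixed boost increment, and the scoring rule are illustrative assumptions, not the thesis's actual implementation:

```python
from collections import defaultdict


class FieldBoostHeuristic:
    """Maintain per-metadata-field boosts, incremented whenever a clicked
    document matches the query in that field (a hypothetical update rule)."""

    def __init__(self, fields):
        self.fields = fields
        self.boosts = defaultdict(lambda: 1.0)  # every field starts neutral

    def record_click(self, query, clicked_doc):
        # Increment the boost of each field whose value contains a query term.
        terms = query.lower().split()
        for field in self.fields:
            value = str(clicked_doc.get(field, "")).lower()
            if any(t in value for t in terms):
                self.boosts[field] += 0.1  # assumed increment size

    def score(self, query, doc):
        # Score a document as the boost-weighted count of matching fields,
        # so frequently clicked fields dominate the listing order.
        terms = query.lower().split()
        return sum(
            self.boosts[f]
            for f in self.fields
            if any(t in str(doc.get(f, "")).lower() for t in terms)
        )
```

After enough simulated clicks, re-ranking by `score` would change the listing order in favour of the fields users actually respond to, which is the behaviour the evaluation tested for.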
|
102 |
Supporting Scientific Collaboration through Workflows and Provenance
Ellqvist, Tommy January 2010 (has links)
Science is changing. Computers, fast communication, and new technologies have created new ways of conducting research. For instance, researchers from different disciplines are processing and analyzing scientific data that is increasing at an exponential rate. This kind of research requires that the scientists have access to tools that can handle huge amounts of data, enable access to vast computational resources, and support the collaboration of large teams of scientists. This thesis focuses on tools that help support scientific collaboration. Workflows and provenance are two concepts that have proven useful in supporting scientific collaboration. Workflows provide a formal specification of scientific experiments, and provenance offers a model for documenting data and process dependencies. Together, they enable the creation of tools that can support collaboration through the whole scientific life-cycle, from specification of experiments to validation of results. However, existing models for workflows and provenance are often specific to particular tasks and tools. This makes it hard to analyze the history of data that has been generated over several application areas by different tools. Moreover, workflow design is a time-consuming process and often requires extensive knowledge of the tools involved and collaboration with researchers with different expertise. This thesis addresses these problems. Our first contribution is a study of the differences between two approaches to interoperability between provenance models: direct data conversion, and mediation. We perform a case study where we integrate three different provenance models using the mediation approach, and show the advantages compared to data conversion. Our second contribution serves to support workflow design by allowing multiple users to concurrently design workflows. Current workflow tools lack the ability for users to work simultaneously on the same workflow. 
We propose a method that uses the provenance of workflow evolution to enable real-time collaborative design of workflows. Our third contribution considers supporting workflow design by reusing existing workflows. Workflow collections for reuse are available, but more efficient methods for generating summaries of search results are still needed. We explore new summarization strategies that consider the workflow structure.
|
103 |
Optimization for search engines based on external revision database
Westerdahl, Simon, Lemón Larsson, Fredrik January 2020 (has links)
The amount of data is continually growing, and the ability to efficiently search through vast amounts of data is almost always sought after. To efficiently find data in a set, many technologies and methods exist, but all of them cost resources such as CPU cycles, memory and storage. In this study a search engine (SE) is optimized using several methods and techniques. The thesis looks into how to optimize a SE that is based on an external revision database. The optimized implementation is compared to a non-optimized implementation when executing a query. An artificial neural network (ANN) trained on a dataset containing 3 years of normal usage at a company is used to prioritize within the result set before returning the result to the caller. The new indexing algorithms have improved the document space complexity by removing all duplicate documents that add no value. Machine learning (ML) has been used to analyze user behaviour to reduce the number of documents that get retrieved by a query.
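The duplicate-removal step mentioned above ("removing all duplicate documents that add no value") might be sketched as a content-hash filter during indexing; the hashing approach is an assumption for illustration, not necessarily the thesis's algorithm:

```python
import hashlib


def deduplicate(docs):
    """Drop documents whose content hash has already been seen,
    keeping the first occurrence and preserving input order."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc["content"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique
```

With exact-duplicate revisions filtered out at index time, the index holds one copy per distinct document body, which reduces the document space a query has to touch.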
|
104 |
AI adaption in digital marketing : An investigation on marketers’ expectations from AI, and the applicable knowledge on search engine marketing. / AI-anpassning inom digital marknadsföring : En undersökning av marknadsförares förväntningar på AI och tillämplig kunskap om marknadsföring via sökmotorer.
Iskef, George January 2021 (has links)
Abstract
The following research conceptualizes the applicability of AI in search engine marketing. Through an investigation of professionals’ opinions, it examines their understanding of applicable AI practices and the level of technical knowledge required for a more successful marketing strategy.
Purpose
Digitalization has made many industrial sectors technologically dependent, and these technological advancements are to some extent visible to professionals and applicable in their industry. The thesis examines AI’s applicability in search engine marketing and professionals’ understanding of its handling. It builds on research identifying marketers’ misperceptions of AI as a cause of ineffective marketing, and on research connecting technical knowledge of AI to successful marketing management. The investigation examines marketers’ expectations of AI and compares them with what AI professionals think should be expected. With the knowledge acquired through the interviews and the review of theory, this research summarizes marketers’ expectations of AI in search engine marketing, and the AI knowledge that should be expected of a marketer.
Method
The research methods are qualitative, applying practices such as online interviews with open-ended questions and template analysis to classify the collected data. The research first distinguishes between marketers’ expectations of AI and current AI capabilities, then focuses on Google AI in search engine marketing in order to investigate a specific search engine, and finally establishes an expected level of marketers’ AI knowledge in relation to search engine marketing.
Findings
While a technical understanding of AI infrastructures may increase problem-solving capabilities in search engine marketing, technical proficiency in AI is not among the primary contributors to successful marketing.
The research findings show a different result from the expected outcome on knowledge requirements: while marketers hold high expectations for the automation of search engine marketing, they are certain of the irreplaceable human contribution in creating abstract and strategic development.
Keywords
Artificial Intelligence (AI), Search engine marketing (SEM), Knowledge management / Due to the pandemic, the thesis work was presented online via Zoom meetings by all students.
|
105 |
SEO - optimalizace pro vyhledávače / SEO - Search Engine Optimization
Štefl, Jan January 2008 (has links)
Visibility of a page presumes that it appears at top positions in search-engine results for particular keywords. Search engine optimization is a collection of rules that every page should follow. This thesis describes the essentials of this technique, including a methodology that helps create optimized pages.
|
106 |
Metody optimalizace webových vyhledávačů - SEO a SEM / Methods for Optimization of Web Spotters - SEO and SEM
Bartek, Tomáš January 2007 (has links)
This work concerns the optimization of web pages for search engines so that the pages can reach top positions. The key to successful optimization is the combination of several basic rules: comparing the advantages of different approaches to navigation and menu creation, optimizing page-loading speed, optimizing texts with the help of chosen keywords, principles for choosing keywords, item placement on web pages, and comparing the design, marketing and development views of building web pages. First of all, we look at the difference between catalogues and full-text search engines, their historical development, and the current market shares of search engines. Subsequently we describe the prerequisites for optimization from the point of view of the source code and the programming languages used on web pages. The most important part of our interest is the optimization methods for web page content, as well as the methods that are considered forbidden. The final implementation is made in PHP.
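A few of the basic on-page rules of the kind discussed in this abstract can be sketched as simple automated checks. The specific rule set below is an illustrative assumption, not the thesis's methodology (which is implemented in PHP):

```python
import re


def basic_onpage_checks(html, keywords):
    """Run a handful of assumed on-page SEO checks against raw HTML
    and return a list of human-readable issues found."""
    issues = []

    # Rule 1 (assumed): the page needs a <title>, ideally containing a keyword.
    title = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    if not title:
        issues.append("missing <title>")
    elif not any(k.lower() in title.group(1).lower() for k in keywords):
        issues.append("no keyword in <title>")

    # Rule 2 (assumed): a meta description should be present.
    if not re.search(r'<meta[^>]+name=["\']description["\']', html, re.I):
        issues.append("missing meta description")

    # Rule 3 (assumed): exactly one <h1> heading per page.
    if len(re.findall(r"<h1\b", html, re.I)) != 1:
        issues.append("page should have exactly one <h1>")

    return issues
```

Running such checks over a site gives a quick inventory of pages violating the chosen rules, before any content-level keyword optimization is attempted.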
|
107 |
Basic system configuration in search engine
Watson, Veronica January 2008 (has links)
No description available.
|
108 |
THREE ESSAYS ON THE IMPACT OF FIRMS’ DIGITAL COMMUNICATION STRATEGIES ON ONLINE CONSUMER BEHAVIOR
Bhattacharya, Siddharth, 0000-0001-9542-927X January 2021 (has links)
In my dissertation, I study the strategic interplay between a firm’s online communication and its digital strategies, and their impact on consumer decision making. I identify important strategies that firms can adopt when targeting consumers on search engine platforms such as Google and Bing. For technology firms interested in providing information cues to consumers, online advertising serves as an important tool to nudge consumers’ decision making. Through the use of diverse methodologies, including empirical, analytical, and behavioral, I attempt to answer important questions in this research space. Moreover, I investigate how firm strategies are affected by factors such as heterogeneity of consumer preferences, product quality, and competition. The research spans disciplines and makes contributions to Information Systems, Operations Management and Marketing. In essay 1, I investigate the novel context of “competitive poaching”, a phenomenon where firms can generate traffic from search advertising by bidding on competitors’ keywords. In this research I examine the factors that influence the effectiveness of competitive poaching, specifically the role of different ad copies and the type of competitor (poached brand) from which a brand is “poaching.” I also examine how the presence of sponsored ads from the poached brand and its physical location affect competitive poaching. In Essay 2, I investigate a similar context, but here, instead of only competing against each other, firms are simultaneously competing and cooperating with each other while advertising on the search engine. Thus, we have a novel context where a firm and its third-party referral partner (often referred to as an “infomediary”) compete and cooperate while advertising simultaneously on the search engine.
In this context, how equilibrium payment and advertising strategies are affected by factors such as traffic quality, advertising effectiveness, leakage, and the nature of the contract between the two firms remains an open question. Using a game-theoretic model, I show that the novel balance between the competitive and collaborative nature of the interaction, which itself is affected by the choice of contract and by changes in environmental factors, alters equilibrium strategies commonly expected in the existing literature. In my third essay, I study the novel yet increasingly common phenomenon of “multiscreen viewing”, where consumers increasingly use additional devices (like smartphones or tablets) while watching TV. This provides an additional advertising channel for marketers, specifically the second screen. However, this is not without its complexities, as marketers must optimally time advertisements on the second screen conditional on multiple factors, including consumers’ engagement level on the primary screen, consumers’ engagement level on the second screen, and the psychological involvement with the content on the primary screen. Administering multiple behavioral experiments, I investigate how factors such as users’ engagement with the primary screen (e.g., TV), users’ engagement with a second screen (e.g., mobile), timing of the advertisement, and message congruence affect second-screen usage and ad recall. Theoretical and managerial contributions of each of these essays are discussed. / Business Administration/Management Information Systems
|
109 |
Large Scale Image Retrieval From Books
Zhao, Mao 01 January 2012 (has links) (PDF)
Search engines play a very important role in daily life. As multimedia products become more and more popular, people have developed search engines for images and videos. In the first part of this thesis, I propose a prototype of a book image search engine. I discuss tag representation for the book images, as well as the way the probabilistic model is applied to generate image tags. Then I propose a random walk refinement method using a tag similarity graph. The image search system is built on the Galago search engine developed in the UMass CIIR lab.
Considering the large amount of data that search engines need to process, I bring in a cloud environment for large-scale distributed computing in the second part of this thesis. I discuss two models: the MapReduce model, which is currently one of the most popular technologies in the IT industry, and the Maiter model. The asynchronous accumulative update mechanism of the Maiter model is a great fit for the random walk refinement process, which takes up 84% of the entire run time, and it accelerates the refinement process by a factor of 46.
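The random walk refinement over a tag similarity graph could be sketched as a restart-style power iteration: initial tag scores are repeatedly propagated along the similarity edges and blended back with the original scores. The restart parameter, the normalization, and the fixed iteration count are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np


def random_walk_refine(scores, similarity, alpha=0.85, iters=50):
    """Refine initial tag scores via a random walk on the tag-similarity
    graph. `scores` is a nonnegative vector of initial tag relevances;
    `similarity` is a nonnegative tag-by-tag similarity matrix."""
    # Column-normalize the similarity matrix into a transition matrix.
    col_sums = similarity.sum(axis=0, keepdims=True)
    P = similarity / np.where(col_sums == 0, 1, col_sums)

    base = scores / scores.sum()  # restart distribution from initial scores
    r = base.copy()
    for _ in range(iters):
        # Walk one step along similarity edges, then restart with prob. 1-alpha.
        r = alpha * (P @ r) + (1 - alpha) * base
    return r
```

The refined vector rewards tags that sit in well-connected neighbourhoods of the similarity graph, smoothing out noisy initial tag scores. It is exactly this iterative propagation that maps naturally onto Maiter's asynchronous accumulative updates.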
|
110 |
Grouping Search-Engine Returned Citations for Person-Name Queries
Al-Kamha, Reema 06 July 2004 (links) (PDF)
In this thesis we present a technique to group search-engine returned citations for person-name queries, such that the citations in each group belong to the same person. To group the returned citations we use a multi-faceted approach that considers evidence from three facets: (1) attributes, (2) links, and (3) page similarity. For each facet we generate a confidence matrix. Then we construct a final confidence matrix for all facets. Using a threshold, we apply a grouping algorithm on the final confidence matrix. The output is a set of groups of search-engine returned citations, such that the citations in each group relate to the same person.
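The pipeline described above could be sketched as follows: combine the three facet confidence matrices into a final matrix, threshold it, and take connected components as person groups. The facet weights, the threshold value, and the union-find grouping are illustrative assumptions, not the thesis's tuned algorithm:

```python
import numpy as np


def group_citations(attr_conf, link_conf, page_conf,
                    weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Combine per-facet confidence matrices (n x n, symmetric, values in
    [0, 1]) and return groups of citation indices believed to refer to
    the same person."""
    # Weighted combination of the attribute, link and page-similarity facets.
    final = (weights[0] * attr_conf
             + weights[1] * link_conf
             + weights[2] * page_conf)
    n = final.shape[0]
    adj = final >= threshold  # pairs confident enough to merge

    # Union-find over the thresholded matrix: connected components = groups.
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Because grouping is by connected components, two citations can end up together even without a direct high-confidence edge, as long as a chain of confident pairs links them.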
|