  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Sökmotoroptimering : Metoder för att förbättra sin placering i Googles sökresultat / Search engine optimization: methods for improving one's placement in Google's search results

Allard, Sebastian, Nilsson, Björn January 2010 (has links)
This paper is a literature study on search engine optimization (SEO) focusing on the leader of the search engine market: Google. An introductory background describes Google and its methods for crawling the Internet and indexing web pages, along with a brief review of the famous PageRank algorithm. The purpose of this paper is to describe the most important methods for improving rankings on Google's result lists. These methods can be categorized as on-page methods, tied to the website being optimized, or off-page methods external to the website, such as link development. Furthermore, the most common unethical methods, known as "black hat", are described, which is the secondary purpose of the text. The discussion that follows concerns the practical implications of SEO and personal reflections on the matter. Finally, there is a quick look at the expanding market of handheld devices connected to the Internet, with mobile search as an area for future research.
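The PageRank algorithm reviewed in this thesis can be sketched as a simple power iteration. This is a minimal illustration only: the three-page graph and the damping factor d=0.85 below are invented for the example, and dangling nodes are not handled.

```python
# Minimal PageRank power-iteration sketch (illustrative; graph and d=0.85
# are assumptions for the example, not taken from the thesis).

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a (1-d)/n base share, plus d times the rank
        # flowing in from pages that link to it.
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks) if outlinks else 0.0
            for target in outlinks:
                new_rank[target] += d * share
        rank = new_rank
    return rank

# Three-page example: A links to B and C, B links to C, C links back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

C ends up ranked highest because it receives links from both A and B, while B receives only half of A's rank.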
22

A Web-based Question Answering System

Zhang, Dell, Lee, Wee Sun 01 1900 (has links)
The Web is apparently an ideal source of answers to a large variety of questions, due to the tremendous amount of information available online. This paper describes a Web-based question answering system, LAMP, which is publicly accessible. A particular characteristic of this system is that it only takes advantage of the snippets in the search results returned by a search engine like Google. We think such a “snippet-tolerant” property is important for an online question answering system to be practical, because it is time-consuming to download and analyze the original web documents. The performance of LAMP is comparable to the best state-of-the-art question answering systems. / Singapore-MIT Alliance (SMA)
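The snippet-only idea can be illustrated with a toy example: rank candidate answer terms by how often they co-occur with the query terms across result snippets. The snippets, the stopword list, and the frequency-based scoring below are invented for illustration; LAMP's actual answer extraction and ranking are more sophisticated.

```python
# Toy snippet-based candidate ranking (assumptions: invented snippets,
# a tiny stopword list, and simple co-occurrence counting).

import re
from collections import Counter

STOPWORDS = {"the", "is", "on", "in", "a", "an", "of", "lies"}

def rank_candidates(query, snippets):
    """Count non-query, non-stopword terms in snippets that match the query."""
    query_terms = set(query.lower().split())
    counts = Counter()
    for snippet in snippets:
        words = re.findall(r"[a-z]+", snippet.lower())
        if query_terms & set(words):        # snippet mentions the query at all
            for w in words:
                if w not in query_terms and w not in STOPWORDS:
                    counts[w] += 1
    return counts.most_common()

snippets = [
    "Mount Everest is the highest mountain on Earth.",
    "The highest mountain, Mount Everest, lies in the Himalayas.",
    "Everest: the highest peak on the planet.",
]
ranking = rank_candidates("highest mountain", snippets)
```

The term recurring across the most snippets ("everest") surfaces at the top, without ever fetching the underlying documents.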
23

Model for Auditing Search Engine Optimization for E-business

Schooner, Patrick January 2010 (has links)
E-commerce combines web technology with business economics. Over the last 10 years, the online visibility of such enterprises has come to rely heavily on the relationship between their own sales platform and search engines, which deliver traffic consisting of prospective customers seeking related products or services. A 2008 analysis of Internet behaviour showed that over 90% of Swedish Internet users use search engines at least once a week, making visibility in search engines a crucial aspect of business marketing. Several techniques within the field of online marketing aim to improve the relationship between e-commerce platforms and search engines; one of them is Search Engine Optimization (SEO). As a subset of online marketing, SEO consists mainly of three subareas: Organic Search Engine Optimization (Organic SEO), Search Engine Marketing (SEM) and Social Media Optimization (SMO). How search engines crawl and index web content is hidden behind the business secrets of the individual search engine operators, leaving SEO auditors and operators to determine optimal settings by systematic trial and error. The first part of this thesis surveys SEO theory from online sources, acclaimed literature and articles to identify the settings SEO auditors and operators may use to improve the visibility and accessibility of live websites to search engines. The second part forms a theory-driven work model (called the "PS Model") for working systematically with SEO: a structure for implementations and ways to measure the improvements. The third part of the thesis evaluates the PS Model in a case study in which the model is implemented.
The case study uses a website (referred to in this thesis as "BMG") owned by a Swedish company active in biotechnological research and development (referred to as "BSG"). At the start of January 2010 the site needed SEO improvements: its relationship with the search engine Google had stagnated, leaving several vital documents out of Google's index, and the relevancy between performed search queries and site-wide keywords had declined. The focus of this thesis is to bring forth a work model covering the essential parts of SEO (Organic SEO, SEM and SMO) and to implement it on the BMG platform to improve the website's visibility and accessibility to search engines (mainly Google), thus resolving the stagnation identified by the site owners in January 2010 and consequently validating the PS Model. By May 2010 the PS Model was shown to have improved site-wide indexing at Google, and search queries containing BMG's main keywords returned more relevant results (higher placement on search result pages).
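An automated on-page audit of the kind such a model might include can be sketched with a small HTML check. The specific checks below (title present, meta description present, images with alt text) are generic examples invented for illustration, not the actual criteria of the PS Model.

```python
# Sketch of a generic on-page SEO audit (assumed checks, not the thesis's
# PS Model): extract the title, the meta description, and count images
# that lack alt text.

from html.parser import HTMLParser

class SeoAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = None
        self._in_title = False
        self.meta_description = None
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content")
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

# Invented example page: one image is missing its alt attribute.
page = """<html><head><title>Biotech Research Services</title>
<meta name="description" content="Contract research in biotechnology.">
</head><body><img src="lab.png"><img src="team.png" alt="Our team"></body></html>"""

audit = SeoAudit()
audit.feed(page)
```

Each finding maps to a measurable item an auditor could track between iterations of the model.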
25

Decentralized Web Search

Haque, Md Rakibul 08 June 2012 (has links)
Centrally controlled search engines will not be sufficient or reliable for indexing and searching the rapidly growing World Wide Web in the near future. A better solution is to enable the Web to index itself in a decentralized manner. Existing distributed approaches to ranking search results do not provide flexible searching, complete results, or highly accurate ranking. This thesis presents a decentralized Web search mechanism, named DEWS, which enables existing web servers to collaborate with each other to form a distributed index of the Web. DEWS can rank search results based on query keyword relevance and the relative importance of websites in a distributed manner, maintaining a hyperlink overlay on top of a structured P2P overlay. It also supports approximate matching of query keywords using phonetic codes and n-grams, along with list decoding of a linear covering code. DEWS supports incremental retrieval of search results in a decentralized manner, which reduces the network bandwidth required for query resolution. It uses an efficient routing mechanism that extends the Plexus routing protocol with a message aggregation technique. DEWS maintains replicas of indexes, which reduces routing hops and makes DEWS robust to web server failures. The standard LETOR 3.0 dataset was used to validate the DEWS protocol. Simulation results show that the ranking accuracy of DEWS is close to the centralized case, while the network overhead for collaborative search and indexing is logarithmic in the network size. The results also show that DEWS is resilient to changes in the available pool of indexing web servers and works efficiently even under heavy query load.
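The n-gram side of the approximate keyword matching can be sketched with character trigrams and Jaccard similarity. This is an assumed simplification: DEWS also uses phonetic codes and list decoding of a linear covering code, which this toy example omits, and the keyword list is invented.

```python
# Trigram-based approximate keyword matching sketch (assumption: Jaccard
# over character trigrams; DEWS's full scheme also uses phonetic codes
# and covering-code list decoding).

def ngrams(word, n=3):
    padded = f"##{word.lower()}#"   # pad so prefixes/suffixes form n-grams too
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def similarity(a, b):
    """Jaccard similarity between the trigram sets of two keywords."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb)

# A misspelled query keyword still matches the intended indexed term best.
keywords = ["decentralized", "search", "engine"]
best = max(keywords, key=lambda k: similarity("serch", k))
```

The misspelling "serch" shares four trigrams with "search" and none with the other keywords, so the fuzzy match recovers the intended term.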
26

Designing A Better Internet Search Engine Based On Information Foraging Theory

Lee, Szeyin 01 January 2014 (has links)
The first part of the thesis focuses on Information Foraging Theory, which was developed by Peter Pirolli, a cognitive scientist from the Intelligent Systems Lab at Palo Alto Research Center, to understand how humans search in an information environment (Pirolli 1995). The theory builds upon optimal foraging theory in behavioral ecology, which assumes that people adapt and optimize their information-seeking behavior to maximize the success of accomplishing their task goals by selectively choosing paths based on the expected utility of the information cues. This expected utility is called Information Scent in Information Foraging Theory. The second part designs and builds a new way to visualize search engine results graphically, incorporating the concept of information scent to make the search experience more efficient for users. The end result of the project is an improved visualization of search results, obtained by using Google's Application Programming Interface (API), latent semantic analysis, and data-visualization methods to present a semantics-based visualization of the search results. The proposed design aims to increase information scent for relevant results and shorten the foraging path to the search goal by presenting users with fewer but more valuable proximal cues, thus making search a more human-centered experience.
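A toy version of an information-scent score can be sketched as the similarity between the user's goal terms and each link's proximal cues. The cue texts and the plain bag-of-words cosine below are assumptions for illustration; the project itself uses Google's API and latent semantic analysis rather than raw term overlap.

```python
# Toy information-scent scoring sketch (assumptions: invented goal and
# cue texts, bag-of-words cosine similarity instead of LSA).

import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

goal = "cheap flights to tokyo".split()
cues = {
    "Budget airline tickets to Tokyo": "budget airline tickets to tokyo".split(),
    "History of aviation": "history of aviation".split(),
}
# The link whose cues smell most like the goal gets the strongest scent.
scented = max(cues, key=lambda link: cosine(goal, cues[link]))
```

A forager following the strongest scent would choose the first link, which is exactly the selective path choice the theory describes.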
27

Mining Clickthrough Data To Improve Search Engine Results

Veilumuthu, Ashok 05 1900 (has links) (PDF)
In this thesis, we aim at improving search result quality by utilizing the search intelligence (history of searches) available in the form of click-through data. We address two key issues, namely 1) relevance feedback extraction and fusion, and 2) deciphering search query intentions. Relevance Feedback Extraction and Fusion: Existing search engines depend heavily on the web linkage structure, in the form of hyperlinks, to determine the relevance and importance of documents. But these are collective judgments given by page authors and hence prone to collaborative spamming. To overcome spamming attempts and language semantic issues, it is also important to incorporate user feedback on the documents' relevance. Since users can hardly be motivated to give explicit/direct feedback on search quality, it becomes necessary to consider implicit feedback that can be collected from search engine logs. Though a number of implicit feedback measures have been proposed in the literature, we have not been able to identify studies that aggregate those feedbacks in a meaningful way to get a final ranking of documents. In this thesis, we first evaluate two implicit feedback measures, namely 1) click sequence and 2) time spent on the document, for the uniqueness of their content. We develop a mathematical programming model to collate the feedback collected from different sessions into a single ranking of documents. We use Kendall's τ rank correlation to determine the uniqueness of the information content present in the individual feedbacks. The experimental evaluation on the top 30 selected queries from actual search log data confirms that these two measures are not in perfect agreement, and hence incremental information can potentially be derived from them. Next, we study the feedback fusion problem, in which the user feedback from various sessions needs to be combined meaningfully.
Preference aggregation is a classical problem in economics, and we study a variation of it in which the rankers, i.e., the feedbacks, possess different expertise. We extend the generalized Mallows model to model the feedback rankings given in user sessions. We propose single-stage and two-stage aggregation frameworks to combine different feedbacks into one final ranking, taking their respective expertise into consideration. We show that the complexity of the parameter estimation problem is exponential in the number of documents and queries. We develop two scalable heuristics, namely 1) a greedy algorithm and 2) a weight-based heuristic, that can closely approximate the solution. We also establish the goodness of fit of the model by testing it on actual log data through a log-likelihood ratio test. As independent evaluation of documents is not available, we conduct experiments on appropriately devised synthetic datasets to examine the various merits of the heuristics. The experimental results confirm the possibility of expertise-oriented aggregation of feedbacks by producing orderings better than both the best ranker and the equal-weight aggregator. Motivated by this result, we extend the aggregation framework to handle infinite rankings for meta-search applications. The aggregation results on synthetic datasets show the extension to be fruitful and scalable. Deciphering Search Query Intentions: The search engine often retrieves a huge list of documents based on their relevance scores for a given query. Such a presentation strategy may work if the submitted query is very specific, homogeneous and unambiguous. But it often happens that the queries posed to the search engine are too short to be specific and hence too ambiguous to identify the exact information need (e.g., "jaguar"). These ambiguous and heterogeneous queries invite results from diverse topics.
In such cases, users may have to sift through the entire list to find the information they need, which can be a difficult task. The task can be simplified by organizing the search results under meaningful subtopics, helping users move directly to their topic of interest and ignore the rest. We develop a method to determine the various possible intentions of a given short, generic and ambiguous query using information from the click-through data. We propose a two-stage clustering framework to co-cluster the queries and documents into intentions that can readily be presented whenever demanded. For this problem, we adapt spectral bipartite partitioning, extending it to automatically determine the number of clusters hidden in the log data. The algorithm has been tested on selected ambiguous queries, and the results demonstrate its ability to distinguish among user intentions.
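The Kendall's τ rank correlation used above to compare the click-sequence and time-spent feedback can be sketched as a simple concordant/discordant pair count. The two four-document rankings below are invented for illustration.

```python
# Kendall's tau sketch (no ties assumed; the document rankings are invented).

from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """rank_a, rank_b: dicts mapping each document to its rank position."""
    concordant = discordant = 0
    for x, y in combinations(list(rank_a), 2):
        # Pairs ordered the same way in both rankings are concordant.
        agree = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
        if agree > 0:
            concordant += 1
        elif agree < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Two feedback rankings over four documents; they disagree on one pair.
clicks = {"d1": 1, "d2": 2, "d3": 3, "d4": 4}
time_spent = {"d1": 1, "d2": 3, "d3": 2, "d4": 4}
tau = kendall_tau(clicks, time_spent)
```

A τ well below 1 signals that the two measures are not in perfect agreement, which is exactly the condition under which fusing them can add information.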
29

Plánovač spojení ve městě / Urban transport planner

Pokorný, Tomáš January 2017 (has links)
Travelling in the city is a part of everyday life for many people. It is sometimes difficult to choose the right combination of walking and public transport, especially in unfamiliar parts of the city. We processed publicly available data and built a search engine for multimodal routes. The search engine is designed to personalise results according to user needs and can be used as a web application or as a shared library.
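The core of such a planner can be sketched as a shortest-path search over a graph whose edges are labelled by mode, with a per-user weight that penalises walking. The graph, travel times, and preference weight below are invented for illustration; the thesis builds on real public transport data.

```python
# Multimodal shortest-path sketch with Dijkstra's algorithm (assumptions:
# invented stops and travel times; personalisation reduced to a single
# walking-penalty weight).

import heapq

def multimodal_dijkstra(edges, start, goal, walk_penalty=1.0):
    """edges: (from, to, minutes, mode) tuples; returns cheapest cost."""
    graph = {}
    for u, v, minutes, mode in edges:
        cost = minutes * (walk_penalty if mode == "walk" else 1.0)
        graph.setdefault(u, []).append((v, cost))
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return float("inf")

edges = [
    ("home", "stop_a", 5, "walk"),
    ("stop_a", "stop_b", 10, "transit"),
    ("stop_b", "work", 3, "walk"),
    ("home", "work", 25, "walk"),
]
fastest = multimodal_dijkstra(edges, "home", "work")
```

With the default weight the walk-plus-transit route wins (5 + 10 + 3 = 18 minutes versus 25 walking); raising the walking penalty models a user who prefers transit, which is the kind of personalisation the abstract describes.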
30

Internetový marketing se zaměřením na SEO a SEM / Internet marketing with focus on SEO and SEM

Hanušová, Kateřina January 2008 (has links)
The thesis defines and describes the tools of Internet marketing. Among these tools, it describes search engine marketing and website search engine optimization in detail. This information is presented in the context of a real e-shop, focusing on SEO and SEM tools in real use.
