101 |
SEO optimalizace webových stránek / Search Engine Optimization. Komárek, Miroslav. January 2009 (has links)
The goal of this thesis is to analyze the methods and techniques used in Search Engine Optimization (SEO). The thesis combines theoretical knowledge extracted from scientific sources with the author's own experience. The SEO methods are divided into two major chapters according to the sphere of their activity: On-Page Factors, which occur on the web pages themselves, and Off-Page Factors, which affect web pages but occur outside of them. A case study at the end of the thesis applies the acquired knowledge; it is based on a real-world project and includes detailed results and economic indicators.
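The abstract only names the on-page/off-page distinction. As an illustration of the kind of on-page factors such an analysis covers (not code from the thesis), the sketch below checks a few common on-page signals of a fetched page; the URL and the length thresholds are hypothetical rules of thumb.

import requests
from bs4 import BeautifulSoup

def check_on_page_factors(url):
    """Report a few common on-page SEO signals for a single page.

    Illustrative only: the thresholds are rough rules of thumb, not the
    thesis's criteria.
    """
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.get_text(strip=True) if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = (meta.get("content") or "").strip() if meta else ""
    h1_count = len(soup.find_all("h1"))

    return {
        "title_present": bool(title),
        "title_length_ok": 10 <= len(title) <= 65,   # common rule of thumb
        "meta_description_present": bool(description),
        "single_h1": h1_count == 1,                  # one main heading per page
    }

if __name__ == "__main__":
    # Hypothetical URL, used only for illustration.
    print(check_on_page_factors("https://example.com/"))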
|
102 |
L'implicite dans la requête adressée à un moteur de recherche Web / The implicit in query sent to Web engine. Zouhri, Talal. 04 July 2013 (has links)
The object of our study is the query that a user sends to a Web search engine in the course of information seeking. We aim to better understand the stage of information seeking located between the information need and the formulation or reformulation of the query. Our thesis is built around two research hypotheses. First, we hypothesized that a query sent to a Web search engine expresses the user's information need only partially and can therefore carry implicit content. Second, we considered that this implicit content can be used by users in their query formulation and reformulation tactics. We notably analyzed the discourse of 61 students whom we interviewed about their search intent. This discourse was mainly composed of a semantic level (the terms describing the topic of the search) and a pragmatic level (a single goal, or a goal with one or more sub-goals). The terms representing the semantic level could be completely or partially formulated in the query, but those representing the pragmatic level were generally not formulated. This communication situation resembles a negotiation between the search engine and the user: the search engine tries to gather elements of the user's information need, while the user tries to obtain, from the content explicitly formulated in the query, a set of information that helps progress toward solving their problem.
|
103 |
Entertainics. Garza, Jesus Mario Torres. 01 January 2003 (has links)
Entertainics is a web-based software application that gathers information about DVD players from several websites on the Internet. The purpose of this software is to help users search for DVD players faster and more easily, without having to navigate every website that carries this product.
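The abstract does not describe Entertainics' implementation. As a rough illustration of gathering product data from several websites, the following sketch aggregates DVD-player listings from a set of hypothetical catalogue pages; the URLs and CSS selectors are invented for the example.

import requests
from bs4 import BeautifulSoup

# Hypothetical source pages and CSS selectors; the real sites and markup used
# by Entertainics are not described in the abstract.
SOURCES = {
    "store-a": ("https://store-a.example/dvd-players", "div.product"),
    "store-b": ("https://store-b.example/catalog/dvd", "li.item"),
}

def scrape_source(name, url, selector):
    """Return (store, model, price) tuples found on one catalogue page."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    records = []
    for node in soup.select(selector):
        model = node.select_one(".model")
        price = node.select_one(".price")
        if model and price:
            records.append((name, model.get_text(strip=True), price.get_text(strip=True)))
    return records

def gather_all():
    """Aggregate DVD-player listings from every configured source."""
    results = []
    for name, (url, selector) in SOURCES.items():
        try:
            results.extend(scrape_source(name, url, selector))
        except requests.RequestException:
            pass  # skip sources that are unreachable
    return results

if __name__ == "__main__":
    for store, model, price in gather_all():
        print(store, model, price)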
|
104 |
網頁地理關聯性之分析與研究 / The Analysis of Geographic Relations of Internet Information. 黃建達 (Huang, Jian Da). Unknown Date (has links)
Geographic web search has attracted increasing attention in recent years. Traditional web search engines, such as Google and Yahoo, cannot account for the geographic relevance between user queries and web documents, and therefore cannot retrieve geographically related information for a user query. In many cases, however, taking this geographic relevance into account could improve the accuracy of such searches.
In this thesis we propose a mechanism that uses the Bounding Rectangle Model (BR Model) to retrieve web documents with high geographic relevance to a user query. Users provide only a conventional textual query (keywords), and the search engine returns geographically relevant results. The method consists of three steps. First, we build a gazetteer and use it to identify the place names and spatial data appearing in the user query and in the web documents. Next, we use the spatial data to build sets of spatial index terms that represent the geographic scope of the query and of each document. Finally, we compute the degree of geographic similarity between the query's and each document's spatial index terms to identify the documents most geographically relevant to the query.
The contribution of this thesis is a complete information-retrieval model framework for analyzing the geographic relevance between user queries and web documents: by entering a plain textual query, the user obtains geographically matching documents. We implemented a prototype search engine using this approach, and the experimental results show that it successfully retrieves geographically relevant documents and provides more accurate search results.
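The abstract does not give the BR Model's scoring function. As an illustration only, the sketch below computes a simple overlap-over-union similarity between the bounding rectangle associated with a query and those of candidate documents; the gazetteer entries and coordinates are hypothetical.

from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned bounding rectangle in (longitude, latitude) coordinates."""
    min_lon: float
    min_lat: float
    max_lon: float
    max_lat: float

    def area(self):
        return max(0.0, self.max_lon - self.min_lon) * max(0.0, self.max_lat - self.min_lat)

def intersection_area(a, b):
    """Area of overlap between two bounding rectangles (0 if they are disjoint)."""
    w = min(a.max_lon, b.max_lon) - max(a.min_lon, b.min_lon)
    h = min(a.max_lat, b.max_lat) - max(a.min_lat, b.min_lat)
    return max(0.0, w) * max(0.0, h)

def geo_similarity(query, doc):
    """Overlap over union: 1.0 means identical extent, 0.0 means disjoint."""
    inter = intersection_area(query, doc)
    union = query.area() + doc.area() - inter
    return inter / union if union > 0 else 0.0

# Hypothetical gazetteer mapping place names to bounding rectangles.
gazetteer = {
    "Taipei": Rect(121.45, 24.96, 121.67, 25.21),
    "Kaohsiung": Rect(120.17, 22.47, 120.46, 22.78),
}

query_rect = gazetteer["Taipei"]                        # scope extracted from the query
doc_rects = {"doc1": gazetteer["Taipei"],               # scopes extracted from documents
             "doc2": gazetteer["Kaohsiung"]}
ranking = sorted(doc_rects, key=lambda d: geo_similarity(query_rect, doc_rects[d]), reverse=True)
print(ranking)  # documents ordered by geographic similarity to the query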
|
105 |
K problematice kolokability u adjektivních vstupů ve Velkém německo-českém akademickém slovníku (VNČAS) / On Collocation Problems in Adjective Entries. Problematic Cases in the Large German-Czech Academic Dictionary. Oliva, Jakub. January 2012 (has links)
The aim of this work was to point out the problems of the collocability of German adjectives in dictionaries and, on the basis of the analysis carried out, to suggest possible solutions that could be used in the entries. The primary information sources were the German dictionary Duden and the German-Czech dictionary Siebenschein; the secondary ones were the internet corpus DeReKo and the web search engine Google. Dictionary collocations should not be chosen by the criterion of quantity but by the criterion of usefulness: they should exemplify the differences between the two languages and serve as a point of reference the dictionary user can rely on.
|
106 |
Algoritmo rastreador web especialista nuclear / Nuclear expert web crawler algorithm. Reis, Thiago. 12 November 2013 (has links)
Over the last years the Web has grown exponentially, becoming the largest information repository ever created and a new, valuable source of potentially useful information for many fields, including the nuclear field. However, because of the Web's characteristics and, above all, its huge volume of data, finding and retrieving relevant and useful information are non-trivial tasks. This challenge is addressed by web search and retrieval algorithms called web crawlers. This work presents the research and development of a crawler algorithm able to search the Web and retrieve pages with nuclear-related textual content in an autonomous and massive fashion. The algorithm was designed under the expert-system model: it has a knowledge base containing nuclear topics and the keywords that define them, and an inference engine consisting of a multi-layer perceptron artificial neural network that, during the search, uses the knowledge base to estimate the relevance of web pages to a given nuclear topic. The algorithm is thus able to autonomously crawl the Web by following the hyperlinks that interconnect pages and to retrieve those most relevant to the selected nuclear topic, emulating the ability of a nuclear expert to browse the Web and evaluate nuclear information. Preliminary experimental results show a retrieval precision of 80% for the general nuclear domain topic and 72% for the nuclear power topic, indicating that the proposed algorithm is effective and efficient at searching the Web and retrieving information relevant to the nuclear domain.
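The thesis pairs a knowledge base of nuclear topics with a multi-layer perceptron relevance estimator; that network is not reproduced here. As an illustration of the overall crawling loop, the sketch below uses a simple keyword-overlap score as a stand-in for the neural estimator; the topic keywords and seed URL are hypothetical.

import collections
import re
import urllib.parse

import requests
from bs4 import BeautifulSoup

# Hypothetical knowledge base: topic -> defining keywords. The thesis's actual
# topic list and its neural relevance estimator are not reproduced here.
KNOWLEDGE_BASE = {
    "nuclear power": {"reactor", "uranium", "fission", "fuel", "plant"},
}

def relevance(text, keywords):
    """Fraction of topic keywords found in the page text (a simple stand-in
    for the multi-layer perceptron estimator described in the abstract)."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(words & keywords) / len(keywords)

def crawl(seed_urls, topic, max_pages=50, threshold=0.4):
    """Breadth-first crawl that keeps pages whose relevance exceeds a threshold
    and follows hyperlinks only from pages judged relevant."""
    keywords = KNOWLEDGE_BASE[topic]
    queue = collections.deque(seed_urls)
    seen = set(seed_urls)
    relevant, fetched = [], 0
    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        fetched += 1
        soup = BeautifulSoup(html, "html.parser")
        score = relevance(soup.get_text(" "), keywords)
        if score >= threshold:
            relevant.append((url, score))
            for a in soup.find_all("a", href=True):
                link = urllib.parse.urljoin(url, a["href"])
                if link.startswith("http") and link not in seen:
                    seen.add(link)
                    queue.append(link)
    return relevant

# Usage with a hypothetical seed page:
# crawl(["https://example.org/nuclear"], "nuclear power")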
|
107 |
The use of browser based resources for literature searches in the postgraduate cohort of the Faculty of Humanities, Development and Social Sciences (HDSS) at the Howard College Campus of the University of KwaZulu-Natal. Woodcock-Reynolds, Hilary Julian. January 2011 (has links)
The research reflected here examined in depth how one cohort of learners viewed and engaged in literature searches using web-browser-based resources. Action research was employed using a mixed-methods approach. The research started with a survey, followed by interviews and a screencast examining practice based on a series of search-related exercises. These were analysed and used as data to establish what deficits existed in the target group's use of the web to search for literature. Based on the analysis of these instruments, the problem was redefined and a workshop intended to help remediate the deficiencies uncovered was run. On this basis it is recommended that a credit-bearing course teaching digital research literacy, with information literacy as a component, be made available. / Thesis (M.A.)-University of KwaZulu-Natal, Durban, 2011.
|
109 |
Visualização em nuvens de texto como apoio à busca exploratória na web / Supporting web search with visualization in text clouds. Marcia Severo Lunardi. 27 March 2008 (has links)
This dissertation presents the results of a research project that evaluates the advantages of using text clouds to present the results of a web search system. A text cloud is a visualization technique for texts and textual data in general; its main purpose is to provide, on a single screen, an automatic summary of a large body of text, and it is generally applied to managing information overload. In a web search, results are listed across many pages. With a text cloud integrated into a search system, the user can see a synthesis of the content of the results listed across several pages without having to page through them and visit each site individually. In this context the text cloud works as an auxiliary tool that helps the user manage the large amount of information returned by a query: the results can be seen in context, and the words that make up the cloud can be used as additional keywords to refine the initial query. While continual improvements in search technology have made it possible to quickly find relevant information on the web, few search engines do anything to organize or summarize the contents of the responses beyond ranking the items in a list; in exploratory searches, users may be forced to scroll through many pages to identify the information they seek and are generally given no way to visualize the totality of the results returned.
The research was carried out in two phases. The first consisted of developing an application that generates text clouds from the standard result list provided by the Yahoo search engine: the cloud is built from the text of the first sites returned according to the engine's relevance algorithms, giving the user a visual overview of the main results at once. From this overview the user can pick keywords leading to potentially relevant topics that would otherwise be buried deep in the result list, or realize that the initial query term was not the best choice. The second phase was the evaluation of this application, focused mainly on exploratory searches, in which the user's goals are not well defined or knowledge about the topic searched is vague.
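The dissertation's application was integrated with the Yahoo search engine; that integration is not reproduced here. As a minimal illustration of turning result text into a cloud, the sketch below computes term weights from a list of result snippets; the snippets and stop-word list are hypothetical.

import collections
import re

# Hypothetical snippets standing in for the text of the top results returned
# by the search engine.
snippets = [
    "Text clouds summarize large bodies of text on a single screen.",
    "A tag cloud shows the most frequent words in a larger font.",
    "Exploratory search benefits from an overview of the whole result set.",
]

STOP_WORDS = {"a", "the", "of", "in", "on", "from", "and", "to"}  # illustrative list

def build_cloud(texts, top_n=10):
    """Return (word, weight) pairs for the top_n words, weight = relative frequency."""
    counts = collections.Counter(
        word
        for text in texts
        for word in re.findall(r"[a-z]+", text.lower())
        if word not in STOP_WORDS
    )
    total = sum(counts.values()) or 1
    return [(word, count / total) for word, count in counts.most_common(top_n)]

for word, weight in build_cloud(snippets):
    # A real interface would map the weight to a font size; here we just print it.
    print(f"{word:<12} {weight:.2f}")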
|