  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Optimal design of experiments for emerging biological and computational applications

Ferhatosmanoglu, Nilgun. January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Full text release at OhioLINK's ETD Center delayed at author's request
102

Information extraction from unstructured web text

Popescu, Ana-Maria, January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (leaves 129-139).
103

Επέκταση υπάρχουσας μηχανής αναζήτησης για δεικτοδότηση οποιωνδήποτε εγγράφων χρηστών / Extending an existing search engine to index arbitrary user documents

Φραντζής, Θρασύβουλος 08 March 2010 (has links)
The information that feeds a search engine's database comes from the World Wide Web. A current objective in search engine research is the development of software that lets users index their personal documents, so that they can search for information both in documents drawn from the World Wide Web and in their own documents, all indexed in a single database. This is the main problem we solve in the present work. With this capability, the process of searching for information across the two different sources, documents from the World Wide Web and the user's personal documents, is essentially unified.
104

Δημιουργία μηχανής αναζήτησης προσώπων στο social web / Building a people search engine for the social web

Καλόγηρος, Γεώργιος 07 April 2011 (has links)
In this work, a people search engine for the social web was implemented. Searches are performed on social networking sites such as Twitter, Myspace and Flickr, based on the person's username or full name, and extend to blogs found on the World Wide Web. We then determine on which of these sites the person being searched for has an account and list the URL of their profile. If the person owns a blog or contributes to one, we store in our database the feeds they have created, and we do the same if the person has a Twitter account.
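To make the workflow above concrete (look up a username against several sites and record whichever profile URLs respond), here is a minimal Python sketch. It is not the thesis' implementation: the site names and URL templates are placeholder assumptions, and a real system would use each site's official search facilities and would also persist the discovered feeds in a database.

```python
# Illustrative sketch only: probes a set of profile-URL templates for a given
# username and reports which ones respond, loosely mirroring the "find the
# person's accounts and record their profile URLs" step described above.
# The PROFILE_TEMPLATES values are placeholder patterns, not verified
# endpoints of any real site.
import urllib.request
import urllib.error

PROFILE_TEMPLATES = {
    "example-social-site": "https://social.example.com/{username}",
    "example-photo-site": "https://photos.example.com/people/{username}",
    "example-blog-host": "https://{username}.blogs.example.com/",
}

def find_profiles(username: str, timeout: float = 5.0) -> dict:
    """Return a mapping of site name -> profile URL for URLs that respond."""
    found = {}
    for site, template in PROFILE_TEMPLATES.items():
        url = template.format(username=username)
        # HEAD request: we only need the response status, not the page body.
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status == 200:
                    found[site] = url
        except (urllib.error.URLError, urllib.error.HTTPError):
            # Treat network errors and non-2xx responses as "no account found".
            continue
    return found

if __name__ == "__main__":
    print(find_profiles("some_username"))
```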
105

Is trust in SEM an intergenerational trait?: A study of sponsored links and generational attitudes towards them

Fredlund, Jesper, Biedron, Timmy January 2018 (has links)
Title: Is trust in SEM an intergenerational trait?
Date: 2018-05-22
Level: Bachelor Thesis in International Marketing
Authors: Jesper Fredlund 930427 & Timmy Biedron 961128
Supervisor: Henrietta Nilson
Problem formulation: How does age correlate with trust in and attitude towards SEM on Google in Sweden?
Purpose: The purpose of this study is to see whether Swedish Digital Natives are more likely to trust search engine marketing than the older generation of Digital Immigrants, and in doing so to gain a better understanding of attitudes towards search engines and search engine marketing in Sweden.
Theoretical framework: The theoretical framework of this paper consists of theories about Banner Blindness, Text Blindness, EHS Theory, Search Engine Marketing, Sponsored Links, Organic Links, and Generations.
Methodology: This is a quantitative study based on an online survey with 429 respondents: Swedish users of search engines divided into those born before 1980 and those born after.
Empirical findings: Our study found that Digital Natives are slightly more likely to favour Search Engine Marketing than Digital Immigrants are.
Conclusion: Regardless of the target of your Search Engine Marketing campaign, you should approach it cautiously, since both Digital Natives and Digital Immigrants have been shown to hold a negative bias against these campaigns relative to organic links.
Keywords: SEM, SEA, Search Engines, Search Behaviour, Organic links, Sponsored links.
106

A EVOLUÇÃO COMUNICATIVA DOS MECANISMOS DE BUSCA: DO TELÉGRAFO À WEB SEMÂNTICA / The communicative evolution of search engines: from the telegraph to the semantic web

TOTH, PEDRO HENRIQUE 03 April 2017 (has links)
At a time when the internet and the web are ever more present in human life, search engines have become an increasingly prominent part of people's daily routine, solving problems ranging from the simple task of locating a musical score to finding elaborate scientific articles. Speed, accuracy and simplicity make search engines essential tools in the life of the modern, connected human being, creating a dependency on them. The objective of this research is to trace a historical line through the main technologies and scientists involved in the technological development that resulted in search engines. To do so, the research relies on a solid bibliographic review of books and articles on technology and communication, as well as magazines and reports on the subjects involved. The research culminates in a technological outlook, based on the facts and technologies presented, of an integrated search engine that may soon become part of everyday human life.
107

[en] EFFICIENT WEB PAGE REFRESH POLICIES / [pt] POLÍTICAS EFICIENTES PARA REVISITAÇÃO DE PÁGINAS WEB

CRISTON PEREIRA DE SOUZA 15 July 2010 (has links)
[en] A search engine needs to continuously revisit web pages in order to keep its local repository up-to-date. A revisiting schedule must be defined so as to keep the repository as up-to-date as possible using the available resources. In order to avoid web server overload, the revisiting policy must respect a minimum amount of time between consecutive requests to the same server; this rule is called the politeness constraint. Due to the large number of web pages, we consider a revisiting policy efficient when the mean time to schedule a revisit is sublinear in the number of pages in the repository. Under the politeness constraint, no efficient policy with a theoretical quality guarantee was previously known. We investigate three efficient policies that respect the politeness constraint, called MERGE, RANDOM and DELAYED. We provide approximation factors for the repository's up-to-date level under the MERGE and RANDOM policies: we show that 0.77 is a lower bound for the approximation factor of the RANDOM policy, and we conjecture that 0.927 is a lower bound for the approximation factor of the MERGE policy. The policies are also evaluated through simulation experiments that try to keep a repository of 14.5 million web pages up-to-date. Additional experiments on a repository of Wikipedia articles show that the MERGE policy provides better results than a natural greedy strategy. The main conclusion of this research is that there are simple and efficient policies for the web page revisiting problem which lose little in terms of the repository's up-to-date level, even when the politeness constraint must be respected.
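The politeness constraint described above can be illustrated with a small scheduler sketch. The following Python code is a generic greedy illustration under assumed parameters (a fixed 10-second minimum delay per server), not the MERGE, RANDOM or DELAYED policies studied in the thesis. Note also that its linear scan over all pages is exactly the kind of per-decision cost that the thesis' sublinear-time requirement rules out.

```python
# Minimal sketch of a revisit scheduler that enforces a politeness constraint:
# a page is only eligible for revisiting if at least MIN_DELAY seconds have
# passed since the last request to its server. Generic greedy illustration,
# not the MERGE, RANDOM or DELAYED policies from the thesis.
import time
from urllib.parse import urlparse

MIN_DELAY = 10.0  # assumed minimum gap (seconds) between requests to one server

class PoliteScheduler:
    def __init__(self, pages):
        # pages: iterable of URLs in the repository
        self.last_visit = {url: 0.0 for url in pages}  # last revisit time per page
        self.last_request = {}                         # last request time per server

    def next_page(self, now=None):
        """Return the stalest page whose server is currently polite to contact."""
        now = time.monotonic() if now is None else now
        best = None
        for url, visited in self.last_visit.items():   # note: linear scan, not sublinear
            server = urlparse(url).netloc
            if now - self.last_request.get(server, float("-inf")) < MIN_DELAY:
                continue  # politeness window for this server has not elapsed
            if best is None or visited < self.last_visit[best]:
                best = url
        if best is not None:
            self.last_visit[best] = now
            self.last_request[urlparse(best).netloc] = now
        return best

if __name__ == "__main__":
    sched = PoliteScheduler(["http://a.example/1", "http://a.example/2",
                             "http://b.example/1"])
    print(sched.next_page(), sched.next_page())  # second pick skips server a.example
```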
108

Visibility of e-commerce websites to search engines: a comparison between text-based and graphic-based hyperlinks

Ngindana, Mongezi January 2006 (has links)
DISSERTATION Submitted in partial fulfilment of the requirements for the degree MAGISTER TECHNOLOGIAE in INFORMATION TECHNOLOGY in the FACULTY OF BUSINESS INFORMATICS at the CAPE PENINSULA UNIVERSITY OF TECHNOLOGY 2006 / Research has shown that most website developers first build a website and only later focus on its ‘searchability’ and ‘visibility’. Companies spend large amounts of money on the development of a website which sadly cannot be indexed by search engines, is rejected by directory editors and is furthermore invisible to crawlers. The primary objective of this dissertation is to compare and report on the impact of text-based versus graphic-based hyperlinks on website visibility. The method employed in the research was to develop two e-commerce websites with the same functionality, content and keywords, but utilising different navigation schemes: one website had all hyperlinks coded as text phrases, while the other embedded the hyperlinks in graphics. Both websites were submitted to the same search engines at the same time. A period of eight months was allowed to ensure that the websites drew sufficient ‘hits’ to enable a comparative analysis. Two industry-standard website ranking programs were used to monitor how the two websites featured in the search engine rankings. Graphs and text-based reports produced by the ranking programs, together with the t-test, were used to compare and analyse the results. The reviewed literature contains conflicting reports on the impact of text as opposed to graphic hyperlinks on website visibility, along with unsupported claims that text hyperlinks achieve higher rankings than graphics-based hyperlinks. Although ‘human website browsers’ find a certain amount of graphical aids conducive to easier navigation, ‘search engine crawlers’ find many of these same graphic aids impossible to index. The study found that the graphic-based website ranked higher than the text-based website, which calls for a balance to be found between these two extremes. This balance would satisfy both ‘human website browsers’ and ‘search engine crawlers’. It is posited by this author that this dissertation provides website designers with the means to achieve such a balance. KEYWORDS: search engines, hyperlinks, text, graphics, visibility, navigation, e-commerce, design.
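The indexing gap that motivates this comparison can be illustrated with a short parser sketch: an anchor wrapping plain text contributes anchor text a crawler can index, while an anchor wrapping only an image contributes nothing unless alt text is supplied. This is a generic illustration of crawler-visible anchor text, not the ranking behaviour measured in the dissertation.

```python
# Extracts (href, anchor_text) pairs from HTML. Links whose anchors contain
# only an <img> without alt text yield empty anchor text, which is the kind
# of content a crawler cannot index.
from html.parser import HTMLParser

class AnchorTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []        # (href, anchor_text) pairs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._href, self._text = attrs.get("href"), []
        elif tag == "img" and self._href is not None:
            # Inside a link, an image contributes only its alt text, if any.
            self._text.append(attrs.get("alt", ""))

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

sample = '<a href="/contact">Contact us</a> <a href="/buy"><img src="buy.gif"></a>'
parser = AnchorTextExtractor()
parser.feed(sample)
print(parser.links)  # [('/contact', 'Contact us'), ('/buy', '')]
```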
109

Search engine optimisation elements' effect on website visibility: the Western Cape real estate SMME sector

Visser, Eugene Bourbon January 2006 (has links)
Thesis submitted in fulfilment of the requirements for the degree Magister Technologiae in Information Technology in the Faculty of Informatics and Design 2006 / The primary objective of this research project was to determine whether search engine optimisation elements, as specified in the Chambers model, affect real estate website visibility. In South Africa, real estate companies predominantly function as SMMEs and are therefore as vulnerable to failure as any other SMME in the country. In order to reduce the possibility of failure, SMMEs need to re-evaluate their use of the Internet, as it could assist in their survival. The traditional company structure is no longer sufficient to ensure market reward. The reality is that users are rapidly adapting to the technology available, and the Internet is fast becoming a communication, commerce and marketing medium that is changing business globally. Real estate SMMEs are unable to adapt to e-commerce in its purest form; however, they can make effective use of e-marketing, and static websites are used for that specific purpose. A marketing strategy is imperative to the survival of a company, enabling the firm to create and maintain a competitive advantage in a cluttered marketplace. Regrettably, hosting a website on the Internet is not enough: searchers tend not to view search results beyond the second page, 30 results at the most. It therefore becomes evident that companies should ensure that their own website ranks as high as possible on the search engine result page, which in turn should sufficiently market the company. Search engine optimisation involves designing or modifying websites in order to improve search engine result page ranking. The elements specified in the Chambers model are extensively elaborated on in the literature analysis. The methodology consisted of two stages, a questionnaire and empirical experiments, and a quantitative research approach was adopted for both components. The primary objective of the questionnaire was to obtain search phrases from the public when searching for real estate online. These search phrases were then used in the experiments, which tested the visibility of predetermined websites using a pre-test/post-test control group design. The before group consisted of five websites from five different real estate companies, each of which had been hosted on the Internet for no less than three months. The Chambers model was used in the development of five new optimised websites, one for each company. The new websites were hosted on the Internet for 27 days in order to give search engines the opportunity to index them. The documented results were then compared in order to draw a conclusion. A total of 121 key search phrases were obtained. The results from the old and new websites were applied to a process which produced a combined measure known as the ‘quality factor’. The quality factor indicated either an improvement or a deterioration in visibility between each company's old and new website. In addition, this author compared the optimised website that obtained the greatest improvement in visibility with the website that obtained the greatest deterioration in visibility. As a result, the elements specified in the Chambers model were re-evaluated, and new elements that had not been specified in the original model were identified.
Based on the new findings, this author developed a new search engine optimisation model as a secondary objective in this thesis.
110

The effect webpage body keywords location has on ranking in search engines results: an empirical study

Kritzinger, Wouter Thomas January 2005 (has links)
DISSERTATION Submitted in partial (50%) fulfilment of the requirements for the degree MAGISTER TECHNOLOGIAE in BUSINESS INFORMATION SYSTEMS in the FACULTY OF BUSINESS INFORMATICS at the CAPE PENINSULA UNIVERSITY OF TECHNOLOGY 2005 / The growth of the World Wide Web has spawned a wide collection of new information sources, which has also left users with the daunting task of determining which sources are valid. Most users rely on the web because of the low cost of information retrieval. Other advantages of the web include convenience in terms of time and access, as well as the ability to easily record results. It is also claimed that the web has evolved into a powerful business tool; examples include highly popular business services such as Amazon.com and Kalahari.net. It is estimated that around 80% of users utilise search engines to locate information on the Internet. This of course places emphasis on the underlying importance of webpages being listed in search engine indices. It is in the interest of any company to pursue a strategy for ensuring a high search engine ranking for their e-Commerce website. This will result in more visits from users and possibly more sales. One of the strategies for ensuring a high search engine ranking is the placement of keywords in the body text section of a webpage. Empirical evidence that the placement of keywords in certain areas of the body text has an influence on a website's visibility to search engines could not be found. The author set out to prove or disprove that keywords in the body text of a webpage have a measurable effect on the visibility of a website to search engine crawlers. From the findings of this research it will be possible to create a guide for e-Commerce website authors on the usage, placing and density of keywords within their websites. This guide, although it will only focus on one aspect of search engine visibility, could help e-Commerce websites to attract more visitors and become more profitable.
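As a rough illustration of the variable under study (where keywords sit within the body text), the sketch below counts occurrences of a keyword in consecutive regions of a page's body and reports an overall density. The three-way split and the density formula are assumptions made for illustration only, not the methodology of this dissertation.

```python
# Splits the body text into equally sized regions (e.g. top, middle, bottom)
# and counts keyword occurrences per region, plus an overall keyword density.
# Region boundaries and the density formula are illustrative assumptions.
import re

def keyword_placement(body_text: str, keyword: str, regions: int = 3):
    """Count occurrences of `keyword` in each consecutive region of the body text."""
    words = re.findall(r"[a-z0-9']+", body_text.lower())
    keyword = keyword.lower()
    size = max(1, len(words) // regions)
    counts = []
    for i in range(regions):
        # The last region absorbs any leftover words.
        chunk = words[i * size:(i + 1) * size] if i < regions - 1 else words[i * size:]
        counts.append(sum(1 for w in chunk if w == keyword))
    density = sum(counts) / len(words) if words else 0.0
    return counts, density

if __name__ == "__main__":
    text = ("Search engines rank pages. Keywords near the top of the body may "
            "matter more than keywords buried at the bottom of the body.")
    print(keyword_placement(text, "keywords"))  # ([1, 0, 1], ~0.087)
```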
