41

Information extraction from unstructured web text /

Popescu, Ana-Maria, January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (leaves 129-139).
42

Google search

Unruh, Miriam, McLean, Cheryl, Tittenberger, Peter, Schor, Dario 30 May 2006 (has links)
After completing this tutorial you will be able to access "Google", conduct a simple search, and interpret the search results.
43

Επέκταση υπάρχουσας μηχανής αναζήτησης για δεικτοδότηση οποιωνδήποτε εγγράφων χρηστών / Extending an existing search engine to index arbitrary user documents

Φραντζής, Θρασύβουλος 08 March 2010 (has links)
The information feeding a search engine's database comes from the World Wide Web. A current goal in search engine research is to develop software that allows users to index their personal documents, so that they can search for information both in documents drawn from the Web and in their own documents, all indexed in a single database. This is the main problem we solve in the present work. This capability effectively unifies the search process across the two different information sources: the documents of the World Wide Web and the user's personal documents.
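The unified web-plus-personal indexing idea described in this abstract amounts to feeding documents from both sources into one inverted index. A minimal Python sketch of that idea follows; the document ids and texts are invented for illustration and are not from the thesis.

```python
from collections import defaultdict

def build_index(documents):
    """Build one inverted index over documents from any source.

    `documents` maps a document id (e.g. a URL or a local file path)
    to its text; web and personal documents share the same index.
    """
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Web pages and personal files indexed side by side:
docs = {
    "http://example.org/page": "search engines crawl the web",
    "/home/user/notes.txt": "personal notes on search strategy",
}
idx = build_index(docs)
print(search(idx, "search"))  # both documents match
```

Because both sources share one index, a single query transparently reaches web and personal documents alike, which is exactly the unification the abstract describes.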
44

Evaluation of Internet search tools instrument design

Saunders, Tana 03 1900 (has links)
Thesis (MPhil)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: This study investigated Internet search tools/engines to identify desirable features that can be used as a benchmark or standard for evaluating web search engines. In the past, the Internet was thought of as a big spider's web, ultimately connecting all the bits of information. It has now become clear that this is not the case, and that the bow-tie analogy is more accurate. This analogy suggests that there is a central core of well-connected pages, with links IN and OUT to other pages, tendrils and orphan pages. This emphasizes the importance of selecting a search tool that is well connected and linked to the central core. Searchers must take into account that not all search tools search the Invisible Web, and this should inform the choice of search tool. Not all information found on the Web and Internet is reliable, current and accurate, and Web information must be evaluated in terms of authority, currency, bias, purpose of the Web site, etc.

Different kinds of search tools are available on the Internet, such as search engines, directories, library gateways, portals and intelligent agents. These search tools were studied and explored, and a new categorization for online search tools is suggested, consisting of Intelligent Agents, Search Engines, Directories and Portals/Hubs. This categorization distinguishes the major differences between the 21 kinds of search tools studied. Search tools/engines consist of spiders, crawlers, robots, indexes and search-tool software. They can be further distinguished by their scope, by internal versus external searches, and by whether they search Web pages or Web sites. Most search tools operate in a relationship with other search tools, often sharing results, spiders and databases; this relationship is very dynamic.

The major international search engines have identifiable search features. The features of Google, Yahoo, Lycos and Excite were studied in detail. Search engines search for information in different ways and present their results differently; these characteristics are critical to the precision/recall ratio. A well-planned search strategy that takes the web user's capabilities and needs into account will improve this ratio. Internet search tools/engines are not a panacea for all information needs, and have both pros and cons. The Internet search tool evaluation instrument was developed based on desirable features of the major search tools, and is proposed as a benchmark or standard for Internet search tools. Applied to three South African search tools, the instrument provided insight into the capabilities of the local search tools compared with the benchmark suggested in this study. The study concludes that the local search engines compare favorably with the major ones, but not sufficiently to justify using them exclusively; further research into this aspect is needed. Intelligent agents are likely to become more popular, but the only certainty in the future of Internet search tools is change, change, and change.
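The precision/recall ratio this abstract refers to can be computed directly from a query's result set. A minimal sketch follows; the document ids and counts are illustrative examples, not data from the study.

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    precision = |retrieved ∩ relevant| / |retrieved|
    recall    = |retrieved ∩ relevant| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query: 4 results returned, 3 of them relevant,
# out of 6 relevant documents in the whole collection.
p, r = precision_recall(
    {"d1", "d2", "d3", "d4"},
    {"d1", "d2", "d3", "d5", "d6", "d7"},
)
print(p, r)  # 0.75 0.5
```

The trade-off the abstract alludes to is that broadening a query tends to raise recall at the cost of precision, which is why a well-planned strategy matters.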
45

Visibility of e-commerce websites to search engines: a comparison between text-based and graphic-based hyperlinks

Ngindana, Mongezi January 2006 (has links)
Dissertation submitted in partial fulfilment of the requirements for the degree Magister Technologiae in Information Technology in the Faculty of Business Informatics at the Cape Peninsula University of Technology, 2006. / Research has shown that most website developers first build a website and only later focus on its ‘searchability’ and ‘visibility’. Companies spend large amounts of money on developing websites that cannot be indexed by search engines, are rejected by directory editors, and are invisible to crawlers. The primary objective of this dissertation is to compare and report on the impact of text-based versus graphic-based hyperlinks on website visibility. The method employed was to develop two e-Commerce websites with the same functionality, content and keywords but with different navigation schemes: one website had all hyperlinks coded as text phrases, while the other embedded the hyperlinks in graphics. Both websites were submitted to the same search engines at the same time, and a period of eight months was allowed so that the websites drew sufficient ‘hits’ to enable a comparative analysis. Two industry-standard website ranking programs were used to monitor how the two websites featured in the search engine rankings; graphs and text-based reports produced by these programs, together with the t-test, were used to compare and analyse the results. The reviewed literature contains conflicting reports on the impact of text as opposed to graphic hyperlinks on website visibility, along with unsupported claims that text hyperlinks achieve higher rankings than graphic-based ones. Although ‘human website browsers’ find a certain amount of graphical aids conducive to easier navigation, ‘search engine crawlers’ find many of these same graphic aids impossible to index.

The study found that the graphic-based website ranked higher than the text-based website, which calls for a balance between these two extremes, one that satisfies both ‘human website browsers’ and ‘search engine crawlers’. The author posits that this dissertation provides website designers with the ability to achieve such a balance. KEYWORDS: search engines, hyperlinks, text, graphics, visibility, navigation, e-commerce, design.
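The visibility gap between the two link styles comes down to anchor text: a crawler can index the words inside a text hyperlink, but an image-only hyperlink exposes no text at all. A small illustration using Python's standard `html.parser` follows; the markup and URLs are invented for the example.

```python
from html.parser import HTMLParser

class AnchorTextExtractor(HTMLParser):
    """Collect the text a crawler could index from each hyperlink."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.anchors = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.anchors.append("")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.anchors[-1] += data.strip()

def anchor_texts(html):
    parser = AnchorTextExtractor()
    parser.feed(html)
    return parser.anchors

text_link = '<a href="/shoes">discount running shoes</a>'
graphic_link = '<a href="/shoes"><img src="shoes.png"></a>'

print(anchor_texts(text_link))     # ['discount running shoes']
print(anchor_texts(graphic_link))  # [''] - no indexable keywords
```

The text-based link hands the crawler three keywords for free; the graphic-based link yields an empty string, which is the indexing disadvantage the dissertation measures.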
46

The effect webpage body keywords location has on ranking in search engines results: an empirical study

Kritzinger, Wouter Thomas January 2005 (has links)
Dissertation submitted in partial (50%) fulfilment of the requirements for the degree Magister Technologiae in Business Information Systems in the Faculty of Business Informatics at the Cape Peninsula University of Technology, 2005. / The growth of the World Wide Web has spawned a wide collection of new information sources and has left users with the daunting task of determining which sources are valid. Most users rely on the web because of the low cost of information retrieval; other advantages include convenience in terms of time and access, as well as the ability to easily record results. The web is also claimed to have evolved into a powerful business tool, as highly popular business services such as Amazon.com and Kalahari.net illustrate. An estimated 80% of users utilise search engines to locate information on the Internet, which places emphasis on the underlying importance of webpages being listed in search engine indices. It is in the interest of any company to pursue a strategy for ensuring a high search engine ranking for its e-Commerce website, as this results in more visits from users and possibly more sales. One such strategy is the placement of keywords in the body-text section of a webpage. Empirical evidence that the placement of keywords in certain areas of the body text influences a website's visibility to search engines could not be found. The author set out to prove or disprove that keywords in the body text of a webpage have a measurable effect on the visibility of a website to search engine crawlers. From the findings of this research it will be possible to create a guide for e-Commerce website authors on the usage, placement and density of keywords within their websites. Although it focuses on only one aspect of search engine visibility, this guide could help e-Commerce websites attract more visitors and become more profitable.
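Keyword density, one of the factors such a guide would cover, is simply the share of body-text words that match a given keyword. A minimal sketch follows; the sample text and keyword are illustrative, and the thesis does not prescribe this exact formula.

```python
import re

def keyword_density(body_text, keyword):
    """Fraction of body-text words equal to the given keyword."""
    words = re.findall(r"[a-z0-9]+", body_text.lower())
    if not words:
        return 0.0
    matches = sum(1 for w in words if w == keyword.lower())
    return matches / len(words)

# Hypothetical body text: 12 words, 3 of them the keyword.
body = "Buy shoes online today. Our shoes ship fast, and shoes are cheap."
print(round(keyword_density(body, "shoes"), 2))  # 0.25
```

A tool like this lets an author check whether a target keyword appears often enough to register with crawlers without tipping into keyword stuffing.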
47

Development of a search engine marketing model using the application of a dual strategy

Kritzinger, Wouter Thomas January 2017 (has links)
Thesis (DTech (Informatics))--Cape Peninsula University of Technology, 2017. / Any e-commerce venture using a website as its main shop-front should invest in marketing that website. Previous empirical evidence shows that most Search Engine Marketing (SEM) spending (approximately 82%) is allocated to Pay Per Click (PPC) campaigns, while only 12% is spent on Search Engine Optimisation (SEO); the remaining 6% goes to other SEM strategies. No empirical work was found on how marketing expenses compare when used solely for one or the other of the two main types of SEM. In this study, a model will be designed to guide the development of a dual SEM strategy.
48

Vyhledávání informací na internetu a jeho trendy a směry / Internet search trends

Bjačková, Barbora January 2013 (has links)
Internet search has changed significantly since its beginnings, and it has also changed the way information is retrieved. At first, network search tools were created; the greater development of internet search tools came after the creation of the Web. Among the first internet search tools were web directories such as Yahoo! and the Open Directory Project. Nowadays, web search engines are the most commonly used. Apart from general web search engines, there are also engines specialised for a particular aim or function, such as DuckDuckGo (focused on privacy), Yandex and Seznam.cz (focused on specific regions), and the computational search engine WolframAlpha. Multimedia search and search adapted for mobile devices are technology trends in the field of internet search; personalisation, localisation and social search are among the contemporary trends, and semantic search is another long-standing one.
49

Cluster-based relevance feedback techniques for web searches

Deng, Ziqiang 01 January 1998 (has links)
No description available.
50

Detecting Internet visual plagiarism in higher education photography with Google™ Search by Image : proposed upload methods and system evaluation

Van Heerden, Leanri. January 2014 (has links)
Thesis (M. Tech. (Design and Studio Art))--Central University of Technology, Free State, 2014. / The Information Age has presented the discipline of photography with many advantages: digital photographers enjoy the perks of convenience while still producing high-quality images. Lecturers, meanwhile, find themselves the authorities of increasingly archaic knowledge in a perpetual race to keep up with technology. When inspiration becomes imitation and visual plagiarism occurs, lecturers may find themselves at a loss to take action, as content-based image retrieval systems like Google™ Search by Image (SBI) have not yet been systematically tested for the detection of visual plagiarism. There is currently no proven method available to photography lecturers in higher education for detecting visual plagiarism. The aim of this study is therefore to ascertain the most effective upload methods and the precision of the Google™ SBI system, so that lecturers can establish a systematic workflow to combat visual plagiarism in photography programmes. Images were selected from the Google™ Images database by random sampling and uploaded to Google™ SBI to determine whether the system can match the images to their Internet source. Each image received a black-and-white conversion, a contrast adjustment and a hue shift, to ascertain whether the system can also match altered images. Composite images were compiled to establish whether the system can detect images from their salient features. Results were recorded and precision values calculated to determine the system's success rate and accuracy. The results were favourable: 93.25% of the adjusted images retrieved results, with a precision value of 0.96, and the composite images had a success rate of 80% when uploaded intact with no dissections, with a perfect precision value of 1.00.

Google™ SBI can therefore be used by the photography lecturer as a functional visual plagiarism detection system to match images unethically appropriated by students from the Internet.
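The finding that contrast-adjusted images are still matched is consistent with how perceptual hashes behave. Google does not disclose the algorithm behind SBI, so the following uses average hashing purely as an illustrative stand-in: a linear contrast change does not alter which pixels lie above the image mean, so the hash is unchanged.

```python
def average_hash(pixels):
    """Perceptual 'average hash': one bit per pixel, set when the
    pixel is brighter than the image mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Tiny 'image' as a flat list of grayscale values.
original = [10, 200, 50, 180, 30, 220, 90, 160]

# Linear contrast adjustment: scale and shift every pixel.
adjusted = [int(1.5 * p) + 20 for p in original]

print(average_hash(original) == average_hash(adjusted))  # True
```

This invariance explains, at least in outline, why simple tonal edits by a student do not defeat a content-based image retrieval system, while heavier edits such as composites can.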
