51.
Web search engines as teaching and research resources: a perceptions survey of IT and CS staff from selected universities of the KwaZulu-Natal and Eastern Cape provinces of South Africa
Tamba, Paul A. Tamba, January 2011
A dissertation submitted in fulfillment of the requirements for the degree of Master in Technology: Information Technology, Durban University of Technology, 2011. / This study examines the perceived effect of the following factors on the web searching ability of academic staff in the computing discipline: demographic attributes such as gender, age group, position held, and highest qualification; lecturing experience; research experience; English language proficiency; and web searching experience. The research objectives are achieved using a Likert-scale questionnaire administered to 61 academic staff from Information Technology and Computer Science departments at four universities in the KwaZulu-Natal and Eastern Cape provinces of South Africa. Descriptive and inferential statistics were computed on the questionnaire data after performing reliability and validity tests using factor analysis and Cronbach's coefficient in PASW Statistics 18.0 (SPSS).
Descriptive statistics revealed a majority of staff from IT as compared to CS, and a majority of under-qualified, middle-aged male staff in junior positions with considerable years of lecturing experience but little research experience. Inferential statistics show an association between web searching ability and demographic attributes such as academic qualification, position, and years of research experience, and also reveal a relationship between web searching ability and lecturing experience, and between web searching ability and English language ability. However, the association between position, English language ability, and searching ability was found to be the strongest of all.
The novel finding of this study is the effect of lecturing experience on web searching ability, which has not been claimed by the existing research reviewed. Ideas for future research include mentoring of academic staff by more experienced colleagues, training of novice web searchers, designing and using semantic search systems in both English and local languages, publishing more web content in local languages, and triangulating various research strategies for the analysis of the usability of web search engines.
52.
Learning an integrated hybrid image retrieval system
Jing, Yushi, 06 January 2012
Current Web image search engines, such as Google or Bing Images, adopt a hybrid search approach in which a text-based query (e.g.
"apple") is used to retrieve a set of relevant images, which are then refined by the user (e.g. by re-ranking the retrieved images based on similarity to a selected example). This approach makes it possible to use both text information (e.g. the initial query) and image features (e.g. as part of the refinement stage) to identify images which are relevant to the user. One limitation of these current systems is that text and image features are treated as independent components and are often used in a decoupled manner.
This work proposes to develop an integrated hybrid search method which leverages the synergies between text and image features.
Recently, there has been tremendous progress in the computer vision community in learning models of visual concepts from collections of example images. While impressive performance has been achieved on standardized data sets, scaling these methods so that they are capable of working at web scale remains a significant challenge. This work will develop approaches to visual modeling that can be scaled to address the task of retrieving billions of images on the Web.
Specifically, we propose to address two research issues related to integrated text- and image-based retrieval. First, we will explore whether models of visual concepts which are learned from collections of web images can be utilized to improve the image ranking associated with a text-based query. Second, we will investigate the hypothesis that the click-patterns associated with standard web image search engines can be utilized to learn query-specific image similarity measures that support improved query-refinement performance. We will evaluate our research by constructing a prototype integrated hybrid retrieval system based on the data from 300K real-world image queries. We will conduct user-studies to evaluate the effectiveness of our learned similarity measures and quantify the benefit of our method in real world search tasks such as target search.
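The query-refinement stage described above, re-ranking text-retrieved images by visual similarity to a user-selected example, can be sketched in miniature. The feature vectors, URLs, and the choice of cosine similarity here are illustrative assumptions, not the system proposed in the dissertation:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank_by_example(candidates, example_features):
    """Re-rank text-retrieved candidates by visual similarity to a
    user-selected example image (the refinement step)."""
    return sorted(candidates,
                  key=lambda img: cosine(img["features"], example_features),
                  reverse=True)

# Hypothetical candidates returned by a text query such as "apple":
candidates = [
    {"url": "img1", "features": [0.9, 0.1, 0.0]},  # red fruit
    {"url": "img2", "features": [0.1, 0.9, 0.2]},  # company logo
    {"url": "img3", "features": [0.8, 0.2, 0.1]},  # red fruit
]
selected = [1.0, 0.0, 0.0]  # user clicked a red-fruit image
ranked = rerank_by_example(candidates, selected)
```

An integrated system, as proposed here, would couple this visual step with the text features rather than applying it as a decoupled post-process.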
53.
Surfing for knowledge: how undergraduate students use the internet for research and study purposes.
Phillips, Genevieve, January 2013
The developments in technology and concomitant access to the Internet have reshaped the way people research in their personal and academic lives. The ever-expanding amount of information on the Internet is creating an environment where users are able to find what they seek, add to the body of knowledge, or both. Researching, especially for academic purposes, has been greatly affected by the Internet's rapid growth and expansion. This project stemmed from a desire to understand how students' research methods have evolved, taking into account their busy schedules and needs. The availability and accessibility of the Internet have increased its use considerably as a straightforward medium from which users obtain desired information. This thesis sought to ascertain in what manner senior undergraduate students at the University of KwaZulu-Natal, Pietermaritzburg campus, use the Internet for academic research purposes, which is largely determined by the individual's personal preference and access to the Internet. The review of relevant literature raised pertinent questions that required answers. Students were interviewed to determine when, why and how they began using the Internet, and how this usage contributes to their academic work, that is, whether it aids or inhibits students' research. Through the collection and analysis of data, evidence emerged that most students followed contemporary research methods, making extensive use of the Internet, while a few used both electronic and print resources, unless compelled to do so by lecturers' assignment requirements. As a secondary phase, informed by the results received from the students, lecturers were interviewed. Differing levels of restrictions on students were evident, even though the lecturers themselves use the Internet for academic research purposes. Lecturers were convinced they had the understanding and experience to discern what was relevant and factual. Referring to the Internet for research is becoming more popular, and this should continue to increase as students' lives become more complex. A suggestion offered by this research project to academic staff is to equip students from their early university years with the standards they should follow in order to research correctly, as opposed to limiting their use of the Internet, which leads in part to students committing plagiarism through being unaware of the wealth of reputable resources available for their use and benefit on the Internet. / Thesis (M.A.)-University of KwaZulu-Natal, Pietermaritzburg, 2013.
54.
Building a search engine for music and audio on the World Wide Web
Knopke, Ian, January 2005
The main contribution of this dissertation is a system for locating and indexing audio files on the World Wide Web. The idea behind this system is that the use of both web page and audio file analysis techniques can produce more relevant information for locating audio files on the web than is used in full-text search engines. / The most important part of this system is a web crawler that finds materials by following hyperlinks between web pages. The crawler is distributed and operates using multiple computers across a network, storing results to a database. There are two main components: a set of retrievers that retrieve pages and audio files from the web, and a central crawl manager that coordinates the retrievers and handles data storage tasks. / The crawler is designed to locate three types of audio files: AIFF, WAVE, and MPEG-1 (MP3), but other types can easily be added to the system. Once audio files are located, analyses are performed on both the audio files and the associated web pages that link to these files. Information extracted by the crawler can be used to build search indexes for resolving user queries. A set of results demonstrating aspects of the crawler's performance is presented, as well as some statistics and points of interest regarding the nature of audio files on the web.
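The retrievers' first decision, whether a hyperlink points to one of the three target audio formats or to an ordinary page to follow, can be sketched as a simple extension lookup. The extension table and helper names below are assumptions for illustration; the actual system also analyses file contents, and the table is the point where "other types can easily be added":

```python
from urllib.parse import urlparse

# Extensions for the three audio formats the crawler targets; further
# formats could be registered here.
AUDIO_TYPES = {
    ".aiff": "AIFF", ".aif": "AIFF",
    ".wave": "WAVE", ".wav": "WAVE",
    ".mp3": "MPEG-1",
}

def classify_link(url):
    """Return the audio format for a hyperlink, or None for ordinary pages."""
    path = urlparse(url).path.lower()
    for ext, fmt in AUDIO_TYPES.items():
        if path.endswith(ext):
            return fmt
    return None

def partition_links(links):
    """Split crawled hyperlinks into pages to follow and audio files to index."""
    pages, audio = [], []
    for url in links:
        fmt = classify_link(url)
        (audio if fmt else pages).append((url, fmt))
    return pages, audio
```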
55.
Websites are capable of reflecting a particular human temperament: fact or fad?
Theron, Annatjie, January 2008
Thesis (MIT(Informatics))--University of Pretoria, 2008. / Abstract in English and Afrikaans. Includes bibliographical references.
56.
Search engine poisoning and its prevalence in modern search engines
Blaauw, Pieter, January 2013
The prevalence of Search Engine Poisoning in trending topics and popular search terms within web search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, as well as an examination of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with them are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google's search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack, and the parties behind it.
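The shape of an automated check of the kind described, passing the top results for a trending topic through a detector, can be sketched as a domain lookup against a blocklist. The blocklist and URLs below are hypothetical, and the study used dedicated scanning tools rather than a static list:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to serve malware; a real pass
# would query a scanning service instead of a static set.
MALICIOUS_DOMAINS = {"evil-pharma.example", "fake-codec.example"}

def flag_poisoned(results):
    """Return (rank, url) pairs for search results whose domain appears
    on the malware blocklist."""
    flagged = []
    for rank, url in enumerate(results, start=1):
        host = urlparse(url).hostname or ""
        if host in MALICIOUS_DOMAINS:
            flagged.append((rank, url))
    return flagged

# Hypothetical top results for a trending topic:
results = [
    "http://news.example/story",
    "http://fake-codec.example/download",
    "http://blog.example/post",
]
```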
57.
Search engine strategies: a model to improve website visibility for SMME websites
Chambers, Rickard, January 2005
THESIS submitted in fulfilment of the requirements for the degree MAGISTER TECHNOLOGIAE in INFORMATION TECHNOLOGY in the FACULTY OF BUSINESS INFORMATICS at the CAPE PENINSULA UNIVERSITY OF TECHNOLOGY, 2005 / The Internet has become the fastest growing technology the world has ever seen. It also has the ability to permanently change the face of business, including e-business. The Internet has become an important tool required to gain potential competitiveness in the global information environment. Companies could improve their levels of functionality and customer satisfaction by adopting e-commerce, which ultimately could improve their long-term profitability. Those companies that do adopt the use of the Internet often fail to gain the advantage of providing a visible website. Research has also shown that even though the web provides numerous opportunities, the majority of SMMEs (small, medium and micro enterprises) are often ill-equipped to exploit the web's commercial potential. It was determined in this research project, through the analysis of 300 websites, that only 6.3% of SMMEs in the Western Cape Province of South Africa appear within the top 30 results of six search engines when searching for services/products. This lack of ability to produce a visible website is believed to be due to the lack of education and training, financial support and availability of time prevalent in SMMEs. For this reason a model was developed to facilitate the improvement of SMME website visibility. To develop the visibility model, this research project was conducted to identify potential elements which could provide a possible increase in website visibility. A criteria list of these elements was used to evaluate a sample of websites, to determine to what extent they made use of these potential elements. An evaluation was then conducted with 144 different SMME websites by searching for nine individual keywords within four search engines (Google, MSN, Yahoo, Ananzi), and using the first four results for every keyword from every search engine for analysis. Elements gathered from the academic literature were then ranked according to their usage in the top-ranking websites when searching for predetermined keywords. Further qualitative research was conducted to triangulate the data gathered from the literature and the quantitative research. The evaluative results provided the researcher with possible elements and design techniques to formulate a model for developing a visible website that is supported not only by established research, but also by real current applications. The research concluded that, as time progresses and technology improves, new ways to improve website visibility will evolve. Furthermore, there is no quick method for businesses to produce a visible website, as there are many aspects that should be considered when developing "visible" websites.
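Scoring a website against a criteria list of visibility elements, as described above, can be sketched as a set of per-element checks over a page's HTML. The four elements shown are illustrative assumptions, not the model's actual criteria list:

```python
# Illustrative criteria list; the model's actual elements were derived
# from the literature and the analysis of top-ranking sites.
CRITERIA = {
    "title":            lambda html: "<title>" in html.lower(),
    "meta_description": lambda html: 'name="description"' in html.lower(),
    "h1_heading":       lambda html: "<h1" in html.lower(),
    "alt_text":         lambda html: "alt=" in html.lower(),
}

def visibility_score(html):
    """Return (fraction of criteria elements present, per-element hits)."""
    hits = {name: check(html) for name, check in CRITERIA.items()}
    return sum(hits.values()) / len(hits), hits

# A toy page that satisfies three of the four criteria (no alt text):
page = ('<html><head><title>Widgets</title>'
        '<meta name="description" content="Widget shop"></head>'
        '<body><h1>Widgets</h1></body></html>')
score, hits = visibility_score(page)
```

A full evaluation would apply this scoring to each sampled site and compare scores against observed search-engine rankings.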
58.
An exploratory study of techniques in passive network telescope data analysis
Cowie, Bradley, January 2013
Careful examination of the composition and concentration of malicious traffic in transit on the channels of the Internet provides network administrators with a means of understanding and predicting damaging attacks directed towards their networks. This allows for action to be taken to mitigate the effect that these attacks have on the performance of their networks and the Internet as a whole, by readying network defences and providing early warning to Internet users. One approach to malicious traffic monitoring that has garnered some success in recent times, as exhibited by the study of fast-spreading Internet worms, involves analysing data obtained from network telescopes. While some research has considered using measures derived from network telescope datasets to study large-scale network incidents such as Code Red, SQL Slammer and Conficker, there is very little documented discussion on the merits and weaknesses of approaches to analysing network telescope data. This thesis is an introductory study in network telescope analysis and aims to consider the variables associated with the data received by network telescopes and how these variables may be analysed. The core research of this thesis considers both novel and previously explored analysis techniques from the fields of security metrics, baseline analysis, statistical analysis and technical analysis as applied to network telescope datasets. These techniques were evaluated as approaches to recognising unusual behaviour by observing their ability to identify notable incidents in network telescope datasets.
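One baseline-analysis technique of the kind surveyed, flagging days whose traffic spikes above a moving-average baseline of the preceding days, can be sketched briefly. The packet counts, window size and threshold factor below are hypothetical:

```python
def flag_anomalies(daily_packets, window=3, factor=2.0):
    """Flag day indices whose packet count exceeds `factor` times the
    moving-average baseline of the preceding `window` days."""
    flags = []
    for i in range(window, len(daily_packets)):
        baseline = sum(daily_packets[i - window:i]) / window
        if daily_packets[i] > factor * baseline:
            flags.append(i)
    return flags

# Hypothetical daily packet counts at a telescope; day 5 spikes,
# as might be seen during a fast-spreading worm outbreak.
counts = [100, 110, 95, 105, 102, 480, 120]
```

A real evaluation would compare such flags against the dates of known incidents (Code Red, SQL Slammer, Conficker) in the telescope dataset.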
59.
A multi-agent collaborative personalized web mining system model.
Oosthuizen, Ockmer Louren, 02 June 2008
The Internet and World Wide Web (WWW) have in recent years grown exponentially, both in size and in terms of the volume of information available. In order to deal effectively with the huge amount of information on the web, so-called web search engines have been developed for the task of retrieving useful and relevant information for their users. Unfortunately, these web search engines have not kept pace with the booming growth and commercialization of the web. The main goal of this dissertation is the development of a model for a collaborative personalized meta-search agent (COPEMSA) system for the WWW. This model will enable the personalization of web search for users. Furthermore, the model aims to leverage current search engines on the web, as well as enable collaboration between users of the search system for the purpose of sharing useful resources. The model also employs multiple intelligent agents and web content mining techniques. This enables the model to autonomously retrieve useful information for its user(s) and present this information in an effective manner. To achieve the above, COPEMSA consists of five core components: a user agent, a query agent, a community agent, a content mining agent and a directed web spider. The user agent learns about the user in order to introduce personal preference into user queries. The query agent is a scaled-down meta-search engine with the task of submitting the personalized queries it receives from the user agent to multiple search services on the WWW. The community agent enables the search system to communicate and leverage the search experiences of a community of searchers. The content mining agent is responsible for the analysis of the results retrieved from the WWW and the presentation of these results to the system user.
Finally, a directed web spider is used by the content mining agent to retrieve the actual web pages it analyzes from the WWW. In this dissertation an additional model is also presented to deal with a specific problem all web spidering software must handle, namely content and link encapsulation. / Prof. E.M. Ehlers
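The user-agent and query-agent steps, expanding a query with learned preferences, fanning it out to several search services, and merging the ranked lists, can be sketched as follows. The preference expansion and the reciprocal-rank-fusion merge are illustrative assumptions, not necessarily the strategy COPEMSA uses:

```python
def personalize(query, preferences):
    """User-agent step: expand the raw query with learned preference terms."""
    return query + " " + " ".join(preferences)

def merge_results(result_lists, k=60):
    """Query-agent step: fuse ranked lists from several engines using
    reciprocal-rank fusion (one of many possible merge strategies)."""
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked responses from two underlying search services:
engine_a = ["u1", "u2", "u3"]
engine_b = ["u2", "u4", "u1"]
merged = merge_results([engine_a, engine_b])
```

A result that appears highly ranked in both lists (here "u2") wins over one ranked highly by only a single engine, which is the behaviour a meta-search merge is after.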
60.
Building a search engine for music and audio on the World Wide Web
Knopke, Ian, January 2005
No description available.