91. A personalised query expansion approach using context / Seher, Indra. January 2007
Thesis (Ph.D.)--University of Western Sydney, 2007. / A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy to the College of Health & Science, School of Computing and Mathematics, University of Western Sydney. Includes bibliography.

92. Search engine poisoning and its prevalence in modern search engines / Blaauw, Pieter. January 2013
The prevalence of Search Engine Poisoning in trending topics and popular search terms within web search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to give the reader a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, as well as an examination of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with them are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google's search engine, along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.

93. Search engine strategies: a model to improve website visibility for SMME websites / Chambers, Rickard. January 2005
Thesis submitted in fulfilment of the requirements for the degree Magister Technologiae in Information Technology, Faculty of Business Informatics, Cape Peninsula University of Technology, 2005. / The Internet has become the fastest-growing technology the world has
ever seen. It also has the ability to permanently change the face of
business, including e-business. The Internet has become an important
tool required to gain potential competitiveness in the global information
environment. Companies could improve their levels of functionality and
customer satisfaction by adopting e-commerce, which ultimately could
improve their long-term profitability.
Companies that do adopt the Internet, however, often
fail to gain the advantage of a visible website. Research has
also shown that even though the web provides numerous opportunities,
the majority of SMMEs (small, medium and micro enterprises) are often
ill-equipped to exploit the web’s commercial potential. Through the
analysis of 300 websites, this research project determined that only
6.3% of SMMEs in the Western Cape Province of South Africa appear
within the top 30 results of six search engines when searching for
services/products.
This lack of ability to produce a visible website is believed to be due to
the lack of education and training, financial support and availability of
time prevalent in SMMEs. For this reason a model was developed to
facilitate the improvement of SMME website visibility.
To develop the visibility model, this research project was conducted to
identify potential elements which could provide a possible increase in
website visibility. A criteria list of these elements was used to evaluate a
sample of websites, to determine to what extent they made use of these
potential elements.
An evaluation was then conducted with 144 different SMME websites by
searching for nine individual keywords within four search engines
(Google, MSN, Yahoo, Ananzi), and using the first four results of every
keyword from every search engine for analysis. Elements gathered
through academic literature were then listed according to the usage of
these elements in the top-ranking websites when searching for
predetermined keywords. Further qualitative research was conducted to
triangulate the data gathered from the literature and the quantitative
research.
The evaluative results provided the researcher with possible elements
and design techniques to formulate a model for developing a visible
website, supported not only by sound research but also by real current
applications. The research concluded that, as time progresses and
technology improves, new ways to improve website visibility will
evolve. Furthermore, there is no quick method for businesses to
produce a visible website, as there are many aspects that must be
considered when developing “visible” websites.

94. An exploratory study of techniques in passive network telescope data analysis / Cowie, Bradley. January 2013
Careful examination of the composition and concentration of malicious traffic in transit on the channels of the Internet provides network administrators with a means of understanding and predicting damaging attacks directed towards their networks. This allows for action to be taken to mitigate the effect that these attacks have on the performance of their networks and the Internet as a whole, by readying network defences and providing early warning to Internet users. One approach to malicious traffic monitoring that has garnered some success in recent times, as exhibited by the study of fast-spreading Internet worms, involves analysing data obtained from network telescopes. While some research has considered using measures derived from network telescope datasets to study large-scale network incidents such as Code-Red, SQL Slammer and Conficker, there is very little documented discussion of the merits and weaknesses of approaches to analysing network telescope data. This thesis is an introductory study in network telescope analysis and aims to consider the variables associated with the data received by network telescopes and how these variables may be analysed. The core research of this thesis considers both novel and previously explored analysis techniques from the fields of security metrics, baseline analysis, statistical analysis and technical analysis as applied to network telescope datasets. These techniques were evaluated as approaches to recognising unusual behaviour by observing their ability to identify notable incidents in network telescope datasets.
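One of the baseline-analysis techniques the thesis evaluates can be illustrated with a minimal sketch: flagging days whose telescope packet count deviates sharply from a rolling baseline. The window size, threshold and traffic counts below are illustrative assumptions, not values from the study.

```python
import statistics

def flag_anomalies(daily_counts, window=7, threshold=3.0):
    """Flag days whose packet count deviates from a rolling baseline.

    daily_counts: packets per day observed by the telescope (illustrative).
    Returns indices of days more than `threshold` standard deviations
    above the mean of the preceding `window` days.
    """
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and (daily_counts[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# a quiet week followed by a sudden spike, as a fast-spreading worm might cause
counts = [100, 105, 98, 102, 99, 101, 103, 500]
print(flag_anomalies(counts))  # → [7]
```

Real telescope incidents such as Conficker would, of course, call for richer features (source-IP diversity, destination ports) than a raw packet count.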

95. A usability study of a language centre web site / Morrall, Andrew J. January 2002
Education / Master of Science in Information Technology in Education

96. Internet a analýza chování českého internetového uživatele (The Internet and an analysis of Czech Internet user behaviour) / Kočí, Michal. January 2010
This diploma thesis describes the Internet as a medium which offers a wide spectrum of services, information and possibilities, and as a powerful tool for finding information. This modern medium is specified, characterised, described and defined. The thesis indicates what the Internet comprises and what services it offers, and outlines the main concepts and options for searching and acquiring information. The practical part of the thesis covers the realisation and subsequent evaluation of a questionnaire investigation characterising the behaviour of Czech Internet users. The goal is to prepare a suitable questionnaire on the behaviour of Internet users, to distribute it, and to evaluate and analyse the modern Czech Internet user using the resulting data and other study material.

97. The mass collaboration of human flesh search in China / Ge, Shuai. January 2011
University of Macau / Faculty of Social Sciences and Humanities / Department of Communication

98. Discovering and Tracking Interesting Web Services / Rocco, Daniel J. (Daniel John). 01 December 2004
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description.
To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping.
Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML.
Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification services, efficient change monitoring of Web information, and compartmentalized document handling. Our service discovery system, DynaBot, is the first to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
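The container separation behind Page Digest can be illustrated with a minimal sketch that splits a document's tags, attributes and text content into separate containers, each inspectable in isolation; the class name and container layout here are illustrative assumptions, not the dissertation's actual encoding.

```python
from html.parser import HTMLParser

class ContainerSplitter(HTMLParser):
    """Illustrative sketch: separate a document into tag, attribute and
    content containers, so structure and content can be examined (or
    monitored for changes) independently of one another."""

    def __init__(self):
        super().__init__()
        self.tags = []      # document structure, in document order
        self.attrs = []     # (tag, attribute list) pairs
        self.content = []   # text nodes only, stripped of markup

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)
        if attrs:
            self.attrs.append((tag, attrs))

    def handle_data(self, data):
        if data.strip():
            self.content.append(data.strip())

p = ContainerSplitter()
p.feed('<html><body><p class="x">Hello</p><p>World</p></body></html>')
print(p.tags)     # → ['html', 'body', 'p', 'p']
print(p.content)  # → ['Hello', 'World']
```

A change monitor interested only in wording, for example, could diff successive `content` containers without re-parsing the markup at all.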

99. Novel computationally intelligent machine learning algorithms for data mining and knowledge discovery / Gheyas, Iffat A. January 2009
This thesis addresses three major issues in data mining: feature subset selection in large-dimensionality domains, plausible reconstruction of incomplete data in cross-sectional applications, and forecasting of univariate time series. For the automated selection of an optimal subset of features in real time, we present an improved hybrid algorithm, SAGA. SAGA combines the ability of Simulated Annealing to avoid being trapped in local minima with the very high convergence rate of the crossover operator of Genetic Algorithms, the strong local search ability of greedy algorithms, and the high computational efficiency of generalized regression neural networks (GRNNs). For imputing missing values and forecasting univariate time series, we propose a homogeneous neural network ensemble. The proposed ensemble consists of a committee of GRNNs trained on different subsets of features generated by SAGA, with the predictions of the base classifiers combined by a fusion rule. This approach makes it possible to discover all important interrelations between the values of the target variable and the input features. The proposed ensemble scheme has two innovative features which make it stand out amongst ensemble learning algorithms: (1) the ensemble makeup is optimized automatically by SAGA; and (2) GRNNs are used both as base classifiers and as the top-level combiner classifier. Because of the GRNN combiner, the proposed ensemble is a dynamic weighting scheme, in contrast to most existing ensemble approaches, which rely on simple voting or static weighting. The basic idea of the dynamic weighting procedure is to give a higher reliability weight to those scenarios that are similar to the new ones. The simulation results demonstrate the validity of the proposed ensemble model.
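A GRNN prediction is essentially Nadaraya-Watson kernel regression, which is what gives the ensemble its dynamic weighting: training cases close to the query point receive exponentially larger weights. A minimal sketch, where the bandwidth sigma and the toy data are illustrative assumptions rather than settings from the thesis:

```python
import math

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN (Nadaraya-Watson) prediction: the output is a weighted
    average of the training targets, with Gaussian-kernel weights on
    the squared distance from the query point, so nearby training
    cases dominate the prediction."""
    weights = []
    for xi in X_train:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total

X = [(0.0,), (1.0,), (2.0,)]
y = [0.0, 1.0, 4.0]
print(round(grnn_predict(X, y, (1.0,)), 3))  # → 1.213
```

The query at x = 1.0 is pulled slightly above its nearest target (1.0) by the more distant training cases, and shrinking sigma would move the prediction closer to pure nearest-neighbour behaviour.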

100. An investigation into the web searching strategies used by postgraduate students at the University of KwaZulu-Natal, Pietermaritzburg campus / Civilcharran, Surika. 01 November 2013
The purpose of this mixed methods study was to investigate the Web search strategies used to
retrieve information from the Web by postgraduate students at the University of KwaZulu-Natal,
Pietermaritzburg campus in order to address the weaknesses of undergraduate students with regard
to their Web searching strategies. The study attempted to determine the Web search tactics used by
postgraduate students, the Web search strategies (i.e. combinations of tactics) they used, how they
determined whether their searches were successful and the search tool they preferred. In addition,
the study attempted to contribute toward building a set of best practices when searching the Web.
The sample population consisted of 331 postgraduate students, yielding a response rate of 95%.
The study involved a two-phased approach, adopting a survey in Phase 1 and interviews in
Phase 2. Proportionate stratified random sampling was used and the population was divided into
five mutually exclusive groups (i.e., postgraduate diploma, postgraduate certificate, Honours,
Master’s and PhD). A pre-test was conducted with ten postgraduate students from the
Pietermaritzburg campus. The study revealed that the majority of postgraduate students have been
searching the Web for six years or longer and that most postgraduate students searched the Web for
information from five to less than ten hours a week. Most respondents gained their knowledge on
Web searching through experience and only a quarter of the respondents have been given formal
training on Web searching. The Web searching strategies explored contribute to the best practices
with regard to Web search strategies, as interviewees were selected based on the highest number of
search tactics used and they have several years of searching experience. The study was also able to
identify the most preferred Web search tool. It is envisaged that undergraduate students can
potentially follow these search strategies to improve their information retrieval. This finding could
also be beneficial to librarians in developing training modules that assist undergraduate students to
use these Web search tools more efficiently. The final outcome of the study was an adaptation of
Bates’ (1979) model of Information Search Tactics to suit information searching on the Web. / Thesis (M.Com.)-University of KwaZulu-Natal, Pietermaritzburg, 2012.
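Proportionate stratified random sampling, as used in Phase 1, allocates the total sample across the five degree strata in proportion to each stratum's size. A minimal sketch, using hypothetical stratum sizes (the study's actual group sizes are not reported here):

```python
def proportionate_allocation(strata_sizes, sample_size):
    """Allocate a total sample across strata in proportion to stratum
    size, using largest-remainder rounding so allocations sum exactly
    to sample_size."""
    total = sum(strata_sizes.values())
    raw = {k: sample_size * n / total for k, n in strata_sizes.items()}
    alloc = {k: int(v) for k, v in raw.items()}  # truncate to integers
    # hand leftover units to the strata with the largest fractional remainders
    leftover = sample_size - sum(alloc.values())
    for k in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# hypothetical stratum sizes for the five postgraduate groups
population = {"PG diploma": 150, "PG certificate": 50, "Honours": 350,
              "Master's": 300, "PhD": 150}
print(proportionate_allocation(population, 331))
```

With these assumed sizes, each stratum receives roughly 33.1% of its members, and the largest-remainder step guarantees the five allocations sum to exactly 331.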