21

An implementation of a DNS-based malware detection system

Fors, Markus; Grahn, Christian. January 2010.
Today’s wide usage of the Internet makes malicious software (malware) and botnets a serious problem. While anti-virus software is commonplace today, malware is constantly evolving to remain undetected. Passively monitoring DNS traffic on a network can provide a platform for detecting malware on multiple computers at low cost and low complexity. To explore this avenue for detecting malware, we decided it was necessary to design an extensible system in which the framework is separate from the actual detection methods. We wanted to divide the system into three parts: one for logging, one for handling detection modules, and one for taking action against suspect traffic. The system we implemented in C collects DNS traffic and processes it with modules that are compiled separately and can be plugged in or out at runtime. Two proof-of-concept modules have been implemented: one based on a blacklist and one based on geolocation of the requested servers. The system is complete to the point of being ready for field testing and for the implementation of more advanced detection modules.
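A minimal sketch of the plug-in mechanism described above, assuming a dlopen-based loader: each detection module is compiled separately as a shared object and loaded at runtime. The module file name and the exported dns_check symbol are illustrative assumptions, not the actual interface of the implemented system.

    /* Hypothetical module interface: each module exports one function that
       inspects a DNS query name and returns nonzero if it looks suspect. */
    #include <stdio.h>
    #include <dlfcn.h>

    typedef int (*dns_check_fn)(const char *qname);

    int main(void)
    {
        /* Load a detection module (e.g. a blacklist checker) at runtime. */
        void *module = dlopen("./blacklist_module.so", RTLD_NOW);
        if (!module) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        dns_check_fn check = (dns_check_fn)dlsym(module, "dns_check");
        if (!check) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(module);
            return 1;
        }

        /* In the real system the query name would come from captured DNS traffic. */
        const char *qname = "suspicious.example.com";
        if (check(qname))
            printf("flagged: %s\n", qname);

        dlclose(module);
        return 0;
    }

Linking the loader with -ldl and building each module with -shared -fPIC would let new detection modules be added without rebuilding the framework.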
22

Exploring Privacy Risks in Information Networks / Att utforska risker mot personlig integritet i informationsnätverk

Jacobsson, Andreas. January 2004.
Exploring privacy risks in information networks means analysing the dangers and hazards related to personal information about the users of a network. It is about investigating the dynamics and complexities of a setting where humans are served by technology in order to exploit the network for their own good. In the information network, malicious activities are motivated by commercial factors, in that attacks on privacy happen not in the name of national security but in the name of the free market together with technological advancement. Based on the assumption of Machiavellian Intelligence, we have modelled our analyses using concepts such as the Arms Race, the Tragedy of the Commons, and the Red Queen effect. In a number of experiments on spam, adware, and spyware, we have found that they match the characteristics of privacy-invasive software, i.e., software that ignores users’ right to decide what, how and when information about themselves is disseminated by others. Spam messages and adware programs suggest a hazard in that they exploit the lives of millions of users with unsolicited commercial and/or political content. In reality, though, spam and adware are rather benign forms of privacy risk, since they do not, for example, collect or transmit user data to third parties. Spyware programs are a more serious form of privacy risk. These programs are usually bundled with, e.g., file-sharing tools and allow the spyware to secretly infiltrate computers in order to collect and distribute personal information and data about the computer to profit-driven third parties on the Internet. In return, adware and spam displaying customised advertisements and offers may be distributed to vast numbers of users. Spyware programs also have the capability of retrieving malicious code, which can make the spyware act like a virus when the file-sharing tools are distributed among the users of a network. In conclusion, spam, spyware and virulent programs invade user privacy. However, our experiments also indicate that privacy-invasive software impairs the security, stability and capacity of computerised systems and networks. Furthermore, we propose a description of the risk environment in information networks, where network contaminants (such as spam, spyware and virulent programs) are put in a context (the information ecosystem) and dynamically modelled by their characteristics, both individually and as a group. We show that network contamination may be a serious threat to the future prosperity of an information ecosystem. It is therefore strongly recommended that network owners and designers respect the privacy rights of individuals. Privacy risks have the potential to overthrow the positive aspects of belonging to an information network. In a sound information network, the flow of personal information is balanced against the advantages of belonging to the network. With an understanding of the privacy risk environment, there is a good starting point for recognising and preventing intrusions into matters of a personal nature. In effect, mitigating privacy risks contributes to a secure and efficient use of information networks.
23

Search engine poisoning and its prevalence in modern search engines

Blaauw, Pieter. January 2013.
The prevalence of Search Engine Poisoning in trending topics and popular search terms within modern search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research conducted between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to give the reader a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is presented, along with the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with them are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google’s search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results had been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
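A minimal, illustrative sketch of the kind of automated check described above (not one of the scanners used in the research): fetch a candidate page with libcurl and flag it if it contains a hidden iframe, a pattern commonly injected by Search Engine Poisoning campaigns. The URL and the single pattern are assumptions made for illustration only.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    struct page { char *data; size_t len; };

    /* libcurl write callback: append each received chunk to a growing buffer */
    static size_t append(void *chunk, size_t size, size_t nmemb, void *userp)
    {
        struct page *p = userp;
        size_t n = size * nmemb;
        char *tmp = realloc(p->data, p->len + n + 1);
        if (!tmp)
            return 0;               /* abort the transfer on allocation failure */
        p->data = tmp;
        memcpy(p->data + p->len, chunk, n);
        p->len += n;
        p->data[p->len] = '\0';
        return n;
    }

    int main(void)
    {
        struct page p = { NULL, 0 };
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Candidate URL; in the research these came from top trending topics. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, append);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &p);

        if (curl_easy_perform(curl) == CURLE_OK && p.data) {
            /* Crude heuristic: a hidden iframe is a common injection pattern. */
            if (strstr(p.data, "<iframe") && strstr(p.data, "display:none"))
                printf("suspicious: hidden iframe found\n");
            else
                printf("no obvious injection pattern found\n");
        }

        curl_easy_cleanup(curl);
        free(p.data);
        return 0;
    }

Real Search Engine Poisoning scanners combine many such signals (redirect chains, cloaking, known exploit-kit URLs) rather than relying on a single string match.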
24

Detection of Spyware by Mining Executable Files

Haider, Syed Imran; Shahzad, Raja M. Khurram. January 2009.
Malicious programs have been a serious threat to the confidentiality, integrity and availability of computer systems, and much research has been done on detecting them. Two approaches have been derived for this, i.e. signature-based detection and heuristic-based detection. These approaches perform well against known malicious programs but cannot catch new ones, so researchers have tried to find new ways of detecting malicious programs. The application of data mining and machine learning is one of them and has shown good results compared to other approaches. A new category of malicious programs, spyware, has gained momentum. Spyware is particularly dangerous for the confidentiality of a user’s private data, which it may collect and send to third parties. Traditional techniques have not performed well in detecting spyware, so there is a need to find new ways of detecting it. Data mining and machine learning have shown promising results in the detection of other malicious programs but had not yet been applied to spyware, so we decided to employ data mining for the detection of spyware. We used a data set of 137 files, containing 119 benign files and 18 spyware files. A theoretical taxonomy of spyware was created, but for the experiment only two classes, benign and spyware, are used. An application, the Binary Feature Extractor, has been developed which extracts features, called n-grams, of different sizes on the basis of common feature-based and frequency-based approaches. The number of features was reduced and the remaining features were used to create an ARFF file, which is used as input to WEKA for applying machine learning algorithms. The algorithms used in the experiment are J48, Random Forest, JRip, SMO, and Naive Bayes. 10-fold cross-validation and the area under the ROC curve are used for the evaluation of classifier performance. We performed experiments on three different n-gram sizes: 4, 5 and 6. Results show that the common-feature extraction approach produced better results than the others. We achieved an overall accuracy of 90.5 % with an n-gram size of 6 using the J48 classifier; the maximum area under the ROC curve achieved was 83.3 % with Random Forest.
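A minimal sketch of byte n-gram extraction, the kind of feature extraction the Binary Feature Extractor performs; the fixed window size and the hex output format are assumptions for illustration, not the authors’ actual tool.

    #include <stdio.h>

    #define NGRAM_SIZE 4   /* the experiments used n-gram sizes 4, 5 and 6 */

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <executable>\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        unsigned char window[NGRAM_SIZE];
        size_t filled = 0;
        int c;

        /* Slide a window of NGRAM_SIZE bytes over the file and print each
           n-gram as a hex string; counting these per class would yield the
           feature table that ends up in the ARFF file. */
        while ((c = fgetc(f)) != EOF) {
            if (filled < NGRAM_SIZE) {
                window[filled++] = (unsigned char)c;
            } else {
                for (size_t i = 1; i < NGRAM_SIZE; i++)
                    window[i - 1] = window[i];
                window[NGRAM_SIZE - 1] = (unsigned char)c;
            }
            if (filled == NGRAM_SIZE) {
                for (size_t i = 0; i < NGRAM_SIZE; i++)
                    printf("%02x", window[i]);
                putchar('\n');
            }
        }

        fclose(f);
        return 0;
    }

Each distinct n-gram becomes a candidate feature; selecting the most common or most frequent ones and writing them out as ARFF attributes is what allows WEKA’s classifiers (J48, Random Forest, JRip, SMO, Naive Bayes) to be applied.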
