41

Measuring, fingerprinting and catching click-spam in ad networks

Dave, Vacha Rajendra 11 July 2014 (has links)
Advertising plays a vital role in supporting free websites and smartphone apps. Click-spam, i.e., fraudulent or invalid clicks on online ads where the user has no actual interest in the advertiser's site, results in advertising revenue being misappropriated by click-spammers. This revenue also funds malware authors through adware and malware crafted specifically for click-spammers. While some ad networks take active measures to block click-spam today, the effectiveness of these measures is largely unknown, as the networks practice security-through-obscurity for fear of malicious parties reverse-engineering their systems. Moreover, advertisers and third parties have no way of independently estimating or defending against click-spam. This work addresses the click-spam problem in three ways. First, it proposes the first methodology for advertisers to independently measure click-spam rates on their ads; using real-world data collected from ten ad networks, it validates the methodology and uses it to identify and analyze in depth seven ongoing click-spam attacks not currently caught by major ad networks, highlighting the severity of click-spam. Next, it exposes the state of click-spam defenses by identifying twenty attack signatures that mimic click-spam attacks in the wild (from botnets, PTC sites, and scripts) and can be easily detected by ad networks, implementing these attacks, and showing that none of the ad networks protect against all of them. This also shows that it is possible to reverse-engineer the click-fraud rules employed by ad networks despite the security-through-obscurity practices prominent today. Finally, it shows that it is not just possible but also desirable to create click-spam detection algorithms that rely not on security-through-obscurity but on invariants that are hard for click-spammers to defeat; such algorithms are inherently more robust and can catch a wide variety of click-fraud attacks.
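The idea of independently measuring a click-spam rate can be illustrated with a toy calculation. The sketch below assumes a simplified "control ad" setup: an ad no genuine user would ever click runs alongside the real ad, so clicks on it estimate the per-impression rate of automated or invalid clicking. Both the estimator and the numbers are illustrative assumptions, not the estimator developed in the thesis.

```python
# Toy estimate of the click-spam fraction on a real ad, using a control
# ("bluff") ad that only invalid traffic would click. Illustrative only.

def estimate_click_spam_rate(real_clicks, real_impressions,
                             bluff_clicks, bluff_impressions):
    """Assume invalid traffic clicks bluff and real ads at the same
    per-impression rate, while genuine users never click the bluff ad."""
    spam_click_rate = bluff_clicks / bluff_impressions     # per impression
    expected_spam_clicks = spam_click_rate * real_impressions
    return min(1.0, expected_spam_clicks / real_clicks)

rate = estimate_click_spam_rate(real_clicks=1000, real_impressions=1_000_000,
                                bluff_clicks=30, bluff_impressions=100_000)
print(f"estimated click-spam fraction: {rate:.0%}")  # → 30%
```

In this hypothetical, 30 bluff clicks per 100,000 impressions imply roughly 300 of the 1,000 real clicks are invalid, a 30% click-spam rate.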
42

Personal Email Spam Filtering with Minimal User Interaction

Mojdeh, Mona January 2012 (has links)
This thesis investigates ways to reduce or eliminate the need for user input to learning-based personal email spam filters. Personal spam filters have been shown in previous studies to yield superior effectiveness, at the cost of requiring extensive user training, which may be burdensome or impossible. This work describes new approaches to the problem of building a personal spam filter that requires minimal user feedback. An initial study investigates how well a personal filter can learn from different sources of data, as opposed to the user's own messages. Our initial studies show that inter-user training yields substantially inferior results to intra-user training using the best known methods. Moreover, contrary to previous literature, we find that transfer learning degrades the performance of spam filters when the training and test sets belong to different users or different times. We also adapt and modify a graph-based semi-supervised learning algorithm to build a filter that can classify an entire inbox trained on twenty or fewer user judgments. Our experiments show that this approach compares well with previous techniques when trained on as few as two training examples. We also present the toolkit we developed to perform privacy-preserving user studies on spam filters. This toolkit allows researchers to evaluate any spam filter that conforms to a standard interface defined by TREC on real users' email boxes; researchers have access only to the TREC-style result file, not to any content of a user's email stream. To eliminate the need for user feedback, we build a personal autonomous filter that learns exclusively from the results of a global spam filter. Our laboratory experiments show that learning filters with no user input can substantially improve on the results of open-source and industry-leading commercial filters that employ no user-specific training.
We use our toolkit to validate the performance of the autonomous filter in a user study.
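The graph-based semi-supervised idea (labeling a whole inbox from a handful of user judgments) can be sketched as label propagation over a message-similarity graph. The word-overlap similarity, the propagation rule, and the toy inbox below are simplified assumptions for illustration, not the algorithm developed in the thesis.

```python
# Toy label propagation: spread a few user judgments (1 = spam, 0 = ham,
# None = unlabeled) across an inbox via a word-overlap similarity graph.

def similarity(a, b):
    """Jaccard word overlap between two messages."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def propagate(messages, labels, iterations=10):
    scores = [0.5 if l is None else float(l) for l in labels]
    for _ in range(iterations):
        for i, l in enumerate(labels):
            if l is not None:
                continue                      # labeled messages stay clamped
            num = den = 0.0
            for j in range(len(messages)):
                if i == j:
                    continue
                w = similarity(messages[i], messages[j])
                num += w * scores[j]
                den += w
            if den:
                scores[i] = num / den         # weighted average of neighbors
    return [1 if s >= 0.5 else 0 for s in scores]

inbox = [
    "cheap pills buy now winner",    # judged spam by the user
    "meeting agenda for monday",     # judged ham by the user
    "winner buy cheap pills now",    # unlabeled
    "monday meeting new agenda",     # unlabeled
]
result = propagate(inbox, [1, 0, None, None])
print(result)  # → [1, 0, 1, 0]: the whole inbox classified from 2 judgments
```

With only two judgments, the unlabeled messages inherit the labels of the messages they share vocabulary with, which is the intuition behind classifying an entire inbox from twenty or fewer judgments.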
43

Automatic identification and removal of low quality online information

Webb, Steve 17 November 2008 (has links)
The advent of the Internet has generated a proliferation of online information-rich environments, which provide information consumers with an unprecedented amount of freely available information. However, the openness of these environments has also made them vulnerable to a new class of attacks called Denial of Information (DoI) attacks. Attackers launch these attacks by deliberately inserting low quality information into information-rich environments to promote that information or to deny access to high quality information. These attacks directly threaten the usefulness and dependability of online information-rich environments, and as a result, an important research question is how to automatically identify and remove this low quality information from these environments. The first contribution of this thesis research is a set of techniques for automatically recognizing and countering various forms of DoI attacks in email systems. We develop a new DoI attack based on camouflaged messages, and we show that spam producers and information consumers are entrenched in a spam arms race. To break free of this arms race, we propose two solutions. One solution involves refining the statistical learning process by associating disproportionate weights to spam and legitimate features, and the other solution leverages the existence of non-textual email features (e.g., URLs) to make the classification process more resilient against attacks. The second contribution of this thesis is a framework for collecting, analyzing, and classifying examples of DoI attacks in the World Wide Web. We propose a fully automatic Web spam collection technique and use it to create the Webb Spam Corpus -- a first-of-its-kind, large-scale, and publicly available Web spam data set. Then, we perform the first large-scale characterization of Web spam using content and HTTP session analysis. 
Next, we present a lightweight, predictive approach to Web spam classification that relies exclusively on HTTP session information. The final contribution of this thesis research is a collection of techniques that detect and help prevent DoI attacks within social environments. First, we provide detailed descriptions for each of these attacks. Then, we propose a novel technique for capturing examples of social spam, and we use our collected data to perform the first characterization of social spammers and their behaviors.
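A lightweight, pre-download classifier of the kind described has the shape of a feature-extraction step over the HTTP session followed by any trained model. The specific header features and the toy linear rule below are illustrative guesses standing in for the thesis's actual feature set and classifier.

```python
# Sketch: classify a page as web spam from HTTP session information alone
# (redirect chain and response headers), before fetching any content.
# Feature choices and weights are illustrative assumptions.

def session_features(headers, redirect_chain):
    return {
        "num_redirects": len(redirect_chain),
        "has_server_header": int("Server" in headers),
        "cache_disabled": int(headers.get("Cache-Control", "") == "no-cache"),
        "suspicious_tld": int(bool(redirect_chain) and
                              redirect_chain[-1].rstrip("/").endswith(".biz")),
    }

def looks_like_spam(features):
    # Toy linear rule standing in for a trained classifier.
    score = (2 * features["num_redirects"]
             + 3 * features["suspicious_tld"]
             - features["has_server_header"])
    return score >= 4

headers = {"Cache-Control": "no-cache"}
chain = ["http://a.example/", "http://b.example/", "http://spam.biz"]
feats = session_features(headers, chain)
print(feats, looks_like_spam(feats))  # long redirect chain → flagged
```

The appeal of this design, as the abstract notes, is that the session features are available before any page content is downloaded, making the check cheap and predictive.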
44

Comparing the relative efficacy of phishing emails / Jämförelse av phishing emails relativa effektivitet

Lingaas Türk, Jakob January 2020 (has links)
This study aimed to examine whether there is a difference in how likely a victim is to click on a phishing email's links based on the content of the email, the tone and language used, and the structure of the code. This likelihood also includes the email's ability to bypass spam filters. Method: The method used was a simulated phishing attack. Six phishing templates were created and sent out via the Gophish framework to target groups of students (from Halmstad University), drawn from a randomized pool of 20,000 users. The phishing emails contained a link to a landing page (hosted on a virtual machine) that tracked user status. The templates were: Covid19 Pre-Attempt, Spotify Friendly CSS, Spotify Friendly Button, Spotify Aggressive CSS, Spotify Aggressive Button, Student Union. Results: Covid19 Pre-Attempt: 72.6% initial spam filter evasion, 45.8% spam filter evasion, 4% emails opened and 100% links clicked. Spotify Friendly CSS: 50% initial spam filter evasion, 38% spam filter evasion, 26.3% emails opened and 0% links clicked. Spotify Friendly Button: 59% initial spam filter evasion, 28.8% spam filter evasion, 5.8% emails opened and 0% links clicked. Spotify Aggressive CSS: 50% initial spam filter evasion, 38% spam filter evasion, 10.5% emails opened, and 100% links clicked. Spotify Aggressive Button: 16% initial spam filter evasion, 25% spam filter evasion, 0% emails opened and 0% links clicked. Student Union: 40% initial spam filter evasion, 75% spam filter evasion, 33.3% emails opened and 100% links clicked. Conclusion: Differently structured emails differ in their ability to bypass spam filters and to deceive users. Language and tone appear to affect phishing email efficacy; the results suggest that an aggressive and authoritative tone heightens a phishing email's ability to deceive users but does not affect its ability to bypass spam filters to a similar degree.
Authenticity appears to affect email efficacy; the results showed a difference in deception efficacy when an email was structured like that of a genuine sender. Appealing to emotions such as stress and fear appears to increase a phishing email's efficacy in deceiving a user.
45

Detecting spam relays by SMTP traffic characteristics using an autonomous detection system

Wu, Hao January 2011 (has links)
Spam emails are flooding the Internet, and research into preventing spam is an ongoing concern. SMTP traffic was collected from different sources in real networks and analyzed to determine how the SMTP traffic characteristics of legitimate email clients, legitimate email servers, and spam relays differ. It was found that SMTP traffic from legitimate and non-legitimate sites differs and that the two can be distinguished from each other. This thesis proposes methods, based on analyzing SMTP traffic characteristics, for identifying spam relays in the network. An autonomous combination system employing machine learning technologies was developed to identify spam relays in real time, before spam emails reach an end user, using SMTP traffic characteristics alone and never the actual content of the email. A series of tests was conducted to evaluate the performance of this system; the results show that it can identify spam relays with a high detection rate and an acceptable ratio of false positive errors.
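The content-free approach described above can be sketched as computing aggregate per-source traffic statistics and feeding them to a classifier. The feature names (recipient-failure ratio, session duration, session count) and the threshold rule below are illustrative assumptions, not the trained models evaluated in the thesis.

```python
# Sketch: flag a spam relay from SMTP *traffic* statistics alone, never
# inspecting message content. Features and thresholds are illustrative.

def traffic_features(sessions):
    """sessions: list of (duration_s, rcpt_count, rcpt_failures) per source."""
    total_rcpt = sum(s[1] for s in sessions)
    failed = sum(s[2] for s in sessions)
    avg_duration = sum(s[0] for s in sessions) / len(sessions)
    return {
        "rcpt_failure_ratio": failed / total_rcpt if total_rcpt else 0.0,
        "avg_session_duration": avg_duration,
        "sessions_per_source": len(sessions),
    }

def is_spam_relay(f):
    # Relays tend to fire many short sessions probing many bad recipients;
    # a trained classifier would replace this hand-set rule.
    return (f["rcpt_failure_ratio"] > 0.3
            and f["avg_session_duration"] < 5.0
            and f["sessions_per_source"] > 50)

# 60 short sessions, each trying 10 recipients with 6 failures.
relay = traffic_features([(1.2, 10, 6)] * 60)
# 3 long sessions from a legitimate server, few recipients, no failures.
server = traffic_features([(42.0, 3, 0)] * 3)
print(is_spam_relay(relay), is_spam_relay(server))  # → True False
```

Because the features are observable at the network level in real time, a relay can be flagged before its messages reach any recipient's inbox, which is the property the thesis emphasizes.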
46

A study on combating the problem of unsolicited electronic messages in Hong Kong

Cheung, Pak-to, Patrick., 張伯陶. January 2007 (has links)
Published or final version / Abstract / Public Administration / Master of Public Administration
47

A NOVEL SPAM CAMPAIGN IN ONLINE SOCIAL NETWORKS

Zhen, Yufeng 26 November 2013 (has links)
The increasing popularity of Online Social Networks (OSNs) has made them major targets of spammers, who aim to illegally gather private information from users and spread spam to them. In this work, we propose a new spam campaign comprising the following key steps: creating fake accounts, picking legitimate accounts, forming friendships, earning trust, and spreading spam. The unique part of our spam campaign is the process of earning trust. By using social bots, we significantly lower the cost of earning trust and make it feasible in the real world. By spreading spam at a relatively low speed, we make the life span of our fake accounts much longer than in traditional spam campaigns; this means the trust we have earned can be used multiple times rather than only once, as in traditional campaigns. We evaluate our spam campaign by comparing it with a traditional campaign, and the comparison shows that ours is less detectable and more efficient.
48

Skräppost eller skinka? : En jämförande studie av övervakade maskininlärningsalgoritmer för spam och ham e-mailklassifikation / Spam or ham? : A comparative study of supervised machine learning algorithms for spam and ham e-mail classification.

Bergens, Simon, Frykengård, Pontus January 2019 (has links)
Spam messages in the form of e-mail are a growing problem for today's businesses, one that costs time and resources to counteract. Research has produced techniques and tools aimed at addressing the growing number of incoming spam e-mails, but the research on different algorithms and their ability to classify e-mail messages needs an update, since both tools and spam e-mails have become more advanced. In this study, three machine learning algorithms were evaluated on their ability to correctly classify e-mails as legitimate or spam: naive Bayes, support vector machine, and decision tree. The algorithms were tested in an experiment with the Enron spam dataset and then compared against each other on their performance. The result of the experiment was that the support vector machine correctly classified the most data points. Even though the support vector machine had the largest percentage of correctly classified data points, other algorithms can be useful from a business perspective depending on the task and context.
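The shape of such an experiment can be sketched with scikit-learn: vectorize labeled messages, train the three algorithms, and compare accuracy on held-out mail. The tiny hand-made corpus below is a stand-in for the Enron spam dataset used in the study, so the accuracies it prints are not the study's results.

```python
# Sketch of the comparison: naive Bayes vs. SVM vs. decision tree on a toy
# spam/ham corpus standing in for the Enron dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

train = ["win money now", "cheap pills offer", "free prize claim now",
         "lunch meeting tomorrow", "project status report",
         "family dinner sunday"]
y_train = [1, 1, 1, 0, 0, 0]                     # 1 = spam, 0 = ham
test = ["claim your free money", "status meeting tomorrow"]
y_test = [1, 0]

vec = CountVectorizer().fit(train)
X_train, X_test = vec.transform(train), vec.transform(test)

for model in (MultinomialNB(), LinearSVC(), DecisionTreeClassifier()):
    model.fit(X_train, y_train)
    acc = (model.predict(X_test) == y_test).mean()
    print(type(model).__name__, acc)
```

On a real corpus the same loop would report the per-algorithm accuracies being compared; with data this small the numbers are meaningless beyond showing the experimental shape.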
49

Contribution au classement statistique mutualisé de messages électroniques (spam) / Contribution to shared statistical classification of email messages (spam)

Martins Da Cruz, José Márcio 13 October 2011 (has links) (PDF)
Since the late 1990s, various machine-learning methods have been studied and applied to the problem of email message classification (spam filtering), with very good, but not perfect, results. It has always been assumed that these methods are suited to filtering solutions oriented toward a single recipient rather than to classifying the messages of an entire community. In this thesis, our approach was first to seek a better understanding of the characteristics of the data being handled, using real corpora of messages, before proposing new algorithms. We then used a logistic-regression classifier with online active learning to demonstrate empirically that, with a simple algorithm and a learning configuration better suited to the real classification context, one can obtain results as good as those obtained with more complex algorithms. We also showed, with sets of messages from a small group of users, that the loss of effectiveness may not be significant in a shared classification setting.
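The combination of a simple logistic-regression classifier with online active learning can be sketched in a few lines: score each message as it arrives, and ask for a user label only when the filter is uncertain. The hashed bag-of-words features, learning rate, and uncertainty band below are illustrative choices, not the configuration evaluated in the thesis.

```python
# Sketch: online logistic regression with uncertainty-based active learning.
# The filter requests a label only when its spam probability is ambiguous.
import math
import zlib

DIM, LR = 4096, 0.5
weights = [0.0] * DIM

def features(text):
    # Hash each word into a fixed-size weight vector (hashing trick).
    return [zlib.crc32(w.encode()) % DIM for w in text.lower().split()]

def predict(text):
    z = sum(weights[i] for i in features(text))
    return 1.0 / (1.0 + math.exp(-z))          # sigmoid → P(spam)

def learn(text, label):                         # label: 1 = spam, 0 = ham
    err = label - predict(text)                 # logistic-loss gradient step
    for i in features(text):
        weights[i] += LR * err

stream = [("win free money now", 1), ("meeting notes attached", 0),
          ("free money prize", 1), ("notes from the meeting", 0)]
queries = 0
for text, true_label in stream:
    if 0.2 < predict(text) < 0.8:               # uncertain: ask the user
        queries += 1
        learn(text, true_label)
print("user labels requested:", queries)
```

The active-learning band means confidently classified messages never cost the user a judgment, which is the economy of labels the thesis's configuration exploits at scale.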
50

Policy-controlled email services

Kaushik, Saket. January 2007 (has links)
Thesis (Ph. D.)--George Mason University, 2007. / Title from PDF t.p. (viewed Jan. 18, 2008). Thesis directors: Paul Amman, Duminda Wijesekera. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Vita: p. 198. Includes bibliographical references (p. 189-197). Also available in print.
