About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Using Artificial Neural Networks to Identify Image Spam

Hope, Priscilla 02 September 2008 (has links)
No description available.
62

A New Approach to Spam Detection Based on Categorical Data Processing

Parakh Ousman, Yassine Zaralahy January 2012 (has links)
The spam problem has grown enormously over the past 20 years; unsolicited bulk mail may account for more than 72% of all email traffic. Beyond their intrusiveness, spam messages can carry viruses or malicious scripts, hence the interest in detecting them so they can be removed. Because the cost of sending email is negligible for a spammer, he can afford to send spam to as many addresses as possible; if even a small fraction of recipients take the bait, the operation becomes commercially viable. Out of one million messages sent, a response rate of only 0.1% still represents a thousand people, and that figure is realistic. Behind the protection of privacy and the maintenance of a healthy working environment, then, there are also economic stakes. Spam detection is a constant race between the introduction of new email classification techniques and their circumvention by spammers. For a long time the spammers held the advantage in this fight; the trend reversed with the arrival of content-based filtering techniques, most of which rely on a naïve Bayes classifier. In this thesis we present a new approach to this classification, using a method based on the processing of categorical data. The method uses N-grams to identify significant patterns, so as to limit the impact of the morphing of unwanted messages.
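A minimal sketch of the N-gram idea this abstract describes: character N-grams make features robust to the small spelling mutations ("morphing") spammers use to evade word-level filters. The function name, the choice of N=3, and the sample strings are illustrative, not taken from the thesis.

```python
from collections import Counter

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams in a normalized string."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# A message and its obfuscated variant still share most trigrams,
# so an n-gram model degrades gracefully under morphing, whereas a
# word-level feature ("viagra") would miss the variant entirely.
clean = char_ngrams("cheap viagra now")
morphed = char_ngrams("cheap v1agra now")
shared = sum((clean & morphed).values())
```

A classifier built on such counts sees the obfuscated variant as nearly identical to the original, which is exactly the robustness the abstract attributes to the N-gram representation.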
63

Design and Evaluation of a New Authentication Mechanism for Validating the Sender of an Email

Sakamuri, Sai 01 March 2005 (has links)
A new authentication mechanism for validating the source of messages over the Internet is designed and evaluated. This mechanism is applied to email and is called Email++. Email++ prevents identity forging (spoofing) and tampering with email contents. By preventing identity forging, Email++ can reduce the amount of spam received and limit the spread of viruses like Melissa, Love Bug, Bagle Worm, and Killer Resume. Email++ validates both the sender and the receiver of an email by confirming the sender's identity with the domain mail server that delivered the email for the sender, and authenticates the receiver with hash-value comparisons. Email++ enables payment mechanisms, including micro-cash, and challenge-response schemes that use puzzle solving. MD5 hash signatures generated at both the sender and the receiver are used to validate the sender's identity and to make email tamper-resistant in the network. An out-of-band TCP connection established between the sender and the receiver serves as a communication channel for validating the sender as well as the sender's email server. The information needed to establish this out-of-band TCP connection is obtained by querying the DNS (Domain Name System) instead of using email headers from the received mail, which are susceptible to spoofing. The Email++ technique is compared with existing anti-spam and anti-spoofing techniques such as SPF, Yahoo Domain Keys, Microsoft Sender ID, TEOS, and PGP. The Email++ specification is evaluated by developing both Email++ client and server programs in C and using Sendmail 8.12 as the mail server. The performance of Email++ is compared with the standard SMTP implementation of Sendmail 8.12. Several factors are considered in the evaluation: CPU demand, memory demand, bandwidth demand, email latency, and extra DNS load are measured for both the sender and the receiver.
The performance evaluation results show that Email++ adds an extra CPU demand of about 11%. The extra memory required by Email++ is nearly 3%. The bandwidth demand of Email++ is around 15% greater than standard SMTP for sending 500 emails of 3.5 KB each. The extra load on the DNS increases by one connection for every incoming mail at the receiver.
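The hash-comparison step of the scheme above can be sketched as follows: the sender publishes an MD5 digest of the message, and the receiver recomputes it over the bytes actually delivered; a mismatch signals tampering in transit. The out-of-band TCP and DNS machinery of Email++ is omitted, and the function names here are our own, not the thesis's.

```python
import hashlib

def message_digest(body: bytes) -> str:
    """MD5 digest of the raw message body, as a hex string."""
    return hashlib.md5(body).hexdigest()

def is_untampered(body: bytes, advertised_digest: str) -> bool:
    """Receiver-side check: recompute and compare digests."""
    return message_digest(body) == advertised_digest

original = b"Meeting moved to 3pm"
digest = message_digest(original)          # computed at the sender
# Any in-transit modification changes the digest the receiver computes.
```

Note that MD5 was a conventional choice in 2005 but is no longer collision-resistant; a modern variant of the same design would use SHA-256.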
64

Scavenger: A Junk Mail Classification Program

Malkhare, Rohan V 20 January 2003 (has links)
The problem of junk mail, also called spam, has reached epic proportions, and various efforts are underway to fight it. Junk mail classification using machine learning techniques is a key method in this fight. We have devised a machine learning algorithm in which features are created from individual sentences in the subject and body of a message by forming all possible word pairings from a sentence. Weights are assigned to the features based on the strength of their predictive capability for the spam/legitimate determination. The predictive capabilities are estimated from the frequency of occurrence of the feature in spam/legitimate collections as well as by applying heuristic rules. During classification, the total spam and legitimate evidence in the message is obtained by summing the weights of the extracted features of each class, and the message is classified into whichever class accumulates the greater sum. We compared the algorithm against the popular naïve Bayes algorithm (in [8]) and found that its performance exceeded that of naïve Bayes both in catching spam and in reducing false positives.
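The feature scheme described above can be sketched directly: every unordered word pair in a sentence becomes a feature, each feature carries a per-class weight, and the message goes to whichever class accumulates the larger evidence sum. The toy weights in the usage below are invented for illustration; the thesis estimates them from corpus frequencies and heuristics.

```python
from itertools import combinations

def word_pairs(sentence: str) -> set:
    """All unordered word pairings from one sentence."""
    words = sentence.lower().split()
    return {frozenset(p) for p in combinations(words, 2)}

def classify(sentence: str, spam_w: dict, ham_w: dict) -> str:
    """Sum per-class feature weights; higher sum wins."""
    feats = word_pairs(sentence)
    spam_score = sum(spam_w.get(f, 0.0) for f in feats)
    ham_score = sum(ham_w.get(f, 0.0) for f in feats)
    return "spam" if spam_score > ham_score else "legitimate"
```

For example, with a single learned weight `{frozenset({"free", "money"}): 2.0}` on the spam side, the sentence "free money now" classifies as spam while an unrelated sentence does not.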
65

Cache-based vulnerabilities and spam analysis

Neve de Mevergnies, Michael 14 July 2006 (has links)
Two problems of computer security are investigated. On the one hand, we face a practical problem in modern processors: the cache, an architectural element that brings flexibility and allows efficient use of resources, is shown to open security breaches through which secret information can be extracted. This issue required a careful study to understand the problem and the role of the incriminated elements, to explore the potential of the attacks, and to find effective countermeasures. Because of the intricate behavior of a processor and the limited resources of the cache, it is extremely hard to write constant-time software. This is particularly true of cryptographic applications, which often rely on large precomputed tables and pseudo-random accesses. The principle of time-driven attacks is to analyze the overall execution time of a cryptographic process and extract timing profiles. We show that in the case of AES these profiles depend on the memory lookups, i.e. the addition of the plaintext and the secret key. Correlating profiles with known inputs against profiles with partially unknown ones (known plaintext but unknown secret key) leads to recovery of the secret key. We then detail access-driven attacks, another kind of cache-based side channel. This case relies on stronger assumptions about the attacker's capabilities: he must be able to run another process concurrently with the security process. Even if the security policies prevent the so-called "spy" process from directly accessing the data of the "crypto" process, the cache is shared between them, and its behavior can let the spy process deduce the secrets of the crypto process. Several mitigations are explored, depending on the security level to be reached and on the attacker's capabilities, and their respective performances are given.
The scope is, however, oriented toward software mitigations, as they can be applied directly to patch programs and reduce cache leakage. On the other hand, we tackle a problem of computing that concerns many people and in which important economic interests are at stake: although spam is often considered the other side of the Internet coin, we believe it can be defeated and avoided. An increasing number of studies, for example, explores how cryptographic techniques can prevent spam from spreading. We concentrated on studying the behavior of spammers to understand how e-mail addresses can be kept from being harvested. The motivation for this work was to produce and make available quantitative results for preventing spam efficiently, as well as to provide a better understanding of spammer behavior. Though orthogonal, both parts tackle practical problems, and their results can be directly applied.
66

Tamper-Resilient Methods for Web-Based Open Systems

Caverlee, James 05 July 2007 (has links)
The Web and Web-based open systems are characterized by their massive amount of data and services for leveraging this data. These systems are noted for their open and unregulated nature, self-supervision, and high degree of dynamism, which are key features in supporting a rich set of opportunities for information sharing, discovery, and commerce. But these open and self-managing features also carry risks and raise growing concerns over the security and privacy of these systems, including issues like spam, denial-of-service, and impersonated digital identities. Our focus in this thesis is on the design, implementation, and analysis of large-scale Web-based open systems, with an eye toward enabling new avenues of information discovery and ensuring robustness in the presence of malicious participants. We identify three classes of vulnerabilities that threaten these systems: vulnerabilities in link-based search services, vulnerabilities in reputation-based trust services over online communities, and vulnerabilities in Web categorization and integration services. This thesis introduces a suite of methods for increasing the tamper-resilience of Web-based open systems in the face of a large and growing number of threats. We make three unique contributions: First, we present a source-centric architecture and a set of techniques for providing tamper-resilient link analysis of the World Wide Web. We propose the concept of link credibility and present a credibility-based link analysis model. We show that these approaches significantly reduce the impact of malicious spammers on Web rankings. Second, we develop a social network trust aggregation framework for supporting tamper-resilient trust establishment in online social networks. These community-based social networking systems are already extremely important and growing rapidly. 
We show that our trust framework supports high-quality information discovery and is robust to the presence of malicious participants in the social network. Finally, we introduce a set of techniques for reducing the opportunities of attackers to corrupt Web-based categorization and integration services, which are especially important for organizing and making accessible the large body of Web-enabled databases on the Deep Web that are beyond the reach of traditional Web search engines. We show that these techniques reduce the impact of poor-quality or intentionally misleading resources and support personalized Web resource discovery.
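The credibility-based link analysis mentioned above can be illustrated with a toy PageRank-style iteration in which each page's vote is scaled by a credibility score, so a known spammer's links carry less weight. The graph, scores, and damping value are all illustrative; the thesis's actual source-centric model is more elaborate.

```python
def credibility_rank(links, credibility, iters=50, d=0.85):
    """PageRank-like scores where each out-link is scaled by the
    linking node's credibility. `links` maps node -> list of targets;
    `credibility` maps node -> weight in [0, 1]."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            inflow = sum(
                rank[m] * credibility[m] / len(links[m])
                for m in nodes if n in links[m]
            )
            new[n] = (1 - d) / len(nodes) + d * inflow
        rank = new
    return rank
```

With a low-credibility spam node pointing into an otherwise mutually linked pair, the spam node's endorsement contributes little, which captures the abstract's claim that credibility weighting blunts the impact of malicious spammers on rankings.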
67

A Spam Filter Based on Rough Sets Theory

Tzeng, Mo-yi 26 July 2005 (has links)
With the popularization of the Internet and the wide use of electronic mail, the number of spam messages grows continuously, making e-mail increasingly inconvenient for users. If e-mail servers were integrated with data mining and artificial intelligence techniques so that they could learn spam rules and filter out spam automatically, they would help everyone bothered by spam to enjoy a clean e-mail environment. In this research, we propose an architecture called union defense to counter the spread of spam. Under this architecture, we need a rule-based data mining and artificial intelligence algorithm, and rough sets theory is a good choice. Rough sets theory was proposed by Pawlak, a Polish logician. It is a rule-based data mining and artificial intelligence approach, well suited to uncovering the latent knowledge in inexact and incomplete data. This research developed a spam filter based on rough sets theory; it can search for the characteristic rules of spam and use these rules to filter spam out. The system built in this research can be appended to most existing e-mail servers. In addition, the system supports Chinese, Japanese, and Korean character sets, overcoming the limitation that most spam filters can only handle English mail. In future work, a rule-exchange approach between e-mail servers can be developed to realize union defense.
68

A Spam Filter Based on Reinforcement and Collaboration

Yang, Chih-Chin 07 August 2008 (has links)
The growing volume of spam has not only decreased people's productivity but also become a security threat on the Internet. Mail servers should be able to precisely filter out spam, which changes over time, and to manage effectively the increasing number of spam rules that mail servers generate automatically. Most papers focus on a single aspect of spam prevention, especially spam rule generation. In the real world, however, spam prevention is not just a matter of applying a data mining algorithm for rule generation; to filter spam correctly, many other issues must be considered as well. In this paper, we integrate three modules to form a complete anti-spam system: a spam rule generation module, a spam rule reinforcement module, and a spam rule exchange module. A rule-based data mining approach is used to generate exchangeable spam rules; the rules are reinforced by user feedback; and the rules are distributed and exchanged among servers in a machine-readable XML format. The experimental results support the following conclusions: (1) the spam filter can filter out Chinese mail by analyzing header characteristics; (2) rules exchanged among mail servers improve spam recall and accuracy; and (3) rule reinforcement improves the effectiveness of the spam rules.
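The abstract above says only that rules are exchanged between servers as machine-readable XML, without giving a schema. The sketch below invents a minimal format to show what such an exchange could look like; the element and attribute names (`spam-rules`, `rule`, `field`, `score`) are our own, not the thesis's.

```python
import xml.etree.ElementTree as ET

def rules_to_xml(rules):
    """Serialize (field, pattern, score) triples to an XML string."""
    root = ET.Element("spam-rules")
    for field, pattern, score in rules:
        r = ET.SubElement(root, "rule", field=field, score=str(score))
        r.text = pattern
    return ET.tostring(root, encoding="unicode")

def xml_to_rules(xml_text):
    """Parse the XML back into (field, pattern, score) triples."""
    root = ET.fromstring(xml_text)
    return [(r.get("field"), r.text, float(r.get("score")))
            for r in root.findall("rule")]
```

A receiving server can round-trip the document back into its own rule table, which is the interoperability the paper's exchange module aims at.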
69

To deceive the receiver: A genre analysis of the electronic variety of Nigerian scam letters

Bredesjö Budge, Susanne January 2006 (has links)
This essay analyses fifty electronic Nigerian scam letters, or spam, in order to find out whether they can be considered a genre of their own according to Swales' (1990) definition. It compares the Nigerian scam letters to sales promotion letters as presented by Bhatia (1993). The functional moves Bhatia (1993) found in the sales promotion letters were applied to the Nigerian scam letters, and three functional moves unique to the scam letters were established. These functional moves, together with the scam letters' compatibility with Swales' (1990) definition of genre, support this essay's argument that the Nigerian scam letters constitute a genre of their own.
70

Mail for You - Pleasure or Pain?

Richter, Frank, Sontag, Ralph 20 June 2003 (has links) (PDF)
Annoyed users, unsettled administrators, panicking mail servers: the thousands of unwanted messages (spam) arriving every day endanger the mail infrastructure. The talk explains preventive and remedial measures for afflicted users and administrators, and presents techniques for spam detection.
