161

Umělé imunitní systémy pro detekci spamů / Artificial Immune Systems for Spam Detection

Hohn, Michal January 2011 (has links)
This work deals with creating a hybrid system that combines an artificial immune system with appropriate heuristics to achieve the most effective spam detection possible. It describes the main principles of biological and artificial immune systems, as well as conventional spam-detection techniques, including several classifiers. The developed system is tested on well-known corpora, and the results of the final experiments are compared.
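For illustration only, the Python sketch below shows the flavor of an artificial-immune-system spam detector using negative selection, a standard AIS technique (not necessarily the one aggregated in this thesis); the ham corpus, vocabulary, and threshold are invented.

```python
import random

# Toy "self" set: tokens that appear in legitimate (ham) mail.
ham_messages = [
    "meeting rescheduled to friday afternoon",
    "please review the attached project report",
]
self_tokens = {tok for msg in ham_messages for tok in msg.split()}

# Candidate detectors are random token pairs drawn from a vocabulary;
# negative selection discards any detector that matches "self".
vocabulary = list(self_tokens | {"viagra", "winner", "free", "credit", "prize", "cheap"})
random.seed(0)
candidates = [frozenset(random.sample(vocabulary, 2)) for _ in range(200)]
detectors = [d for d in candidates if not d <= self_tokens]

def is_spam(message: str, min_hits: int = 1) -> bool:
    """Flag a message if enough surviving detectors fully match its tokens."""
    tokens = set(message.split())
    hits = sum(1 for d in detectors if d <= tokens)
    return hits >= min_hits

print(is_spam("you are a winner claim your free prize"))   # likely True
print(is_spam("please review the meeting report"))         # likely False
```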
162

Workshop Mensch-Computer-Vernetzung

Hübner, Uwe 15 October 2003 (has links)
Workshop "Mensch-Computer-Vernetzung" (Human-Computer Networking), held 14-17 April 2003 in Löbsal (near Meißen)
163

Netz- und Service-Infrastrukturen

Hübner, Uwe 21 May 2004 (has links)
Workshop "Netz- und Service-Infrastrukturen" vom 19.-22. April 2004 in Löbsal (bei Meißen)
164

An Empirical Assessment of the CAN SPAM Act

Kigerl, Alex Conrad 01 January 2010 (has links)
In January 2004, the Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN SPAM), passed by the United States Congress, took effect. The Act was intended to regulate bulk commercial email (spam) and to set the limits of acceptable practice. Various sources have since investigated and speculated on the efficacy of the CAN SPAM Act, few of which report a desirable outcome for users of electronic mail. Despite the apparent consensus of anti-spam firms and the community of email users that the Act was less than effective, there is little to no research on its efficacy that employs significant statistical rigor or accepted scientific practices. The present study seeks to determine what impact, if any, the CAN SPAM Act had on spam messages, in order to identify areas of improvement in the fight against spam that is both fraudulent and dangerous. The data consisted of 2,071,965 spam emails sent between February 1, 1998 and December 31, 2008. The data were aggregated by month, and an interrupted time series design was chosen to assess the impact of the CAN SPAM Act on spam. Analyses revealed that the Act had no observable impact on the amount of spam sent and received; no impact on compliance with two of the three CAN SPAM requirements examined, while compliance with the third decreased significantly after the Act took effect; and no impact on the number of spam emails sent from within the United States. Implications of these findings and suggestions for policy are discussed.
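As an illustration of the interrupted time series design described above, the following Python sketch fits a segmented regression with a level-change and a trend-change term at the intervention month; the monthly counts are simulated here, not the study's data.

```python
import numpy as np

# Hypothetical monthly spam counts (the real study used ~2 million spam
# emails from Feb 1998 to Dec 2008); here we simulate a series with no
# level or slope change at the intervention, mirroring the null finding.
rng = np.random.default_rng(42)
n_months = 131                       # Feb 1998 .. Dec 2008
intervention = 71                    # index of Jan 2004, when CAN SPAM took effect
t = np.arange(n_months)
counts = 500 + 12.0 * t + rng.normal(0, 80, n_months)

# Segmented (interrupted time series) regression:
#   y = b0 + b1*t + b2*post + b3*(t - intervention)*post + e
post = (t >= intervention).astype(float)
X = np.column_stack([np.ones(n_months), t, post, (t - intervention) * post])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
b0, b1, b2, b3 = coef
print(f"pre-intervention trend : {b1:+.2f} msgs/month")
print(f"level change at Act    : {b2:+.2f} msgs")
print(f"trend change after Act : {b3:+.2f} msgs/month")
```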
165

Using Spammers' Computing Resources for Volunteer Computing

Bui, Thai Le Quy 13 March 2014 (has links)
Spammers are continually looking to circumvent counter-measures seeking to slow them down. An immense amount of time and money is currently devoted to hiding spam, but not enough is devoted to effectively preventing it. One approach for preventing spam is to force the spammer's machine to solve a computational problem of varying difficulty before granting access. The idea is that suspicious or problematic requests are given difficult problems to solve while legitimate requests are allowed through with minimal computation. Unfortunately, most systems that employ this model waste the computing resources being used, as they are directed towards solving cryptographic problems that provide no societal benefit. While systems such as reCAPTCHA and FoldIt have allowed users to contribute solutions to useful problems interactively, an analogous solution for non-interactive proof-of-work does not exist. Towards this end, this paper describes MetaCAPTCHA and reBOINC, an infrastructure for supporting useful proof-of-work that is integrated into a web spam throttling service. The infrastructure dynamically issues CAPTCHAs and proof-of-work puzzles while ensuring that malicious users solve challenging puzzles. Additionally, it provides a framework that enables the computational resources of spammers to be redirected towards meaningful research. To validate the efficacy of our approach, prototype implementations based on OpenCV and BOINC are described that demonstrate the ability to harvest spammers' resources for beneficial purposes.
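The proof-of-work model mentioned above can be illustrated with the conventional hash-based puzzle that systems like MetaCAPTCHA generalize into useful work; the sketch below is a generic hashcash-style example in Python with tunable difficulty, not the paper's implementation.

```python
import hashlib
import secrets

def issue_challenge(difficulty_bits: int) -> tuple[str, int]:
    """Server side: hand out a random challenge and a difficulty level.

    Suspicious senders get a higher difficulty (more leading zero bits),
    i.e. exponentially more hashing work before their request is accepted.
    """
    return secrets.token_hex(16), difficulty_bits

def solve(challenge: str, difficulty_bits: int) -> int:
    """Client side: search for a nonce whose hash meets the difficulty."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, difficulty_bits: int, nonce: int) -> bool:
    """Server side: a single hash suffices to check the claimed solution."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge, bits = issue_challenge(difficulty_bits=18)   # harder for suspected spammers
nonce = solve(challenge, bits)
print(verify(challenge, bits, nonce))                   # True
```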
166

Topic Modeling and Spam Detection for Short Text Segments in Web Forums

Sun, Yingcheng 28 January 2020 (has links)
No description available.
167

Naive Bayesian Spam Filters for Log File Analysis

Havens, Russel William 13 July 2011 (has links) (PDF)
As computer system usage grows, system administrators need better visibility into the workings of computer systems, especially when those systems have problems or go down. Most system components, from hardware through the OS to application servers and applications, write log files of some sort, be they system-standardized logs such as syslog or application-specific logs. These logs very often contain valuable clues to the nature of system problems and outages, but their verbosity can make them difficult to utilize. Statistical data mining methods could help in filtering and classifying log entries, but these tools are often out of the reach of administrators. This research tests three off-the-shelf Bayesian spam email filters (SpamAssassin, SpamBayes, and Bogofilter) for effectiveness as log entry classifiers. A simple scoring system, the Filter Effectiveness Scale (FES), is proposed and used to compare these filters. The filters are tested in three stages: 1) with the SpamAssassin corpus, with various manipulations made to the messages; 2) for their ability to differentiate two types of log entries taken from actual production systems; and 3) after training on log entries from actual system outages, for their effectiveness at finding similar outages via the log files. For stage 1, messages were tested with normalized bodies, with normalized headers, and with each sentence from each message body treated as a separate, standardized message; the impact of each manipulation is presented. For stages 2 and 3, log entries were tested with digits normalized to zeros and with words chained together to various lengths, using either one or all levels of word chains together; the impacts of these manipulations are presented. In each stage, the widely available Bayesian content filters were found to be effective in differentiating log entries. Tables of correct match percentages or score graphs are presented according to the nature of the tests and the number of entries, and FES scores are assigned to the filters according to the attributes affecting their effectiveness. This research suggests that simple, off-the-shelf Bayesian content filters can assist system administrators and log mining systems in sifting log entries to find entries related to known conditions (for which there are example log entries) and to exclude outages which are not related to specific known entry sets.
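Two of the log-entry manipulations described above (digit normalization and word chaining) can be sketched as follows; the example log line and chain lengths are illustrative, and in the study the resulting tokens were fed to the off-the-shelf filters rather than to this toy code.

```python
import re

def normalize_digits(entry: str) -> str:
    """Collapse numeric detail (PIDs, timestamps, addresses) so that
    entries describing the same event produce the same tokens."""
    return re.sub(r"\d", "0", entry)

def word_chains(entry: str, max_len: int = 3) -> list[str]:
    """Chain adjacent words together (1- to max_len-grams) to give a
    Bayesian content filter more context per token."""
    words = normalize_digits(entry).split()
    chains = []
    for n in range(1, max_len + 1):
        chains += ["_".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return chains

log_entry = "Oct  4 03:12:57 web01 kernel: Out of memory: Kill process 8841 (java)"
print(word_chains(log_entry, max_len=2))
# ['Oct', '0', '00:00:00', 'web00', ..., 'Out_of', 'of_memory:', ...]
```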
168

Improving Filtering of Email Phishing Attacks by Using Three-Way Text Classifiers

Trevino, Alberto 13 March 2012 (has links) (PDF)
The Internet has been plagued with endless spam for over 15 years. However, in the last five years spam has morphed from an annoying advertising tool into a social engineering attack vector. Much of today's unwanted email tries to deceive users into replying with passwords or bank account information, or into visiting malicious sites which steal login credentials and spread malware. These email-based attacks are known as phishing attacks. Much has been published about these attacks, which try to appear legitimate not only to users but also to spam filters. Several sources indicate traditional content filters have a hard time detecting phishing attacks because the emails lack the traditional features and characteristics of spam messages. This thesis tests the hypothesis that by separating messages into three categories (ham, spam, and phish), content filters will yield better filtering performance. Even though experimentation showed that three-way classification did not improve performance, several additional premises were tested, including the validity of the claim that phishing emails are too much like legitimate emails, and the ability of Naive Bayes classifiers to properly classify emails.
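A minimal sketch of the three-way (ham/spam/phish) classification idea, using a multinomial Naive Bayes classifier from scikit-learn; the miniature training corpus is invented, and, as the abstract notes, the thesis found that this split did not improve filtering performance.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented miniature corpus; the thesis used real ham/spam/phish mail.
train_texts = [
    "lunch meeting moved to noon, see agenda attached",          # ham
    "quarterly report draft attached, comments welcome",         # ham
    "cheap meds best prices no prescription needed",             # spam
    "you won a free cruise claim your prize now",                # spam
    "your account is locked verify your password at this link",  # phish
    "unusual sign-in detected confirm your bank credentials",    # phish
]
train_labels = ["ham", "ham", "spam", "spam", "phish", "phish"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)
classifier = MultinomialNB().fit(X_train, train_labels)

test = ["please verify your password to unlock your account"]
print(classifier.predict(vectorizer.transform(test)))   # likely ['phish']
```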
169

Splined Speed Control using SpAM (Speed-based Acceleration Maps) for an Autonomous Ground Vehicle

Anderson, David 15 April 2008 (has links)
There are many forms of speed control for autonomous ground vehicles currently in development. Most use a simple PID controller to achieve a speed specified by a higher-level motion planning algorithm. Simple controllers may not provide the desired acceleration profile for a ground vehicle, and without extensive tuning a PID controller may cause excessive speed overshoot and oscillation. This paper examines an approach designed to allow a greater degree of control while reducing the computing load on the motion planning software. The SpAM+PI (Speed-based Acceleration Map + Proportional Integral controller) algorithm outlined in this paper uses three inputs, current velocity, desired velocity, and desired maximum acceleration, to determine throttle and brake commands that will bring the vehicle to its correct speed. Because the algorithm resides on an external controller, it does not add to the computational load of the motion planning computer. Also, because only two of the inputs must be sent over the network, and only when the desired speed or maximum desired acceleration changes, network traffic between the computers can be greatly reduced. The algorithm uses splines to smoothly plan a speed profile from the vehicle's current speed to its desired speed. It then uses a lookup table to determine the correct pedal position (throttle or brake) from the current vehicle speed and the desired instantaneous acceleration computed in the splining step. Once the pedal position is determined, a PI controller is used to minimize error in the system. The SpAM+PI approach is a novel approach to the speed control of an autonomous vehicle. It is tested using Odin, Team Victor Tango's entry into the 2007 DARPA Urban Challenge, which won third place and a $500,000 prize. The evaluation of the algorithm exposed both strengths and weaknesses that guide the next step in the development of a speed control algorithm. / Master of Science
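A heavily simplified sketch of the control structure described above: spline a speed profile from the current to the desired speed within an acceleration limit, look up a feedforward pedal command from a speed/acceleration map, and correct residual error with a PI term. The map values, gains, and timing below are invented, not Odin's calibration.

```python
import numpy as np

def speed_profile(v0, vf, a_max, dt=0.05):
    """Cubic speed profile from v0 to vf with zero end accelerations,
    stretched in time so the peak acceleration stays within a_max."""
    dv = vf - v0
    T = max(1.5 * abs(dv) / a_max, dt)        # peak accel of the cubic is 1.5*dv/T
    t = np.arange(0.0, T + dt, dt)
    s = t / T
    v = v0 + dv * (3 * s**2 - 2 * s**3)        # smoothstep: v'(0) = v'(T) = 0
    a = dv * (6 * s - 6 * s**2) / T            # desired instantaneous acceleration
    return t, v, a

def pedal_from_map(speed, accel):
    """Hypothetical speed-based acceleration map: rows are speeds (m/s),
    columns are accelerations (m/s^2), values are pedal commands in [-1, 1]
    (negative = brake). A real map would be measured on the vehicle."""
    speeds = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    accels = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    table = np.array([
        [-0.8, -0.4, 0.05, 0.25, 0.45],
        [-0.8, -0.4, 0.10, 0.35, 0.55],
        [-0.9, -0.5, 0.15, 0.45, 0.65],
        [-0.9, -0.5, 0.20, 0.55, 0.75],
        [-1.0, -0.6, 0.25, 0.65, 0.85],
    ])
    i = np.abs(speeds - speed).argmin()
    j = np.abs(accels - accel).argmin()
    return table[i, j]

def follow_profile(v_actual, v_cmd, a_cmd, kp=0.05, ki=0.01, dt=0.05, integral=0.0):
    """Feedforward pedal from the map plus a small PI correction on speed error."""
    error = v_cmd - v_actual
    integral += error * dt
    pedal = pedal_from_map(v_actual, a_cmd) + kp * error + ki * integral
    return float(np.clip(pedal, -1.0, 1.0)), integral

t, v, a = speed_profile(v0=4.0, vf=12.0, a_max=1.5)
pedal, integ = follow_profile(v_actual=4.0, v_cmd=v[1], a_cmd=a[1])
print(f"first pedal command: {pedal:+.3f}")
```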
170

Der Vorteil des ersten Zugriffs durch "Webpositioning" - das Internet als Schnittstelle von Markenrecht und Wettbewerbsrecht /

Rousseau, Marc-André. January 2007 (has links) (PDF)
Dissertation, Universität Freiburg i. Br., 2005. / Bibliography pp. 274-285.
