About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
111

Anti-Spam Study: an Alliance-based Approach

Chiu, Yu-fen 12 September 2006 (has links)
The growing problem of spam has generated a need for reliable anti-spam filters. Many filtering techniques, along with machine learning and data mining, are used to reduce the amount of spam. Such algorithms can achieve very high accuracy, but at the cost of some false positives, and false positives are generally prohibitively expensive in the real world. Much work has been done to improve individual algorithms for detecting spam, but less has been reported on leveraging multiple algorithms in e-mail analysis. This study presents an alliance-based approach to classify, discover, and exchange interesting information on spam. The spam filter in this study is built on a mixture of rough set theory (RST), a genetic algorithm (GA), and the XCS classifier system. RST can process imprecise and incomplete data such as spam; the GA speeds up the search for an optimal solution (i.e., the rules used to block spam); and the reinforcement learning of XCS is a good mechanism for suggesting the appropriate classification for an e-mail. The results of spam filtering by the alliance-based approach are evaluated by several statistical methods and show strong performance. Two main conclusions can be drawn from this study: (1) the rules exchanged with other mail servers indeed help the filter block more spam than before, and (2) a combination of algorithms improves accuracy and reduces false positives for the problem of spam detection.
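The thesis itself gives no pseudocode at this point, but the alliance idea of exchanging well-performing blocking rules between mail servers can be illustrated with a short sketch; the Rule/MailServer names, the token-set rule representation, and the 0.95 sharing threshold are all hypothetical assumptions, not taken from the thesis:

```python
# Illustrative sketch of the alliance idea: mail servers exchange their
# best-performing spam-blocking rules and adopt peers' rules on trial.

from dataclasses import dataclass, field


@dataclass
class Rule:
    """A spam-blocking rule: block mail containing all listed tokens."""
    tokens: frozenset
    blocked: int = 0      # spam blocked by this rule so far
    misfires: int = 0     # legitimate mail wrongly blocked (false positives)

    def score(self) -> float:
        total = self.blocked + self.misfires
        return self.blocked / total if total else 0.0


@dataclass
class MailServer:
    name: str
    rules: list = field(default_factory=list)

    def share_rules(self, min_score: float = 0.95) -> list:
        """Offer only well-performing rules to allied servers."""
        return [r for r in self.rules if r.score() >= min_score]

    def adopt_rules(self, offered: list) -> None:
        """Merge a peer's rules, skipping duplicates."""
        known = {r.tokens for r in self.rules}
        for rule in offered:
            if rule.tokens not in known:
                # Imported rules start with a clean local track record.
                self.rules.append(Rule(tokens=rule.tokens))


# Two allied servers exchanging rules:
a = MailServer("mx1", [Rule(frozenset({"viagra", "cheap"}), blocked=97, misfires=1)])
b = MailServer("mx2")
b.adopt_rules(a.share_rules())
print(len(b.rules))  # -> 1
```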
112

Effective identities for trusted interactions in converged telecommunication systems

Balasubramaniyan, Vijay A. 08 July 2011 (has links)
Telecommunication systems have evolved significantly, and the recent convergence of telephony allows users to communicate through landlines, mobile phones, and Voice over IP (VoIP) phones. Unfortunately, this convergence has made caller identity easy to manipulate, resulting in both VoIP spam and Caller ID spoofing. In this dissertation, we introduce the notion of effective identity, a combination of mechanisms to (1) establish an identity for the caller that is harder to manipulate, and (2) provide additional information about the caller. We first use effective identities to address the VoIP spam problem by proposing CallRank, a novel mechanism built around call duration and social network linkages to differentiate between a legitimate user and a spammer. To ensure that this mechanism is privacy preserving, we create a token framework that allows a user to prove the existence of a social network path between himself and the user he is trying to contact, without actually revealing the path. We then look at the broader issue of determining identity across the entire telecommunication landscape to address Caller ID spoofing. Towards this, we develop PinDr0p, a technique to determine the provenance of a call: the source and the path the call has taken. In the absence of any verifiable metadata, provenance offers a means of uniquely identifying a call source. Finally, we use anomalies in timbre to develop London Calling, a mechanism to identify the geography of a caller. Together, the contributions made in this dissertation create effective identities that can help address the new threats in a converged telecommunication infrastructure.
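As a rough illustration of the CallRank intuition (not the dissertation's actual algorithm), the sketch below combines accumulated call duration with the existence of a social-network path. Note that the dissertation's token framework proves such a path exists without revealing it; the plain breadth-first search here is only for exposition, and the scoring formula and all names are assumptions:

```python
# Minimal sketch: callers accumulate credibility from past call durations,
# and a social-network path to the callee raises the score.

from collections import deque


def has_social_path(graph: dict, caller: str, callee: str) -> bool:
    """Breadth-first search for any path in the contact graph."""
    seen, queue = {caller}, deque([caller])
    while queue:
        node = queue.popleft()
        if node == callee:
            return True
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False


def call_credibility(total_call_seconds: float, socially_linked: bool) -> float:
    # Long past conversations suggest a human relationship; spammers'
    # calls are typically short or unanswered.
    base = min(total_call_seconds / 3600.0, 1.0)   # saturate at one hour
    return base * (2.0 if socially_linked else 1.0)


contacts = {"alice": ["bob"], "bob": ["carol"]}
linked = has_social_path(contacts, "alice", "carol")
print(call_credibility(total_call_seconds=900, socially_linked=linked))  # -> 0.5
```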
113

Workshop Mensch-Computer-Vernetzung

Hübner, Uwe 15 October 2003 (has links)
Workshop "Mensch-Computer-Vernetzung" (Human-Computer Networking), 14-17 April 2003 in Löbsal (near Meißen)
114

Netz- und Service-Infrastrukturen

Hübner, Uwe 21 May 2004 (has links)
Workshop "Netz- und Service-Infrastrukturen" vom 19.-22. April 2004 in Löbsal (bei Meißen)
115

Hat Bayes eine Chance?

Sontag, Ralph 10 May 2004 (has links) (PDF)
Workshop "Netz- und Service-Infrastrukturen" Hat Bayes eine Chance? Seit einigen Monaten oder Jahren werden verstärkt Bayes-Filter eingesetzt, um die Nutz-E-Mail ("`Ham"') vom unerwünschten "`Spam"' zu trennen. Diese stoßen jedoch leicht an ihre Grenzen. In einem zweiten Abschnitt wird ein Filtertest der Zeitschrift c't genauer analysiert.
116

Combating Threats to the Quality of Information in Social Systems

Lee, Kyumin 16 December 2013 (has links)
Many large-scale social systems such as Web-based social networks, online social media sites and Web-scale crowdsourcing systems have been growing rapidly, enabling millions of human participants to generate, share and consume content on a massive scale. This reliance on users can lead to many positive effects, including large-scale growth in the size and content in the community, bottom-up discovery of “citizen-experts”, serendipitous discovery of new resources beyond the scope of the system designers, and new social-based information search and retrieval algorithms. But the relative openness and reliance on users coupled with the widespread interest and growth of these social systems carries risks and raises growing concerns over the quality of information in these systems.

In this dissertation research, we focus on countering threats to the quality of information in self-managing social systems. Concretely, we identify three classes of threats to these systems: (i) content pollution by social spammers, (ii) coordinated campaigns for strategic manipulation, and (iii) threats to collective attention. To combat these threats, we propose three inter-related methods for detecting evidence of these threats, mitigating their impact, and improving the quality of information in social systems. We augment this three-fold defense with an exploration of their origins in “crowdturfing”, a sinister counterpart to the enormous positive opportunities of crowdsourcing. In particular, this dissertation research makes four unique contributions:

• The first contribution of this dissertation research is a framework for detecting and filtering social spammers and content polluters in social systems. To detect and filter individual social spammers and content polluters, we propose and evaluate a novel social honeypot-based approach (a toy sketch of this idea follows the list).

• Second, we present a set of methods and algorithms for detecting coordinated campaigns in large-scale social systems. We propose and evaluate a content-driven framework for effectively linking free text posts with common “talking points” and extracting campaigns from large-scale social systems.

• Third, we present a dual study of the robustness of social systems to collective attention threats through both a data-driven modeling approach and deployment over a real system trace. We evaluate the effectiveness of countermeasures deployed based on the first moments of a bursting phenomenon in a real system.

• Finally, we study the underlying ecosystem of crowdturfing for engaging in each of the three threat types. We present a framework for “pulling back the curtain” on crowdturfers to reveal their underlying ecosystem on both crowdsourcing sites and social media.
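As referenced in the first bullet above, here is a toy sketch of scoring a profile harvested by a social honeypot (an account that exists only to attract spammers, so any profile that contacts it is worth scoring). The behavioural features and thresholds are illustrative assumptions, not the dissertation's trained classifier:

```python
from dataclasses import dataclass


@dataclass
class Profile:
    followers: int
    following: int
    posts_with_urls: int
    total_posts: int


def spam_signals(p: Profile) -> int:
    """Count crude content-polluter signals on a harvested profile."""
    signals = 0
    if p.following > 10 * max(p.followers, 1):    # aggressive following
        signals += 1
    if p.total_posts and p.posts_with_urls / p.total_posts > 0.8:
        signals += 1                              # almost every post links out
    return signals


suspect = Profile(followers=3, following=4200, posts_with_urls=95, total_posts=100)
print(spam_signals(suspect) >= 2)  # -> True: flag for review
```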
117

Towards improving e-mail content classification for spam control: architecture, abstraction, and strategies

Marsono, Muhammad Nadzir 28 August 2007 (has links)
This dissertation discusses techniques to improve the effectiveness and efficiency of spam control. Specifically, layer-3 e-mail content classification is proposed to allow e-mail pre-classification (for fast spam detection at receiving e-mail servers) and to allow distributed processing at network nodes for fast spam detection at spam control points, e.g., at e-mail servers. Fast spam detection allows receiving e-mail servers to prioritize e-mail servicing so that non-spam deliveries are safeguarded even under heavy spam traffic, and it allows spam rejection during Simple Mail Transfer Protocol (SMTP) sessions for inbound and outbound spam control. The dissertation makes four contributions.

In the first contribution, we propose a hardware architecture for a naive Bayes content classification unit for high-throughput spam detection. We use the logarithmic number system to simplify the naive Bayes computation. To handle the fast but lossy logarithmic number system computation, we analyze the noise model of our hardware architecture. Through noise analysis, synthesis, and verification by numerical simulation, we show that the naive Bayes classification unit, implemented on an FPGA, can process more than one hundred million features per second with very low computation noise, an order of magnitude faster than a general-purpose processor implementation.

In the second contribution, we propose e-mail content pre-classification at the network layer (layer 3) instead of at the application layer (layer 7), as currently practiced, to allow e-mail packet pre-classification and distributed processing for effective spam detection beyond server implementations. By performing e-mail content classification at a lower abstraction level, e-mail packets can be pre-processed, without reassembly, at any network node between sender and receiver. We demonstrate that naive Bayes e-mail content classification can be adapted for layer-3 processing, and that fast e-mail class estimation can be performed at receiving e-mail servers. Through simulation with e-mail data sets, we show that layer-3 e-mail content classification detects spam with accuracy and false-positive rates approximately equal to those at layer 7.

In the third contribution, we propose a prioritized e-mail servicing scheme, using a priority queuing approach, to improve spam handling at receiving e-mail servers. In this scheme, non-spam e-mails are given higher priority than spam. Four servicing strategies for the proposed scheme are studied. We analyze the performance of this scheme under different e-mail traffic loads and service capacities and show that non-spam delay and loss probability can be reduced when the server is under-provisioned.

In the fourth contribution, we propose a spam handling scheme that rejects spam during SMTP sessions. The proposed scheme allows inbound and outbound spam control and is capable of reducing server load and hence non-spam queuing delay and loss probability. Again, analysis under different e-mail traffic loads and service capacities shows that non-spam delay and loss probability can be reduced when the server is under-provisioned.

In summary, this dissertation presents four techniques to improve spam control based on e-mail content classification. We envision that the proposed approaches complement rather than replace current spam control systems: they can work with existing systems and support proactive control of spam and other e-mail-based threats, such as phishing and e-mail worms, anywhere across the Internet.
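The first contribution's use of the logarithmic number system rests on a simple identity: in the log domain, the product of naive Bayes likelihoods becomes a sum, so classification reduces to additions and one comparison. A minimal software sketch of that log-domain scoring follows; the tiny likelihood tables and all names are illustrative assumptions, not the dissertation's hardware design:

```python
import math

# log P(feature | class): illustrative values only
log_lik = {
    "spam":    {"free": math.log(0.30), "meeting": math.log(0.01)},
    "nonspam": {"free": math.log(0.02), "meeting": math.log(0.20)},
}
log_prior = {"spam": math.log(0.6), "nonspam": math.log(0.4)}


def classify(features):
    """Score each class as a sum of log-probabilities and pick the max."""
    scores = {}
    for cls in log_lik:
        # Product of likelihoods -> sum of logs (one adder per feature
        # in a logarithmic-number-system datapath)
        scores[cls] = log_prior[cls] + sum(log_lik[cls][f] for f in features
                                           if f in log_lik[cls])
    return max(scores, key=scores.get), scores


label, _ = classify(["free", "free", "meeting"])
print(label)  # -> "spam"
```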
118

Framework for botnet emulation and analysis

Lee, Christopher Patrick 12 March 2009 (has links)
Criminals use the anonymity and pervasiveness of the Internet to commit fraud, extortion, and theft, and botnets are their primary tool. Botnets allow criminals to accumulate and covertly control multiple Internet-connected computers, and to use this network of controlled computers to flood networks with traffic from multiple sources, send spam, spread infection, spy on users, commit click fraud, run adware, and host phishing sites. This presents serious privacy risks and financial burdens to businesses and individuals. Furthermore, all indicators show that the problem is worsening, because the research and development cycle of the criminal industry is faster than that of security research. To enable researchers to measure botnet connection models and countermeasures, this work provides a flexible, rapidly augmentable framework for creating test botnets. This framework, Rubot, written in the Ruby language, enables researchers to run a botnet on a closed network and to rapidly implement new communication, spreading, control, and attack mechanisms for study. This is a significant improvement over augmenting the C++ code bases of the most popular botnets, Agobot and SDBot, and it allows researchers to implement new threats and their corresponding defenses before the criminal industry can. The Rubot experiment framework includes models for some of the latest trends in botnet operation, such as peer-to-peer control, fast-flux DNS, and periodic updates. Our approach implements the key network features of existing botnets and provides the infrastructure required to run a botnet in a closed environment.
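Rubot itself is written in Ruby and its internals are not reproduced here; the sketch below only illustrates, in Python, the kind of pluggable module registry such an experiment framework might use so that new communication or control models can be swapped in quickly. It is benign scaffolding with hypothetical names and no attack logic:

```python
# Registry of experiment modules for a closed-network testbed: each model
# registers under a name so experiment configs can select it by string.

MODULES = {}


def register(kind: str):
    """Class decorator: make a module available to experiment configs."""
    def wrap(cls):
        MODULES[kind] = cls
        return cls
    return wrap


@register("control/central")
class CentralControl:
    def describe(self) -> str:
        return "single command node, star topology"


@register("control/p2p")
class PeerControl:
    def describe(self) -> str:
        return "peer-to-peer control, no single point of failure"


# An experiment config selects models by name:
config = {"control": "control/p2p"}
model = MODULES[config["control"]]()
print(model.describe())
```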
119

A spam-detecting artificial immune system

Oda, Terri January 1900 (has links)
Thesis (M.C.S.), Carleton University, 2005. Includes bibliographical references (p. 115-123). Also available in electronic format on the Internet.
120

E-shape analysis

Sroufe, Paul; Dantu, Ram January 2009 (has links)
Thesis (M.S.), University of North Texas, December 2009. Title from title page display. Includes bibliographical references.
