About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Grid Fault management techniques: the case of a Grid environment with malicious entities

Akimana, Rachel 01 October 2008 (has links)
Fault tolerance and fault management in data/computing grids are of capital importance. As in any other distributed system, the components of a grid may fail at any time, but the risk of failure grows with the size of the system and is therefore exacerbated in a grid. Moreover, while trying to exploit the resources the grid offers, the applications running on it become increasingly complex (e.g., they involve intricate interactions and take days to execute), which makes them more vulnerable to faults. The hardest part of fault management in a grid is that it is difficult to know whether a fault occurring on a grid entity was induced maliciously or accidentally. In this thesis we use the term fault, in the broad sense, to refer to any unexpected state arising on any component of the grid. Some of these states cause equally unexpected behaviour that is perceptible at the grid level, while others go unnoticed. Furthermore, some of these faults result from malicious action, whereas others occur accidentally or spontaneously. In this thesis we address maliciously induced faults, which generally go unnoticed, and we consider in particular the confidentiality and integrity of data stored long-term on the grid. The study of data confidentiality was carried out in two parts, the first concerning the confidentiality of active data. There, we considered an application that searches for similarities to a DNA sequence in a database of DNA sequences stored on the grid, and we proposed a method that performs the comparison on a remote component while keeping the query sequence confidential. Concerning passive data, we proposed a method for sharing confidential, encrypted data on the grid. With respect to data integrity, we considered the case of anonymous data for passive data integrity; for active data, we considered the problem of corruption of jobs executed on the grid. For each case, we proposed mechanisms to verify the authenticity of the data used or produced by these applications.
12

Malicious DHTML Detection by Model-based Reasoning

Lin, Shih-Fen 21 August 2007 (has links)
Dynamic HTML (DHTML), which combines HTML, client-side scripting, and related technologies, is a mechanism for creating dynamic content in a web page. Because of the demand for dynamic web pages and the spread of web applications, attackers have gained a new, easily spread, and hard-to-detect intrusion vector: DHTML. Commercial anti-virus software, which commonly relies on pattern matching, remains weak against commonly obfuscated malicious DHTML. We therefore propose a new detection algorithm, Model-based Reasoning (MoBR), built on models and reasoning, that is resilient to the obfuscations attackers commonly use and can correctly determine whether a web page is malicious. By describing textual and semantic signatures, we construct the model of a malicious DHTML page through a template mechanism. Experimental evaluation on real DHTML demonstrates that our detection algorithm is tolerant to obfuscation and performs much better than commercial anti-virus software. Furthermore, it can detect variants of malicious DHTML with a low false positive rate.
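The abstract gives no implementation detail for the template mechanism; as a loose illustration of combining text signatures with coarser semantic signatures, one might write something like the following, where every signature and threshold is invented for illustration.

```python
import re

# Every signature and threshold below is invented for illustration; the thesis's
# actual text and semantic signatures are not reproduced here.
TEXT_SIGNATURES = [
    re.compile(r"unescape\s*\(", re.I),          # string de-obfuscation
    re.compile(r"eval\s*\(", re.I),              # dynamic code execution
    re.compile(r"document\.write\s*\(", re.I),   # runtime content injection
]

def semantic_signatures(page: str) -> list:
    """Coarse facts about the page that survive superficial obfuscation."""
    scripts = re.findall(r"<script.*?</script>", page, re.S | re.I)
    inline_script_len = sum(len(s) for s in scripts)
    return [
        inline_script_len > 4096,                                # unusually large inline script
        page.count("%u") > 50,                                   # long unicode-escaped payload
        "iframe" in page.lower() and "width=0" in page.lower(),  # hidden iframe
    ]

def looks_malicious(page: str, threshold: int = 3) -> bool:
    # A page is flagged when enough text and semantic signatures fire together.
    hits = sum(bool(sig.search(page)) for sig in TEXT_SIGNATURES)
    hits += sum(semantic_signatures(page))
    return hits >= threshold
```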
13

Robust training sequence design for cooperative communications

Huang, Chiun-wei 21 July 2010 (has links)
Recently, the difficulty of placing multiple antennas on a mobile terminal to exploit more diversity has been addressed by cooperative communication, in which several single-antenna relay nodes partner with each other to act as virtual multiple antennas and provide spatial diversity. Much existing research in cooperative communication focuses on designing relay strategies that achieve better communication performance. However, most of these designs require the channel state information (CSI) to be perfectly known, whereas in practice CSI is generally unknown. Therefore, before the benefits of a relay-assisted network can be realized, accurate CSI must be obtained at the destination or at the relays. In this thesis, we consider training design for channel estimation in the amplify-and-forward (AF) relay network. Involving multiple relay nodes to exploit spatial diversity requires sophisticated and complicated protocols, which makes it difficult to rule out misbehaving relay nodes, so the channel estimation scheme in a cooperative network needs to be robust against possible relay misbehavior. Most prior work develops channel estimation schemes under the assumption of a perfectly executed relay-assisted protocol; by contrast, this work designs robust channel estimation schemes that withstand the possible presence of relay misbehavior. Beyond robustness to relay misbehavior, this work also considers a more general channel model when designing the training sequence and channel estimation scheme. Specifically, instead of assuming independent channels across relays, this thesis considers correlated channels in both phases and correlated noise in the first phase. Overall, the main problem of this work is to design robust channel estimation and training sequences against relay misbehavior when the channels within the cooperative network are not restricted to be independent.
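For context, the standard two-phase AF training model that work of this kind builds on can be written as follows; this is a generic textbook formulation, not notation taken from the thesis.

```latex
% Two-phase amplify-and-forward (AF) relay training model (textbook form,
% assumed here for illustration; notation is not taken from the thesis itself).
% Phase 1: the source broadcasts a training sequence s to K single-antenna relays.
% Phase 2: relay k scales its observation by a gain a_k and forwards it.
\begin{align}
  \mathbf{y}_{r,k} &= h_{sr,k}\,\mathbf{s} + \mathbf{n}_{r,k},
      \qquad k = 1,\dots,K \quad \text{(phase 1)} \\
  \mathbf{y}_{d} &= \sum_{k=1}^{K} a_k\, h_{rd,k}\,\mathbf{y}_{r,k} + \mathbf{n}_d
      = \sum_{k=1}^{K} a_k\, h_{rd,k}\, h_{sr,k}\,\mathbf{s}
        + \underbrace{\sum_{k=1}^{K} a_k\, h_{rd,k}\,\mathbf{n}_{r,k} + \mathbf{n}_d}_{\text{effective noise}}
      \quad \text{(phase 2)}
\end{align}
```

In this formulation the destination estimates the cascaded coefficients $a_k h_{rd,k} h_{sr,k}$ from $\mathbf{y}_d$; robust training design then amounts to choosing $\mathbf{s}$ (and the estimator) so that the estimate degrades gracefully when some relays deviate from the protocol, or when the channels and the phase-1 noise are correlated, which is the regime the thesis targets.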
14

Malicious URL Detection in Social Network

Su, Qun-kai 15 August 2011 (has links)
Social network websites have become very popular. Users establish connections with other users, forming a social network, and quickly share information, photographs, and videos with friends. Malware known as social network worms can send text messages containing malicious URLs, using social engineering techniques to lure users into clicking them; once a user is infected, the worm can quickly attack others through the compromised account. Out of curiosity, most users click such links without validating them. This thesis proposes a malicious URL detection method for the Facebook wall that combines heuristic features with high discriminative power and a machine learning algorithm to predict whether a URL message is safe. Experiments show that the proposed approach achieves about a 96.3% true positive rate, a 95.4% true negative rate, and 95.7% accuracy.
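The abstract does not list the heuristic features it uses; the sketch below only illustrates the general pipeline it describes (hand-crafted features over a wall message plus a supervised classifier), with every feature and the toy data invented for illustration.

```python
from urllib.parse import urlparse
from sklearn.ensemble import RandomForestClassifier

# Illustrative features only; the thesis's actual heuristic feature set is not listed here.
def message_features(text: str, url: str) -> list:
    host = urlparse(url).netloc
    return [
        len(url),                                   # long URLs are often obfuscated
        url.count("/"),                             # deep path structure
        int(any(c.isdigit() for c in host)),        # digits in the host name
        host.count("."),                            # number of subdomain levels
        int("free" in text.lower() or "!" in text), # bait wording in the wall message
    ]

# Toy training data standing in for labeled Facebook wall messages.
samples = [
    ("Check my vacation photos", "http://example.com/album", 0),
    ("FREE gift!!! click now",   "http://198.51.100.7/x/y/z/win.php", 1),
]
X = [message_features(t, u) for t, u, _ in samples]
y = [label for _, _, label in samples]
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

def is_malicious(text: str, url: str) -> bool:
    return bool(clf.predict([message_features(text, url)])[0])
```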
15

Malicious Web Page Detection Based on Anomaly Behavior

Tsai, Wan-yi 04 February 2009 (has links)
Because of the convenience of the Internet, we rely heavily on it for information searching and sharing, forum discussion, and online services. However, many of the websites we visit are developed by people with limited security knowledge, which leaves many vulnerabilities in web applications. Hackers have successfully exploited these vulnerabilities to inject malicious JavaScript into compromised web pages and trigger drive-by download attacks. Based on our long-term observation, malicious web pages exhibit unusual behavior aimed at evading detection, which distinguishes them from normal pages. We therefore propose a client-side malicious web page detection mechanism, named Web Page Checker (WPC), which traces and analyzes anomalous behavior to identify malicious web pages. The experimental results show that our method can identify malicious web pages and alert website visitors efficiently.
16

Understanding and Defending Against Malicious Identities in Online Social Networks

Cao, Qiang January 2014 (has links)
Serving more than one billion users around the world, today's online social networks (OSNs) pervade our everyday life and change the way people connect and communicate with each other. However, the open nature of OSNs attracts a constant interest in attacking and exploiting them. In particular, they are vulnerable to various attacks launched through malicious accounts, including fake accounts and compromised real user accounts. In those attacks, malicious accounts are used to send out spam, spread malware, distort online voting, etc.

In this dissertation, we present practical systems that we have designed and built to help OSNs effectively throttle malicious accounts. The overarching contribution of this dissertation is the approaches that leverage the fundamental weaknesses of attackers to defeat them. We have explored defense schemes along two dimensions of an attacker's weaknesses: limited social relationships and strict economic constraints.

The first part of this dissertation focuses on how to leverage social relationship constraints to detect fake accounts. We present SybilRank, a novel social-graph-based detection scheme that can scale up to OSNs with billions of users. SybilRank is based on the observation that the social connections between fake accounts and real users, called attack edges, are limited. It formulates the detection as scalable user ranking according to the landing probability of early-terminated random walks on the social graph. SybilRank generates an informative user-ranked list with a substantial fraction of fake accounts at the bottom, and bounds the number of fake accounts that are ranked higher than legitimate users to O(log n) per attack edge, where n is the total number of users. We have demonstrated the scalability of SybilRank via a prototype on Hadoop MapReduce, and its effectiveness in the real world through a live deployment at Tuenti, the largest OSN in Spain.

The second part of this dissertation focuses on how to exploit an attacker's economic constraints to uncover malicious accounts. We present SynchroTrap, a system that uncovers large groups of active malicious accounts, including both fake accounts and compromised accounts, by detecting their loosely synchronized actions. The design of SynchroTrap is based on the observation that malicious accounts usually perform loosely synchronized actions to accomplish an attack mission, due to limited budgets, specific mission goals, etc. SynchroTrap transforms the detection into a scalable clustering algorithm. It uncovers large groups of accounts that act similarly at around the same time for a sustained period of time. To handle the enormous volume of user action data in large OSNs, we designed SynchroTrap as an incremental processing system that processes small data chunks on a daily basis but aggregates the computational results over the continuous data stream. We implemented SynchroTrap on Hadoop and Giraph, and we deployed it on Facebook and Instagram. This deployment has resulted in the unveiling of millions of malicious accounts and thousands of large attack campaigns per month. / Dissertation
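The core propagation step the abstract attributes to SybilRank (early-terminated random walks, implemented as a few rounds of power iteration) can be sketched as follows; this is a simplified illustration based on the published description, not the production Hadoop implementation.

```python
import math

def sybilrank(graph: dict, trusted_seeds: set, total_trust: float = 1000.0) -> list:
    """graph: undirected adjacency lists {node: [neighbors]}.
    Returns nodes ranked from most to least trustworthy."""
    n = len(graph)
    iterations = int(math.ceil(math.log2(n)))  # early termination after O(log n) steps

    # Seed all trust on known-good accounts.
    trust = {v: (total_trust / len(trusted_seeds) if v in trusted_seeds else 0.0)
             for v in graph}

    # Power iteration: each node splits its trust equally among its neighbors,
    # so trust leaks into the fake region only through the few attack edges.
    for _ in range(iterations):
        nxt = {v: 0.0 for v in graph}
        for v, neighbors in graph.items():
            if neighbors:
                share = trust[v] / len(neighbors)
                for u in neighbors:
                    nxt[u] += share
        trust = nxt

    # Rank by degree-normalized trust; fake accounts sink toward the bottom.
    return sorted(graph, key=lambda v: trust[v] / max(len(graph[v]), 1), reverse=True)
```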
17

Challenging Policies That Do Not Play Fair: A Credential Relevancy Framework Using Trust Negotiation Ontologies

Leithead, Travis S. 29 August 2005 (has links) (PDF)
This thesis challenges the assumption that policies will "play fair" within trust negotiation. Policies that do not "play fair" contain requirements for authentication that are misleading, irrelevant, and/or incorrect, based on the current transaction context. To detect these unfair policies, trust negotiation ontologies provide the context to determine the relevancy of a given credential set for a particular negotiation. We propose a credential relevancy framework for use in trust negotiation that utilizes ontologies to process the set of all available credentials C and produce a subset of credentials C' relevant to the context of a given negotiation. This credential relevancy framework reveals the credentials inconsistent with the current negotiation and detects potentially malicious policies that request these credentials. It provides a general solution for detecting policies that do not "play fair," such as those used in credential phishing attacks, malformed policies, and malicious strategies. This thesis motivates the need for a credential relevancy framework, outlines considerations for designing and implementing it (including topics that require further research), and analyzes a prototype implementation. The credential relevancy framework prototype, analyzed in this thesis, has the following two properties: first, it incurs less than 10% extra execution time compared to a baseline trust negotiation prototype (e.g., TrustBuilder); second, credential relevance determination does not compromise the desired goals of trust negotiation—transparent and automated authentication in open systems. Current trust negotiation systems integrated with a credential relevancy framework will be enabled to better defend against users that do not always "play fair" by incorporating a credential relevancy framework.
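The framework is described only abstractly here; as a loose illustration of the C to C' filtering it performs, the ontology can be caricatured as a mapping from negotiation context to relevant credential types. Everything below, including the contexts and credential names, is invented for illustration.

```python
# Entirely illustrative: a trust-negotiation ontology reduced to a lookup from
# negotiation context to the credential types considered relevant in that context.
ONTOLOGY = {
    "online_pharmacy_purchase": {"prescription", "insurance_card", "government_id"},
    "library_access":           {"student_id", "library_card"},
}

def relevant_credentials(all_credentials: list, context: str) -> list:
    """Return C', the subset of C (credentials as dicts with a 'type' key)
    relevant to the current negotiation context."""
    allowed = ONTOLOGY.get(context, set())
    return [c for c in all_credentials if c["type"] in allowed]

def suspicious_requests(requested_types: set, context: str) -> set:
    """Credential types a policy requests that fall outside the context:
    a possible sign of a policy that does not 'play fair' (e.g., phishing)."""
    return requested_types - ONTOLOGY.get(context, set())

# Example: a library-access policy asking for a government ID is flagged.
print(suspicious_requests({"student_id", "government_id"}, "library_access"))
```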
18

Detecting Malicious Software by Dynamic Execution

Dai, Jianyong 01 January 2009 (has links)
The traditional way to detect malicious software is signature matching. However, signature matching detects only known malicious software; to detect unknown malicious software, it is necessary to analyze the software's impact on the system when it is executed. In one approach, the code is statically analyzed for malicious patterns. Another approach is to execute the program and determine its nature dynamically. Since executing malicious code may harm the system, the code must be run in a controlled environment. For that purpose, we have developed a sandbox that protects the system by intercepting potentially malicious behavior through hooked Win32 system calls. Using this sandbox, we detect unknown viruses with dynamic instruction sequence mining techniques. By collecting runtime instruction sequences in basic blocks, we extract instruction sequence patterns based on instruction associations and build classification models from these patterns. Applying such a model, we predict the nature of an unknown program. We compare our approach with several others, including simple heuristics, N-gram, and static instruction sequences. We have also developed a method to identify a family of malicious software from its system call trace: we construct a structural system call diagram from captured dynamic system call traces and generate a smart system call signature using a profile hidden Markov model (PHMM) over modularized system call blocks. The smart system call signature weakly identifies a family of malicious software.
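As a rough illustration of the kind of dynamic mining described above (instruction associations within basic blocks feeding a classifier), a minimal sketch might look like this; the features, toy traces, and classifier choice are placeholders rather than the thesis's actual design.

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Illustrative only: turn each basic block's instruction trace into
# co-occurrence ("association") features, then train a classifier.
def block_features(basic_blocks: list) -> Counter:
    feats = Counter()
    for block in basic_blocks:            # block = list of mnemonics, e.g. ["push", "call"]
        for i, a in enumerate(block):
            for b in block[i + 1:]:       # unordered association within the block
                feats[f"{a}|{b}"] += 1
    return feats

# Toy traces standing in for instruction sequences captured in the sandbox.
programs = [
    ([["push", "call", "pop"], ["mov", "xor", "jmp"]], 0),   # benign-looking
    ([["xor", "xor", "call"], ["int", "call", "jmp"]], 1),   # malicious-looking
]
vec = DictVectorizer()
X = vec.fit_transform([block_features(blocks) for blocks, _ in programs])
y = [label for _, label in programs]
clf = LogisticRegression(max_iter=1000).fit(X, y)

def predict(blocks: list) -> int:
    return int(clf.predict(vec.transform([block_features(blocks)]))[0])
```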
19

Intelligent Honeypot Agents for Detection of Blackhole Attack in Wireless Mesh Networks

Prathapani, Anoosha January 2010 (has links)
No description available.
20

A MACHINE LEARNING BASED WEB SERVICE FOR MALICIOUS URL DETECTION IN A BROWSER

Hafiz Muhammad Junaid Khan (8119418) 12 December 2019 (has links)
Malicious URLs pose serious cyber-security threats to Internet users, so it is critical to detect them and block user access. In the past few years, several techniques have been proposed to differentiate malicious URLs from benign ones with the help of machine learning; machine learning algorithms learn trends and patterns in a data set and use them to identify anomalies. In this work, we attempt to find generic features for detecting malicious URLs by analyzing two publicly available malicious URL data sets. To achieve this, we identify a list of substantial features that can be used to classify all types of malicious URLs, and then select the most significant lexical features using Chi-Square and ANOVA statistical tests. The effectiveness of these feature sets is tested with a combination of single and ensemble machine learning algorithms. We build a machine-learning-based real-time malicious URL detection system as a web service that detects malicious URLs in a browser: a Chrome extension intercepts the browser's URL requests and sends them to the web service, which classifies each URL as benign or malicious using the saved ML model. We also evaluate the performance of the web service to test whether it is scalable.
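The statistical feature selection mentioned above can be sketched with scikit-learn as shown below; the lexical features, data values, and k are placeholders, not the features actually selected in the thesis.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, f_classif

# Placeholder lexical features per URL: [length, dot count, digit count, has '@', has '-'].
X = np.array([
    [21, 2,  0, 0, 0],   # short, clean URL
    [63, 4, 11, 1, 1],   # long, digit-heavy, contains '@' and '-'
    [34, 3,  2, 0, 1],
    [70, 5, 14, 0, 1],
])
y = np.array([0, 1, 0, 1])   # 0 = benign, 1 = malicious

# Chi-square test: rank non-negative features by dependence on the class label.
chi_selector = SelectKBest(chi2, k=3).fit(X, y)
print("chi2 scores:", chi_selector.scores_)

# ANOVA F-test: an alternative ranking that also handles real-valued features.
anova_selector = SelectKBest(f_classif, k=3).fit(X, y)
print("ANOVA F scores:", anova_selector.scores_)

X_selected = chi_selector.transform(X)   # keep the 3 highest-scoring features
```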
