1.
Creating a Secure Server Architecture and Policy for Linux-based Systems. Kourtesis, Marios, January 2015.
Creating and maintaining servers for hosting services in a secure and reliable way is an important but complex and time-consuming task. Misconfiguration and lack of server maintenance can leave the system vulnerable, and attackers can exploit these vulnerabilities to penetrate the system internals and cause damage. A standard architecture/configuration supporting the needed services saves time and resources while reducing security risk, and a server architecture protected by a security policy can secure the integrity and quality of the overall services. This research demonstrates how to build a secure server architecture protected by a security policy. To achieve this, a security policy and a checklist were designed and combined with a host-based IDPS, an NMS, and a WAF.
2.
W2R: an ensemble anomaly detection model inspired by language models for web application firewall security. Wang, Zelong; AnilKumar, Athira, January 2023.
Nowadays, web application attacks have increased tremendously due to the large number of users and applications. Industries are therefore paying more attention to Web Application Firewalls (WAFs), which act as a shield between the application and the internet by filtering and monitoring HTTP traffic. Most prior work focuses either on traditional feature extraction or on deep methods that require no separate feature-extraction step; the combination of an unsupervised language model with a classic dimension-reduction method is less explored for this problem. Inspired by this gap, we propose a new unsupervised anomaly detection model with better results than the existing state-of-the-art models for anomaly detection in WAF security. The paper explores WAF security through the following pipeline: 1) feature extraction from HTTP traffic packets using NLP (natural language processing) methods such as word2vec and BERT; 2) dimension reduction by PCA and autoencoders; and 3) anomaly detection with OCSVM, isolation forest, LOF, and combinations of these algorithms, to explore how each choice affects the results. We evaluated on the CSIC 2010 and ECML/PKDD 2007 datasets, where the model achieved better results.
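The three-stage pipeline this abstract describes can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the thesis implementation: a character n-gram bag-of-tokens vectorizer stands in for the word2vec/BERT embeddings, and a handful of hand-written HTTP request lines stand in for the CSIC 2010 / ECML-PKDD 2007 datasets.

```python
# Sketch of: tokenize HTTP requests -> PCA -> ensemble of one-class detectors.
# Stand-ins (not from the thesis): CountVectorizer instead of word2vec/BERT,
# toy request strings instead of the CSIC 2010 / ECML-PKDD 2007 corpora.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

# Stage 1: feature extraction from raw HTTP request lines.
normal = [f"GET /static/page{i}.html HTTP/1.1" for i in range(30)]
attack = ["GET /index.php?id=1' OR '1'='1 HTTP/1.1"]
vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X_train = vec.fit_transform(normal).toarray()
X_test = vec.transform(normal[:2] + attack).toarray()

# Stage 2: dimension reduction with PCA.
pca = PCA(n_components=5).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Stage 3: ensemble of one-class detectors trained on normal traffic only;
# the final label is a majority vote over each detector's +1/-1 prediction.
detectors = [IsolationForest(random_state=0).fit(Z_train),
             OneClassSVM(nu=0.1).fit(Z_train),
             LocalOutlierFactor(novelty=True).fit(Z_train)]
votes = np.sign(sum(d.predict(Z_test) for d in detectors))
print(votes)  # +1 = normal, -1 = anomalous, one entry per test request
```

With three voters the sign of the summed predictions is always decisive, which is one simple way to combine the algorithms as the abstract suggests.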
3.
Project X: All-in-one WAF testing tool. Anantaprayoon, Amata, January 2020.
A Web Application Firewall (WAF) is used to protect a web application (web app). One advantage of having a WAF is that it can detect possible attacks even if the web app itself implements no input validation. But how can a WAF protect the web app if the WAF itself is vulnerable? In general, four methods are used to test a WAF: fuzzing, payload execution, bypassing, and footprinting. Several open-source WAF testing tools exist, but each appears to offer only one or two of these methods, so a tester needs multiple tools, and must learn how each one works, to cover all of them. This project addresses that difficulty by developing a WAF testing tool called ProjectX that offers all four testing methods. ProjectX has been tested in a testing environment, and the results show that it fulfilled its requirements. Moreover, ProjectX is available on GitHub for any developer who wants to improve it or add functionality.
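A payload-execution/bypass check of the kind such tools automate can be sketched as below. The regex rule set is a hypothetical stand-in for a real WAF (not ProjectX's actual code), so the script stays runnable offline; a real tester would send the payloads over HTTP and inspect the responses.

```python
# Toy payload-execution test: run known attack payloads against a naive,
# hypothetical regex "WAF" and record which payloads it fails to block.
import re

waf_rules = [re.compile(p, re.I) for p in (r"<script", r"union\s+select", r"\.\./")]
payloads = {
    "xss":       "<script>alert(1)</script>",
    "sqli":      "1 UNION SELECT password FROM users",
    "traversal": "../../etc/passwd",
    "xss-enc":   "%3Cscript%3Ealert(1)%3C%2Fscript%3E",  # URL-encoded bypass attempt
}

def blocked(payload: str) -> bool:
    """True if any WAF rule matches the raw payload string."""
    return any(r.search(payload) for r in waf_rules)

results = {name: blocked(p) for name, p in payloads.items()}
print(results)  # the URL-encoded XSS slips past the naive rules
```

The bypass case shows why a WAF tester needs more than one method: the rules catch every plain payload yet miss a trivially encoded variant.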
4.
Detection of Vulnerability Scanning Attacks using Machine Learning: Application Layer Intrusion Detection and Prevention by Combining Machine Learning and AppSensor Concepts. Shahrivar, Pojan, January 2022.
It is well established that machine learning techniques have been used with great success in other domains and have been leveraged to deal with evolving sources of abuse, such as spam. This study aims to determine whether machine learning techniques can be used to create a model that detects vulnerability scanning attacks, using proprietary real-world data collected from tCell, a web application firewall. In this context, a vulnerability scanning attack is defined as an automated process that detects and classifies security weaknesses and flaws in the web application. To test the hypothesis that machine learning techniques can be used to create a detection model, twenty-four models were trained. The models showed high precision and recall, ranging from 91% to 96% and from 85% to 93%, respectively. Although classification performance was strong, the models were not sufficiently calibrated, which resulted in underconfidence in the predictions. The results can therefore be viewed as a performance baseline. Nevertheless, they demonstrate an advance over the simplistic threshold-based techniques developed in the early days of the internet, but further research and development is required to tune and calibrate the models.
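The evaluate-then-calibrate loop this abstract describes can be sketched as follows. Everything here is an illustrative assumption, not the thesis's setup: the tCell data is proprietary, so synthetic per-client traffic features (request rate, 404 ratio, distinct paths per minute) stand in for it, and a random forest stands in for the twenty-four trained models.

```python
# Sketch: train a scanner-detection classifier, report precision/recall,
# then check probability calibration (Brier score), since the thesis found
# the models underconfident. Features and model choice are stand-ins for
# the proprietary tCell data and the thesis's actual models.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import precision_score, recall_score, brier_score_loss

rng = np.random.default_rng(0)
# Synthetic features: [requests/min, 404-ratio, distinct paths/min]
benign = rng.normal([20, 0.05, 5], [5, 0.02, 2], size=(500, 3))
scanner = rng.normal([300, 0.60, 120], [50, 0.10, 30], size=(500, 3))
X = np.vstack([benign, scanner])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
prec = precision_score(y_te, pred)
rec = recall_score(y_te, pred)
print("precision", prec, "recall", rec)

# Sigmoid (Platt) calibration on cross-validation folds; a lower Brier
# score means predicted probabilities better match observed frequencies.
cal = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                             method="sigmoid", cv=3).fit(X_tr, y_tr)
print("Brier raw       ", brier_score_loss(y_te, clf.predict_proba(X_te)[:, 1]))
print("Brier calibrated", brier_score_loss(y_te, cal.predict_proba(X_te)[:, 1]))
```

Comparing Brier scores before and after calibration is one standard way to quantify the over/underconfidence issue the abstract raises, separately from precision and recall.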