11 |
Evaluating and comparing the web application security testing tools: Identifying and Applying Key Metrics. Thota, Sanmay Bhavanish; Vemula, Sai Ajit Jayasimha. January 2024.
Background: Web application security (WAS) testing is crucial for protecting web applications from cyber threats. However, organizations often struggle to select effective WAS testing tools because there is no well-defined set of evaluation criteria. This research addresses that need by identifying the key metrics for evaluating and comparing WAS testing tools.

Objectives: The primary objectives are to identify the key metrics for comparing WAS testing tools, to validate the significance of these metrics through semi-structured interviews, and to compare WAS testing tools using the validated metrics. The overall aim is a validated set of metrics for evaluating and comparing WAS testing tools.

Methods: The research methodology consisted of three main phases: a literature review to compile a comprehensive set of technical and non-technical metrics commonly used for assessing and comparing WAS testing tools; semi-structured interviews with security experts to validate the significance of the identified metrics; and an experiment comparing three WAS testing tools (ZAP, Burp Suite, and Acunetix) using the OWASP Benchmark project. These three tools were selected based on authors' recommendations in the literature.

Results: The literature review identified 37 evaluation metrics for WAS testing tools. In the interviews, experts confirmed the importance of many of these metrics, judged some to be of limited practical value, and suggested new metrics not found in the literature. Incorporating this feedback, the final list was refined to 35 metrics for evaluating WAS testing tools. An experiment then compared ZAP, Burp Suite, and Acunetix, with the OWASP Benchmark project as the test subject and the validated metrics as the evaluation criteria. The results revealed differences in the performance of the tools, with Burp Suite emerging as the best performer.

Conclusions: This research provides a validated set of metrics for comparing and evaluating WAS testing tools, enabling organizations to make more informed decisions. Security professionals can optimize their selection of a WAS testing tool by understanding the key metrics and their relative significance, as established through the literature and interviews. Based on the experimental analysis, Burp Suite performed better than the other tools, so for organizations starting the selection process, Burp Suite stands out as a good choice.
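To make the comparison concrete, the sketch below shows one way benchmark results of this kind can be scored per tool. The counts, and the use of the true-positive and false-positive rates and their difference as a summary score, are illustrative assumptions rather than the exact scoring procedure used in the thesis.

```python
# Minimal sketch: scoring scanner results against a labelled benchmark
# such as the OWASP Benchmark. The counts below are hypothetical.

def benchmark_score(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute accuracy metrics for one tool.

    tp/fn: how many of the benchmark's real vulnerabilities were
           reported / missed; fp/tn: how many deliberately safe cases
           were wrongly flagged / correctly ignored.
    """
    tpr = tp / (tp + fn) if (tp + fn) else 0.0   # true-positive rate (recall)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0   # false-positive rate
    return {"tpr": tpr, "fpr": fpr, "score": tpr - fpr}

# Hypothetical counts for three tools, for illustration only.
results = {
    "ZAP":        benchmark_score(tp=620, fn=745, fp=155, tn=1220),
    "Burp Suite": benchmark_score(tp=890, fn=475, fp=210, tn=1165),
    "Acunetix":   benchmark_score(tp=760, fn=605, fp=260, tn=1115),
}

for tool, m in sorted(results.items(), key=lambda kv: kv[1]["score"], reverse=True):
    print(f"{tool:10s}  TPR={m['tpr']:.2f}  FPR={m['fpr']:.2f}  score={m['score']:.2f}")
```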
|
12 |
Using a Web Server Test Bed to Analyze the Limitations of Web Application Vulnerability Scanners. Shelly, David Andrew. 17 September 2010.
Cyber attacks enabled by improper security are a real and evolving danger. Corporate and personal data are breached and lost through web application vulnerabilities thousands of times every year. The large number of cyber attacks can be partially attributed to the fact that web site administrators do not use web application vulnerability scanners to scan for flaws. Web application vulnerability scanners are tools that network administrators and security experts can use to help prevent and detect vulnerabilities such as SQL injection, buffer overflows, cross-site scripting, malicious file execution, and session hijacking.
However, these tools have flaws and limitations of their own. Research has shown that web application vulnerability scanners are not always capable of detecting vulnerabilities and attack vectors, and do not give effective measurements of web application security. This research presents a method to analyze the flaws and limitations of several of the most popular commercial and free/open-source web application scanners by using secure and insecure versions of a custom-built web application. Based on this method, key improvements to web application scanner techniques are proposed to reduce the number of false-positive and false-negative results. / Master of Science
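As a rough illustration of the kind of analysis such a test bed enables, the sketch below matches a scanner's reported findings against the flaws known to be built into the insecure version of the application. The finding identifiers and counts are hypothetical, not results from the study.

```python
# Minimal sketch: compare scanner findings against the ground truth of
# flaws deliberately built into the insecure application version.
# (page, parameter, vulnerability type) identifiers are assumed.

known_flaws = {
    ("login.php", "username", "sql_injection"),
    ("search.php", "q", "xss_reflected"),
    ("upload.php", "file", "malicious_file_execution"),
}

scanner_findings = {
    ("login.php", "username", "sql_injection"),   # true positive
    ("index.php", "lang", "xss_reflected"),       # false positive
}

true_positives = scanner_findings & known_flaws
false_positives = scanner_findings - known_flaws   # reported but not real
false_negatives = known_flaws - scanner_findings   # real but missed

print(f"detected: {len(true_positives)}, "
      f"false positives: {len(false_positives)}, "
      f"missed: {len(false_negatives)}")
```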
|
13 |
Empirically Driven Investigation of Dependability and Security Issues in Internet-Centric Systems. Huynh, Toan Nguyen Duc. 06 1900.
The Web, being the most popular component of the Internet, has been transformed from a static information-serving medium into a fully interactive platform. This platform has been used by developers to create web applications rivaling traditional desktop systems. Designing, developing and evaluating these applications require new or modified methodologies, techniques and tools because of the different characteristics they exhibit. This dissertation discusses two important areas for developing and evaluating these applications: security and data mining.
In the security area, a survey using a process similar to the Goal Question Metric approach examines the properties of web application vulnerabilities. Using results from the survey, a white-box approach to identifying web application vulnerabilities is proposed. Although the approach eliminates vulnerabilities during the development process, it does not protect existing web applications that have not used it. Hence, an Anomaly-based Network Intrusion Detection System, called AIWAS, is introduced. AIWAS protects web applications through the analysis of interactions between the users and the web applications. These interactions are classified as either benign or malicious; malicious interactions are prevented from reaching the web applications under protection.
In the data mining area, the method of estimating reliability from server logs is examined in detail. This examination reveals that the session workload is currently obtained using a constant Session Timeout Threshold (STT) value, even though each website is unique and should have its own STT value. Hence, an initial model for estimating the STT is introduced to encourage future research on sessions to use a customized STT value per website. This research on the STT leads to a deeper investigation of the actual session workload unit. More specifically, the distributional properties of the session workload are re-examined to determine whether the session workload can be described by a heavy-tailed distribution. / Software Engineering and Intelligent Systems
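A minimal sketch of the sessionization step this examination concerns is shown below: consecutive requests from the same client are grouped into one session until the idle gap exceeds the STT. The (client, timestamp) log format and the 30-minute default are illustrative choices, not the dissertation's model.

```python
# Minimal sketch: derive session workload from a server log using a
# session timeout threshold (STT).

from collections import defaultdict

def sessionize(log, stt_seconds=1800):
    """log: iterable of (client_id, unix_timestamp) pairs."""
    by_client = defaultdict(list)
    for client, ts in sorted(log, key=lambda r: r[1]):
        by_client[client].append(ts)

    sessions = []
    for client, times in by_client.items():
        current = [times[0]]
        for ts in times[1:]:
            if ts - current[-1] > stt_seconds:   # idle gap too long: new session
                sessions.append((client, current))
                current = []
            current.append(ts)
        sessions.append((client, current))
    return sessions

# Illustrative log entries.
log = [("10.0.0.1", 0), ("10.0.0.1", 600), ("10.0.0.1", 4000), ("10.0.0.2", 100)]
for client, requests in sessionize(log):
    print(client, "session with", len(requests), "requests")
```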
|
14 |
Apsaugos nuo SQL injekcijų el. verslo svetainėse metodikos sudarymas ir tyrimas / Development and research of method of protection against SQL injections in e-commerce websites. Ramoška, Aidas. 04 November 2013.
The target of an SQL injection attack is the interactive web application that uses a database server. Such applications let users enter information, and the entered values are used to build SQL queries that are sent to the database server. In an SQL injection attack, the attacker uses the input fields to craft a malicious SQL fragment that modifies the original query. Through SQL injection, an attacker can access confidential information, modify it, or bypass the authorization logic and log in to the system without knowing a password. The security module proposed in this work intercepts all user input and applies security rules, improving protection against SQL injection in e-commerce web applications and logging potential attempts to disrupt normal system operation. Installing the proposed module requires no configuration of the server or its software; only the web application's files are changed during installation. The work was implemented in the PHP programming language with a MySQL database. The test results obtained during the study show which configuration parameters of the security module should be applied to ensure the maximum level of security.
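The mechanism described above can be illustrated with a short sketch. It uses Python and SQLite purely for convenience rather than the PHP and MySQL stack used in the thesis, and the table and data are invented for the example.

```python
# Illustrative sketch of the SQL injection flaw and the standard defence.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"   # attacker-controlled value from a login form

# Vulnerable: the input is concatenated into the statement, so the injected
# fragment rewrites the query and bypasses the password check entirely.
unsafe = f"SELECT * FROM users WHERE name = 'alice' AND password = '{user_input}'"
print("unsafe query returns:", conn.execute(unsafe).fetchall())

# Safer: a parameterised query keeps the input as data, never as SQL syntax,
# which is also the effect a server-side filtering module aims to achieve.
safe = "SELECT * FROM users WHERE name = ? AND password = ?"
print("safe query returns:", conn.execute(safe, ("alice", user_input)).fetchall())
```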
|
15 |
Empirically Driven Investigation of Dependability and Security Issues in Internet-Centric Systems. Huynh, Toan Nguyen Duc. Unknown Date.
No description available.
|
16 |
Penetration Testing in a Web Application Environment. Vernersson, Susanne. January 2010.
As the use of web applications increases across many industries, companies turn to online applications to promote their services. Companies see great advantages in web applications, such as convenience, low costs, and little need for additional hardware or software configuration. Meanwhile, the threats against web applications are scaling up: because the service can easily be accessed over the Internet, an attacker does not need much experience or knowledge to hack a poorly secured web application. While common attacks such as cross-site scripting and SQL injection have been around and in heavy use for years, the hacker community constantly discovers new exploits, leaving businesses in need of stronger security. Penetration testing is a method used to estimate the security of a computer system, network or web application. The aim is to reveal vulnerabilities that a malicious attacker could exploit and to suggest solutions to the problems found. With the right security fixes, a business system can go from being a threat to its users' sensitive data to a secure and functional platform with just a few adjustments. This thesis aims to help the IT security consultants at Combitech AB detect and secure the most common web application exploits that companies suffer from today. By providing Combitech with safe and easy methods for discovering and fixing the top security deficiencies, the limited time spent at a client due to budget constraints can be used more efficiently thanks to improvements in the internal testing methodology. The project may additionally be of interest to teachers, students and developers who want to know more about web application testing and security as well as common exploit scenarios.
|
17 |
Web-Based Intrusion Detection System. Ademi, Muhamet. January 2013.
Web applications are growing rapidly, and as the number of web sites increases globally, so do security threats. Complex applications often interact with third-party services and databases to fetch information, and these interactions often require user input. Intruders target web applications specifically, and they are a huge security threat to organizations; one way to combat this is to have intrusion detection systems. The most common web attack methods are well researched and documented; however, due to time constraints, developers often write applications quickly and may not implement the best security practices. This report describes one way to implement an intrusion detection system that specifically detects web-based attacks.
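A very simplified sketch of what such detection can look like is given below: request parameters are checked against a few attack signatures. The signatures and the overall approach are illustrative assumptions and may differ from the system the report describes; real rule sets are far larger.

```python
# Very simplified sketch of signature-based detection of common web
# attacks in incoming request parameters.

import re

SIGNATURES = {
    "sql_injection":  re.compile(r"(\bunion\b.+\bselect\b|'\s*or\s*'1'\s*=\s*'1)", re.I),
    "xss":            re.compile(r"<\s*script\b|on\w+\s*=", re.I),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect(params: dict) -> list:
    """Return (parameter, attack_type) pairs for suspicious values."""
    alerts = []
    for name, value in params.items():
        for attack, pattern in SIGNATURES.items():
            if pattern.search(value):
                alerts.append((name, attack))
    return alerts

# Illustrative request parameters.
print(inspect({"q": "<script>alert(1)</script>", "page": "../../etc/passwd"}))
```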
|
18 |
Anomaly Detection From Personal Usage Patterns In Web Applications. Vural, Gurkan. 01 December 2006.
The anomaly detection task is to recognize the presence of an unusual (and potentially hazardous) state within the behaviors or activities of a computer user, system, or network with respect to some model of normal behavior which may be either hard-coded or learned from observation. An anomaly detection agent faces many learning problems including learning from streams of temporal data, learning from instances of a single class, and adaptation to a dynamically changing concept. The domain is complicated by considerations of the trusted insider problem (recognizing the difference between innocuous and malicious behavior changes on the part of a trusted user).
This study introduces anomaly detection in web applications and formulates it as a machine learning task on temporal sequence data. The goal is to develop a model or profile of the normal working state of a web application user and to detect anomalous conditions as deviations from the expected behavior patterns. We focus here on learning models of normality at the user behavioral level, as observed through a web application. We introduce sensors intended to function as a focus-of-attention unit at the lowest level of a classification hierarchy, using Finite State Markov Chains and Hidden Markov Models, and discuss the success of these sensors.
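A minimal sketch of the lowest-level idea, assuming page-request sequences as the observed behavior, is shown below: a first-order Markov chain is trained on normal sequences and new sequences with unusually low likelihood are flagged. The page names, the smoothing constant and the threshold are illustrative assumptions, and the Hidden Markov Model variant discussed in the thesis is not covered.

```python
# Minimal sketch: learn a first-order Markov chain over a user's normal
# page-request sequences and flag sequences with low average log-likelihood.

import math
from collections import Counter, defaultdict

def train(sequences, alpha=0.1):
    counts, states = defaultdict(Counter), set()
    for seq in sequences:
        states.update(seq)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1

    def prob(a, b):   # Laplace-smoothed transition probability
        total = sum(counts[a].values())
        return (counts[a][b] + alpha) / (total + alpha * len(states))

    return prob

def avg_log_likelihood(seq, prob):
    steps = list(zip(seq, seq[1:]))
    return sum(math.log(prob(a, b)) for a, b in steps) / max(len(steps), 1)

# Illustrative "normal" behavior of one user.
normal = [["login", "inbox", "read", "logout"],
          ["login", "inbox", "compose", "logout"]]
prob = train(normal)

for seq in (["login", "inbox", "read", "logout"],
            ["login", "admin", "export", "export"]):
    score = avg_log_likelihood(seq, prob)
    print(seq, "anomalous" if score < -2.0 else "normal", round(score, 2))
```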
|
19 |
Μέθοδοι προστασίας ιστοσελίδων στο διαδίκτυο / Methods of protecting websites on the Internet. Μπαλαφούτης, Χρήστος. 19 October 2012.
This thesis presents basic concepts and methods for securing websites, particularly sites oriented towards web applications, although many of the protection techniques and flaws identified can also be encountered on sites serving other purposes. First, it describes what a web application (web app) is and the components it consists of. Then, drawing on published surveys and statistics, it presents some of the most "popular" attacks carried out against websites and describes in more detail which weak points in website structure they exploit. It also discusses how, and with which tools, the security gaps a web application may have can be identified and closed, something that can be accomplished with various penetration testing tools. Finally, it presents the application developed as part of this thesis in order to demonstrate specific attacks and flaws observed on the Internet.
|
20 |
Detecção de Cross-Site Scripting em páginas Web / Detection of Cross-Site Scripting in Web Pages. Nunan, Angelo Eduardo. 14 May 2012.
Web applications are currently an important environment for access to services available on the Internet. However, assuring the security of these resources has become an essential task. The structure of dynamic websites, composed of objects such as HTML tags, script functions, hyperlinks and advanced features in web browsers, provides numerous resources and interactive services, for instance e-commerce, Internet banking, social networking, blogs and forums. On the other hand, these features have increased the potential security risks and the attacks that result from malicious code injection. In this context, Cross-Site Scripting (XSS) has appeared at the top of the lists of the greatest threats to web applications in recent years. This work presents a method based on supervised machine learning techniques to detect XSS in web pages. A set of features extracted from the URL content and the web document is employed in order to discriminate XSS patterns and to successfully classify both malicious and non-malicious pages. / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
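A minimal sketch of this classification approach is given below, assuming a handful of hand-picked URL and document features and a random forest classifier; the feature set, the tiny inline dataset and the model choice are illustrative and not the configuration used in the dissertation.

```python
# Minimal sketch: extract features from a URL and page content and train
# a supervised classifier to separate malicious (XSS) from benign pages.

import re
from sklearn.ensemble import RandomForestClassifier

def features(url: str, html: str) -> list:
    return [
        len(url),
        url.count("%"),                                  # URL-encoding density
        len(re.findall(r"<\s*script", html, re.I)),      # script tags
        len(re.findall(r"\bon\w+\s*=", html, re.I)),     # inline event handlers
        len(re.findall(r"\b(eval|document\.cookie)\b", html)),
        len(re.findall(r"<\s*iframe", html, re.I)),
    ]

# Tiny illustrative dataset: (url, html, label) with 1 = malicious.
samples = [
    ("http://shop.example/item?id=1", "<html><p>product page</p></html>", 0),
    ("http://blog.example/post/2", "<html><div>article text</div></html>", 0),
    ("http://x.example/?q=%3Cscript%3E", "<script>document.cookie</script>", 1),
    ("http://y.example/p?m=<img onerror=alert(1)>", "<img onerror=alert(1)>", 1),
]

X = [features(u, h) for u, h, _ in samples]
y = [s[2] for s in samples]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("http://z.example/?q=%3Cscript%3E",
                            "<script>steal()</script>")]))
```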
|