1

Measuring Accurancy of Vulnerability Scanners : An Evaluation with SQL Injections / Mätning av noggrannhet bland sårbarhetsskannrar : En utvärdering med SQL injektioner

Norström, Alexander January 2014 (has links)
Critical vulnerabilities are commonly found in web applications. The arguably most problematic class of web application vulnerabilities is SQL injections. SQL injection vulnerabilities can be used to execute commands on the database coupled to the web application, e.g., to extract the web application’s user and password data. Black box testing tools are often used (both by system owners and their adversaries) to discover vulnerabilities in a running web application. Hence, how well they perform at discovering SQL injection vulnerabilities is of importance. This thesis describes an experiment assessing detection capability for different SQL injection vulnerabilities under different conditions. In the experiment the following is varied: SQL injection vulnerability (17 instances allowing tautologies, piggy-backed queries, and logically incorrect queries), scanner (four products), exploitability (three levels), input vector (POST/GET), and time investment (three levels). The number of vulnerabilities detected is largely determined by the choice of scanner (30% to 77%) and the input vector (71% or 38%). The interaction between scanner and input vector is substantial, since two scanners cannot handle the POST vector at all. Substantial differences are also found in how well different SQL injection vulnerabilities are detected, and the more exploitable variants are detected more often, as expected. The impact of time spent on the scan interacts with the scanner (some scanners required considerable time to configure and others did not), and as a consequence the relationship between time investment and detection capability is non-trivial.
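For readers unfamiliar with the vulnerability classes named in the abstract, the sketch below is a minimal, self-contained illustration of the tautology case, contrasted with a parameterized query that closes the hole; the table, credentials, and payload are hypothetical and not taken from the thesis.

```python
import sqlite3

# Hypothetical example: a login check built by string concatenation,
# the pattern that tautology-based SQL injection exploits.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # UNSAFE: attacker-controlled input is concatenated into the SQL text.
    query = ("SELECT COUNT(*) FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: the driver keeps data separate from SQL code.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"  # classic tautology payload
print(login_vulnerable("alice", payload))  # True: the tautology bypasses the check
print(login_safe("alice", payload))        # False: the payload is treated as data
```

A black box scanner probes input fields with payloads of roughly this shape and watches the response for signs that the injected condition altered the query logic.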
2

Static Vulnerability Analysis of Docker Images

Henriksson, Oscar, Falk, Michael January 2017 (has links)
Docker is a popular tool for virtualization that allows for fast and easy deployment of applications and has been growing increasingly popular among companies. Docker also includes a large library of images from the Docker Hub repository, which is mainly user-created and uncontrolled. This leads to a low frequency of updates, which results in vulnerabilities in the images. In this thesis we develop a tool for determining which vulnerabilities exist inside Docker images with a Linux distribution. This is done by using our own tool for downloading and retrieving the necessary data from the images and then utilizing Outpost24's scanner to find vulnerabilities in Linux packages. With the help of this tool we also publish statistics on vulnerabilities in the top downloaded images on Docker Hub. The result is a tool that can successfully scan a Docker image for vulnerabilities in certain Linux distributions. A survey of the top 1000 Docker images also shows that the number of vulnerabilities has increased in comparison with earlier surveys of Docker images.
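As an illustration of the extraction step such a tool performs, here is a minimal sketch assuming a locally installed Docker CLI and a Debian/Ubuntu-based image; the image name is only an example, and matching the extracted packages against a vulnerability database (done in the thesis with Outpost24's proprietary scanner) is left out.

```python
import json
import subprocess

def installed_debian_packages(image):
    """List installed (package, version) pairs from a Debian-based image.

    Only the extraction step: the vulnerability matching itself would be
    handled by a separate scanner or CVE database lookup.
    """
    # Run dpkg-query inside a throwaway container of the image.
    out = subprocess.run(
        ["docker", "run", "--rm", image,
         "dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(line.split(" ", 1)) for line in out.splitlines() if line]

if __name__ == "__main__":
    # Hypothetical target image; any Debian/Ubuntu-based image works.
    for name, version in installed_debian_packages("debian:bookworm"):
        print(json.dumps({"package": name, "version": version}))
```

The resulting package/version pairs are the raw material a scanner would compare against known-vulnerability records.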
3

Supplementing Dependabot’s vulnerability scanning : A Custom Pipeline for Tracing Dependency Usage in JavaScript Projects

Karlsson, Isak, Ljungberg, David January 2024 (has links)
Software systems are becoming increasingly complex, with developers frequently utilizing numerous dependencies. In this landscape, accurate tracking and understanding of dependencies within JavaScript and TypeScript codebases are vital for maintaining software security and quality. However, there exists a gap in how existing vulnerability scanning tools, such as Dependabot, convey information about the usage of these dependencies. This study addresses the problem of providing a more comprehensive dependency usage overview, a topic critical to aiding developers in securing their software systems. To bridge this gap, a custom pipeline was implemented to supplement Dependabot, extracting the dependencies identified as vulnerable and providing specific information about their usage within a repository. The results highlight the pros and cons of this approach, showing an improvement in the understanding of dependency usage. The effort opens a pathway towards more secure software systems.
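As a rough sketch of the kind of usage tracing such a pipeline could perform, the snippet below searches JavaScript/TypeScript sources for import or require sites of a package reported as vulnerable; it is a deliberately simplified heuristic (a real pipeline would parse the code or its module graph), and the repository path and package name are hypothetical.

```python
from pathlib import Path

def find_dependency_usage(repo_root, package):
    """Locate import/require sites of a package flagged as vulnerable.

    A crude text-based heuristic over JavaScript/TypeScript sources;
    proper parsing would be needed to handle re-exports and aliases.
    """
    needles = (f"'{package}'", f'"{package}"', f"'{package}/", f'"{package}/')
    hits = []
    for path in Path(repo_root).rglob("*"):
        if path.suffix not in {".js", ".jsx", ".ts", ".tsx"} or "node_modules" in path.parts:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if ("import" in line or "require(" in line) and any(n in line for n in needles):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # Hypothetical inputs: a local checkout and a package name reported by Dependabot.
    for file, lineno, line in find_dependency_usage(".", "lodash"):
        print(f"{file}:{lineno}: {line}")
```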
4

Recommender system for IT security scanning service : Collaborative filtering in an error report scenario / Rekommendationssystem för IT-säkerhetsscanner : Kollaborativ filtrering för risk-rapporter

Thunberg, Jonas January 2022 (has links)
Recommender systems have become an integral part of the user interface of many web applications. Recommending items to buy, media to view, or similar "next choice" recommendations has proven to be a powerful tool to improve customer experience and engagement. One common technique for producing recommendations, Collaborative Filtering, makes use of the unsupervised Nearest Neighbor algorithm: a customer's historic use of a service is encoded as a vector, and recommendations are made such that, if followed, the resulting behaviour vector would lie closer to the nearest neighboring vectors encoding other customers. This thesis describes the adaptation of a Collaborative Filtering recommender system to a cyber security vulnerability report setting, with the goal of producing recommendations regarding which of a set of found vulnerabilities to prioritize for mitigation. Such an error report scenario presents idiosyncrasies that do not allow a direct application of common recommender system algorithms. The work was carried out in collaboration with the company Detectify, whose product allows users to check for vulnerabilities in their internet-facing software, typically web pages and apps. Historic customers' priorities for mitigating findings have to be inferred from differences between their consecutive reports, i.e., from noisy vector-valued signals. Further, as opposed to the typical e-commerce or media streaming scenario, a user cannot freely choose which item to increase their consumption of; instead, a user can only attempt to decrease their inventory of a limited subset (the vulnerabilities in their report) of all items (all possible vulnerabilities). This thesis presents an adapted Collaborative Filtering algorithm applicable to this scenario. The chosen approach is motivated by an extensive literature review of the current state of the art of recommender systems. To measure the performance of the algorithm, test data is produced that allows comparison between recommendations based on noisy data and the actual change in a noiseless version. The results give reference values for the levels of noise and data sparsity under which the developed algorithm can be expected to produce recommendations that align well with the historic behavioural patterns of other customers. This thesis thus provides a novel variation of the Collaborative Filtering algorithm that extends its usability to a scenario that has not previously been addressed in the reviewed literature. / Recommender systems are today an obvious part of many user interfaces. Examples that many of us interact with daily are systems that suggest the next word as we type, the next product when we shop online, or the next media item when we use streaming services. A common technique for producing recommendations is Collaborative Filtering, which uses nearest-neighbor algorithms to recommend in such a way that a user's history (described as a vector) is moved closer to its nearest neighbors if the recommendation is followed. This thesis reports an adaptation of a Collaborative Filtering recommender system for use in connection with scanning for IT security risks, with the goal of producing recommendations about which security risk should be prioritized for remediation.
Such an error report scenario brings certain differences compared with an e-commerce/streaming scenario that make it necessary to adapt the typical Collaborative Filtering system before it becomes applicable. The work was carried out in collaboration with the company Detectify, which provides a product with which users can discover security risks in their internet-connected software (for example websites and web applications). Historic priorities regarding the remediation of security risks must be computed from earlier users' reports of found risks, that is, from noisy vector-valued signals. Nor can a user freely choose to increase their consumption of some product in an assortment; instead, a recommendation must concern which object in the user's existing inventory (the risks found in their latest report) the user should try to reduce. This thesis presents a Collaborative Filtering recommender system adapted to this scenario. The algorithm is motivated by an extensive study of the relevant literature and is evaluated on synthetic data, which makes it possible to examine how different levels of noise and sparsity affect the recommendations. The results presented provide reference levels for the degrees of noise and sparsity under which the algorithm can be expected to perform well. In summary, a modification of the Collaborative Filtering recommender system is developed, evaluated, and presented that enables its use in a scenario not described in the reviewed literature.
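A minimal nearest-neighbor sketch of the "reduce your inventory" adaptation described above is given below; it is a toy illustration under stated assumptions, not Detectify's data or the algorithm developed in the thesis, and the vectors, similarity measure, and parameters are all hypothetical.

```python
import numpy as np

def recommend_mitigations(own_report, past_reports, past_deltas, k=3):
    """Rank the vulnerabilities in a customer's report for mitigation.

    own_report:   binary vector over all known vulnerability types (1 = present).
    past_reports: matrix of other customers' reports at some point in time.
    past_deltas:  how much each of those customers later reduced each finding
                  (differences between consecutive reports, possibly noisy).
    Returns indices of the customer's own findings, most-recommended first.
    """
    # Nearest neighbors by cosine similarity between report vectors.
    norms = np.linalg.norm(past_reports, axis=1) * np.linalg.norm(own_report)
    sims = past_reports @ own_report / np.where(norms == 0, 1, norms)
    neighbours = np.argsort(sims)[-k:]

    # Aggregate what similar customers actually reduced, weighted by similarity.
    scores = sims[neighbours] @ past_deltas[neighbours]

    # Only items the customer can act on: findings present in their own report.
    candidates = np.flatnonzero(own_report)
    return candidates[np.argsort(scores[candidates])[::-1]]

# Hypothetical toy data: 5 vulnerability types, 4 historic customers.
own = np.array([1, 0, 1, 1, 0], dtype=float)
reports = np.array([[1, 0, 1, 0, 0], [1, 1, 1, 1, 0],
                    [0, 0, 1, 1, 1], [0, 1, 0, 0, 1]], dtype=float)
deltas = np.array([[1, 0, 0, 0, 0], [0, 0, 1, 1, 0],
                   [0, 0, 1, 0, 0], [0, 1, 0, 0, 0]], dtype=float)
print(recommend_mitigations(own, reports, deltas))
```

In practice the report vectors are far sparser and the deltas noisy, which is the regime the evaluation on synthetic data is designed to explore.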
5

A Vulnerability Assessment Approach for Home Networks : A case of Cameroon

Tanyi, Elvis Etengeneng January 2023 (has links)
The research highlights the importance of vulnerability assessment in evaluating the effectiveness of security mechanisms in computer and network systems. While vulnerability assessment is commonly practiced by companies and businesses, it is often overlooked in the context of home networks. The misconception that home networks are not lucrative targets for cyber criminals was shattered by the emergence of the Covid-19 pandemic, which forced many individuals to work from home on top of their normal daily interactions with personal home network devices, making their home networks more vulnerable to attacks. The situation is even more challenging in developing countries like Cameroon, where there is a significant IT gap due to limited access to quality IT education and training opportunities. To address these issues, the research employed two main methods. Firstly, the Systematic Review of Literature (SRL) method was used to investigate the types of systems used in home networks and the common vulnerabilities and attacks associated with them. Additionally, a step-by-step guide using open-source tools was developed to assist home users in evaluating the security of their networks. The second method was experimental, with semi-structured interviews used for data collection. This demonstrated how selected tools such as Zenmap and Nessus, along with associated techniques, can be effectively used by home users to assess the security posture of their networks. This practical approach contributed to the development of a targeted vulnerability assessment methodology for home users. Furthermore, recommendations were provided to help home users mitigate identified vulnerabilities in their networks.
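The first practical step of such a guide (host and service discovery) can be sketched as follows; this is a generic illustration rather than the thesis's step-by-step guide, it assumes nmap (the engine behind Zenmap) is installed, the subnet is only an example, and it should be run only against a network you are authorized to assess.

```python
import subprocess
import xml.etree.ElementTree as ET

def scan_home_network(cidr="192.168.1.0/24"):
    """Run a service/version scan of a home subnet with nmap and parse the XML.

    Requires nmap to be installed; run only against your own network.
    """
    xml_out = subprocess.run(
        ["nmap", "-sV", "-T4", "-oX", "-", cidr],
        capture_output=True, text=True, check=True,
    ).stdout
    for host in ET.fromstring(xml_out).iter("host"):
        addr_el = host.find("address")
        addr = addr_el.get("addr") if addr_el is not None else "?"
        for port in host.iter("port"):
            service = port.find("service")
            name = service.get("name") if service is not None else "unknown"
            product = service.get("product", "") if service is not None else ""
            print(addr, port.get("portid"), name, product)

if __name__ == "__main__":
    scan_home_network()  # hypothetical default range; adjust to your own subnet
```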
6

Detection of Vulnerability Scanning Attacks using Machine Learning : Application Layer Intrusion Detection and Prevention by Combining Machine Learning and AppSensor Concepts / Detektering av sårbarhetsscanning med maskininlärning : Detektering och förhindrande av attacker i applikationslagret genom kombinationen av maskininlärning och AppSensor koncept

Shahrivar, Pojan January 2022 (has links)
It is well established that machine learning techniques have been used with great success in other domains and have been leveraged to deal with sources of evolving abuse, such as spam. This study aims to determine whether machine learning techniques can be used to create a model that detects vulnerability scanning attacks using proprietary real-world data collected from tCell, a web application firewall. In this context, a vulnerability scanning attack is defined as an automated process that detects and classifies security weaknesses and flaws in the web application. To test the hypothesis that machine learning techniques can be used to create a detection model, twenty-four models were trained. The models showed a high level of precision and recall, ranging from 91% to 96% and from 85% to 93%, respectively. Although the classification performance was strong, the models were not sufficiently calibrated, which resulted in underconfidence in the predictions. The results can therefore be viewed as a performance baseline. Nevertheless, the results demonstrate an advancement over the simplistic threshold-based techniques developed in the early days of the internet, but further research and development are required to tune and calibrate the models. / It is well established that machine learning techniques have been used with great success in other domains and have been leveraged to handle sources of growing abuse, such as spam. This study aims to determine whether machine learning techniques can be applied to create a model that detects vulnerability scanning attacks using proprietary data collected from tCell, a web application firewall. In this context, a vulnerability scanning attack is defined as an automated process that detects and classifies security weaknesses and flaws in the web application. To test the hypothesis that machine learning techniques can be used to create a detection model, twenty-four models were trained. The models showed a high level of precision and recall, from 91% to 96% and from 85% to 93%, respectively. Although the classification performance was good, the models were not sufficiently calibrated, which resulted in weak confidence in the predictions. The presented results can therefore be seen as a performance baseline. The results show an advance over the simplified threshold-based techniques developed in the early days of the internet, but require further research and development to calibrate the models.
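The train-then-check-calibration workflow the abstract alludes to can be sketched as below; the synthetic features are a hypothetical stand-in for the proprietary tCell data, and the random forest plus isotonic calibration are illustrative choices rather than the models evaluated in the thesis.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Hypothetical synthetic stand-in for labelled request features
# (1 = part of a vulnerability scan, 0 = benign traffic).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=5000) > 1.2).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Base classifier, then an explicitly calibrated version of the same model.
base = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=100, random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)

for name, model in [("base", base), ("calibrated", calibrated)]:
    pred = model.predict(X_test)
    print(name, "precision", round(precision_score(y_test, pred), 3),
          "recall", round(recall_score(y_test, pred), 3))
    # Reliability curve: well-calibrated probabilities lie close to the diagonal.
    frac_pos, mean_pred = calibration_curve(y_test, model.predict_proba(X_test)[:, 1], n_bins=10)
    print(name, "calibration gap", round(float(np.abs(frac_pos - mean_pred).mean()), 3))
```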
7

Students’ Perception of Cyber Threat Severity : Investigating Alignment with Actual Risk Levels

Erfani Torbaghani, Ramtin January 2023 (has links)
This study aims to investigate the alignment between students’ perception of cyber threats and the threats’ actual risk levels. A mixed-method approach was used, where data was collected from Swedish university students through questionnaires, capturing their perception, familiarity, experience, and protective behaviors. Information regarding the actual risk levels of cyber attacks was obtained from interviews with cyber security professionals and other expert sources, such as cyber security reports. The results showed that students perceive malware, ransomware, phishing, and insecure passwords as the most dangerous threats to society, while denial of service (DoS) attacks and packet sniffing were considered less severe. These findings align somewhat with the suggested threat levels. However, notable proportions of students perceived these threats as moderately dangerous or less severe, suggesting room for improvement in their understanding. The results also showed that protective behaviors among students are generally low, particularly with regard to IoT security. Future work should therefore explore the public’s perception, protective behavior, and knowledge of IoT security, as well as attacks that are common against such devices. / This study compares university students’ perception of how dangerous different cyber threats are with the actual risk levels of those threats. Data on the students’ perception, familiarity, experience, and behaviors were collected through questionnaires, while information on the threats’ actual risk levels was obtained from interviews with cyber security professionals and other expert sources such as cyber security reports and articles. The results showed that the students perceive malware, ransomware, phishing, and insecure passwords as the most dangerous threats to society, while denial of service (DoS) attacks and packet sniffing were considered less serious. These findings agreed somewhat with the suggested risk levels. However, a notable share of the students regarded these threats as moderately dangerous or less serious, indicating room for improvement in their understanding. The results also showed that protective behaviors among students are generally low, particularly when it comes to IoT security. Future studies should therefore explore the public’s perception, protective behavior, and knowledge of IoT security, as well as attacks that are common against such devices.
