1 |
Verktyg för säker kodning : En jämförande studie / Tools for secure coding : A comparative study. Fransson, Robin; Hiltunen, Tommi. January 2023.
Bakgrund. I dagens programvara finns det problem som försämrar kvaliteten hos system och ökar kostnaderna. Det är viktigt att tänka på säkerheten redan under programmeringsfasen för att underlätta underhåll. The Open Web Application Security Project (OWASP) erbjuder dokument, verktyg och projekt för att skapa och underhålla produkter på ett säkrare sätt. För att upptäcka säkerhetsproblem i koden kan verktyg för Static Application Security Testing (SAST) användas. SAST-verktyg kan rapportera både false negatives och false positives, och därför är det viktigt att undersöka hur precisa verktygen är i sin rapportering.

Syfte. Studien ämnar kartlägga vilka SAST-verktyg utvecklare kan ta hjälp av för att skriva säkrare kod. Undersökningen skall även jämföra hur bra de är på att hitta sårbarheter i kod och hur stort antal false positives de rapporterar.

Metod. En sökning gjordes för att samla information om vilka SAST-verktyg som finns tillgängliga, och en lista sammanställdes med krav för att kunna genomföra likvärdiga tester. För att utföra testerna användes kod med planterade sårbarheter, och resultaten från testerna genererade kvantitativa data som fördes in i en tabell.

Resultat. I studiens resultat kartlades tolv SAST-verktyg. Från dessa valdes HCL AppScan CodeSweep, Snyk och SonarLint ut för vidare testning. Därefter beräknades recall, precision och false positives för verktygen. Snyk hade 71,43% på både recall och precision och 33,33% false positives. HCL AppScan CodeSweep hade 28,57% på recall, 57,14% på precision och 25% på false positives. SonarLint hittade inga sårbarheter och analyserades därför inte.

Slutsatser. Studien kartlade tolv olika SAST-verktyg och valde tre för likvärdiga tester av JavaScript i Visual Studio Code. Resultaten visade att Snyk presterade bäst gällande rapportering av sårbarheter och hade högre resultat gällande precision, medan HCL AppScan CodeSweep presterade bäst på att undvika false positives. Överlag anses Snyk vara studiens bästa SAST-verktyg då det hade högst resultat på både recall och precision.

/ Background. In today's software, there are issues that degrade system quality and increase costs. It is important to consider security during the programming phase to facilitate maintenance. The Open Web Application Security Project (OWASP) provides documentation, tools, and projects to create and maintain products in a more secure manner. To detect security issues in the code, tools for Static Application Security Testing (SAST) can be used. SAST tools can report both false negatives and false positives, so it is important to investigate the accuracy of the tools in their reporting.

Aim. The study aims to map which SAST tools developers can use to write more secure code. The investigation also compares their effectiveness in identifying vulnerabilities in code and the number of false positives they report.

Method. A search was conducted to gather information on available SAST tools, and a list of requirements was compiled so that equivalent tests could be performed. To carry out the tests, code with planted vulnerabilities was used, and the test results generated quantitative data that were entered into a table.

Results. The study's results mapped twelve SAST tools. From these, HCL AppScan CodeSweep, Snyk, and SonarLint were selected for further testing. Recall, precision, and false positives were then calculated for the tools. Snyk achieved 71.43% for both recall and precision and had 33.33% false positives. HCL AppScan CodeSweep achieved 28.57% recall, 57.14% precision, and 25% false positives. SonarLint did not find any vulnerabilities and was therefore not analyzed.

Conclusions. The study surveyed twelve different SAST tools and selected three for equivalent tests on JavaScript in Visual Studio Code. The results showed that Snyk performed best in terms of vulnerability reporting and achieved higher precision, while HCL AppScan CodeSweep excelled in avoiding false positives. Overall, Snyk is considered the best SAST tool in the study as it had the highest results in both recall and precision.
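As a rough illustration of how figures of this kind are derived, the sketch below computes recall, precision, and a false-positive share from true-positive, false-positive, and false-negative counts. The counts are hypothetical, not the study's actual data, and the study's own false-positive percentage may use a different denominator.

```python
# Illustrative only: hypothetical counts, not the counts underlying the study's percentages.
def sast_metrics(true_positives: int, false_positives: int, false_negatives: int):
    """Compute recall, precision, and false-positive share for one SAST tool run."""
    recall = true_positives / (true_positives + false_negatives)      # planted flaws actually found
    precision = true_positives / (true_positives + false_positives)   # findings that were real flaws
    fp_share = false_positives / (true_positives + false_positives)   # findings that were noise
    return recall, precision, fp_share

# Example: a tool that finds 5 of 7 planted vulnerabilities and adds 2 spurious findings.
r, p, fp = sast_metrics(true_positives=5, false_positives=2, false_negatives=2)
print(f"recall={r:.2%}, precision={p:.2%}, false positives={fp:.2%}")
```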
2 |
Environmentally aware vulnerability prioritisation within large networks : A proposed novel method. Lenander, Marcus; Tigerström, Jakob. January 2022.
Background. Software vulnerabilities are a constant threat to organisations, businesses, and individuals. Keeping all devices patched against software security vulnerabilities is complex and time-consuming. Companies must use resources efficiently to ensure that the most severe security vulnerability is prioritised first. Today's state-of-the-art prioritisation method relies only on the severity of the vulnerability, without its environmental context. We propose a novel method that automatically prioritises the vulnerabilities in a device based on its environmental information, such as role and criticality.

Objectives. This thesis aims to analyse to what extent vulnerabilities can be prioritised based on the environmental information of the device. Furthermore, we investigate the possibility of automatically estimating the role and criticality of a device and to what extent they can more accurately reflect the severity of the vulnerabilities present in the device.

Methods. The proposed novel method uses environmental information found by a vulnerability scanner. Based on this information, the method estimates the role of the device. The role is then used by the method to estimate the criticality of the device. Based on the criticality and environmental information, a new vulnerability score is calculated for each vulnerability, and the list is reprioritised based on the latest score. We then conduct an experimental study to compare the method's assessment against experts' assessments.

Results. The experimental study indicates that the method performs slightly better than the state-of-the-art method. The proposed novel method estimated the primary role with an accuracy of 100% and the secondary role with an accuracy of 71.4%. The method's criticality assessment has a moderate agreement with the experts' criticality assessment. Overall, the method's reprioritised vulnerability lists correlate almost perfectly with the experts' vulnerability lists.

Conclusions. Considering the environmental information during the prioritisation of vulnerabilities is beneficial. We find that our method performs slightly better than the state-of-the-art method. The proposed method needs further improvements to give a better criticality estimation. However, more research is required to claim that system administrators could benefit from using the proposed method when prioritising vulnerabilities.

/ Bakgrund. Sårbarheter i programvara är ett konstant hot mot organisationer och företag såväl som mot privatpersoner. Att se till att enheterna är säkra är en komplex och tidskrävande uppgift. Det är därför viktigt att lägga den tid som finns där den gör mest nytta, det vill säga på att åtgärda den allvarligaste sårbarheten först. Dagens bästa prioriteringsmetod för sårbarheter baseras enbart på allvarlighetsgraden, utan att ta hänsyn till sårbarhetens miljömetrik. Därför föreslår vi en ny prioriteringsmetod som automatiskt prioriterar sårbarheterna baserat på en enhets miljömetrik, såsom roll och kritikalitet.

Syfte. Syftet med detta arbete är att avgöra i vilken utsträckning det går att prioritera sårbarheter baserat på enhetens miljömetrik. Utöver detta undersöker vi även huruvida man automatiskt kan uppskatta en enhets roll och kritikalitet för att bättre reflektera sårbarheternas allvarlighetsgrad.

Metod. Den föreslagna metoden använder sig av sammanhangsinformation som tillhandahålls av en sårbarhetsskanner. Utifrån denna information uppskattas enhetens roll. Den uppskattade rollen används sedan av metoden för att bestämma enhetens kritikalitet. Baserat på kritikaliteten och sammanhangsinformationen beräknas en ny allvarlighetsgrad för alla sårbarheter. Listan av sårbarheter omprioriteras sedan utifrån de senast beräknade allvarlighetsgraderna. Ett experiment utförs därefter för att analysera hur bra den nya prioriteringsmetoden är, och för att validera resultatet jämförs det mot experters prioritering.

Resultat. Den experimentella studien indikerar att vår metod presterar något bättre än den bästa befintliga prioriteringsmetoden. Den föreslagna metoden kan uppskatta den primära rollen med en träffsäkerhet på 100% och den sekundära rollen med 71.4% träffsäkerhet. Metodens uppskattning av kritikaliteten överensstämmer måttligt med experternas uppskattning. Överlag korrelerar metodens prioriteringslista bättre med experternas än vad den bästa befintliga prioriteringsmetoden gör.

Slutsats. Genom att ta hänsyn till en enhets miljömetrik vid beräkningen av sårbarhetens allvarlighetsgrad får man ett bättre resultat än om den inte hade varit med i beräkningen. Vi ser att vår metod överlag fungerar bättre än den bästa befintliga prioriteringsmetoden. Det krävs dock mer forskning på den föreslagna metoden för att med säkerhet kunna säga att den är användbar.
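A minimal sketch of the general idea follows. The role table, criticality multipliers, and the adjustment formula are assumptions made for illustration, not the thesis's actual estimation rules: a base severity score is adjusted by a criticality derived from the device's estimated role.

```python
# Illustrative sketch: role/criticality tables and the adjustment formula are assumed,
# not taken from the thesis.
ROLE_CRITICALITY = {          # criticality multiplier per estimated device role
    "domain_controller": 1.5,
    "database_server": 1.4,
    "web_server": 1.2,
    "workstation": 1.0,
}

def estimate_role(open_services: set[str]) -> str:
    """Guess a primary role from services reported by a vulnerability scanner."""
    if "ldap" in open_services or "kerberos" in open_services:
        return "domain_controller"
    if "mysql" in open_services or "postgresql" in open_services:
        return "database_server"
    if "http" in open_services or "https" in open_services:
        return "web_server"
    return "workstation"

def environmental_score(base_cvss: float, open_services: set[str]) -> float:
    """Re-score a vulnerability using the device's estimated criticality, capped at 10."""
    criticality = ROLE_CRITICALITY[estimate_role(open_services)]
    return min(10.0, base_cvss * criticality)

# Reprioritise: highest environmental score first.
vulns = [("CVE-A", 7.5, {"http"}), ("CVE-B", 7.5, {"ldap", "kerberos"})]
ranked = sorted(vulns, key=lambda v: environmental_score(v[1], v[2]), reverse=True)
```

With two vulnerabilities of equal base severity, the one on the more critical device (here the domain controller) rises to the top, which is the behaviour the abstract describes.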
3 |
Secure VoIP performance measurement. Saad, Amna. January 2013.
This project presents a mechanism for instrumentation of secure VoIP calls. The experiments were run under different network conditions and security systems. VoIP services such as Google Talk, Express Talk and Skype were under test. The project allowed analysis of the voice quality of the VoIP services based on the Mean Opinion Score (MOS) values generated by Perceptual Evaluation of Speech Quality (PESQ). The audio streams produced were subjected to end-to-end delay, jitter, packet loss and extra processing in the networking hardware and end devices due to Internetworking Layer security or Transport Layer security implementations. The MOS values were mapped to Perceptual Evaluation of Speech Quality for wideband (PESQ-WB) scores. From these PESQ-WB scores, graphs of the mean of 10 runs and box-and-whisker plots for each parameter were drawn. The graphs were analysed in order to deduce the quality of each VoIP service. The E-model was used to predict the network readiness, and the Common Vulnerability Scoring System (CVSS) was used to predict the network vulnerabilities. The project also provided the mechanism to measure the throughput for each test case. The overall performance of each VoIP service was determined by PESQ-WB scores, CVSS scores and the throughput. The experiment demonstrated the relationship among VoIP performance, VoIP security and VoIP service type. The experiment also suggested that, when compared to an unsecure IPIP tunnel, Internetworking Layer security such as IPSec ESP or Transport Layer security such as OpenVPN TLS would improve VoIP security by reducing the vulnerabilities of the media part of the VoIP signal. Moreover, adding a security layer has little impact on the VoIP voice quality.
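For context on the E-model mentioned above: it produces a transmission rating factor R that is conventionally mapped to an estimated MOS. The sketch below shows the commonly used ITU-T G.107 mapping (assuming the usual 0-100 R range); it is a general-purpose formula, not code from this project.

```python
def r_to_mos(r: float) -> float:
    """Map an E-model transmission rating factor R to an estimated MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# Example: R = 80 corresponds to roughly "satisfied" users.
print(round(r_to_mos(80), 2))   # ~4.0
```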
4 |
A quantitative security assessment of modern cyber attacks : a framework for quantifying enterprise security risk level through system's vulnerability analysis by detecting known and unknown threats. Munir, Rashid. January 2014.
The Cisco 2014 Annual Security Report clearly outlines the evolution of the threat landscape and the increase in the number of attacks. The UK government in 2012 recognised the cyber threat as a Tier-1 threat, since about 50 government departments have been either subjected to an attack or a direct threat from an attack. Cyberspace has become the platform of choice for businesses, schools, universities, colleges, hospitals and other sectors for business activities. One of the major problems identified by the Department of Homeland Security is the lack of clear security metrics. The recent cyber security breach of the US retail giant Target is a typical example that demonstrates the weaknesses of qualitative security, also considered by some security experts as fuzzy security. High, medium or low as measures of security levels do not give a quantitative representation of the network security level of a company. In this thesis, a method is developed to quantify the security risk level of known and unknown attacks in an enterprise network in an effort to solve this problem. The identified vulnerabilities in a case study of a UK-based company are classified according to their severity risk levels using the Common Vulnerability Scoring System (CVSS) and the Open Web Application Security Project (OWASP). Probability theory is applied to known attacks to create the security metrics, and a detection and prevention method is suggested for the company network against unknown attacks. Our security metrics are clear, repeatable and can be verified scientifically.
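One common way to combine per-vulnerability likelihoods into a single network-level figure, offered here only as an illustrative sketch and not as the thesis's exact formulation, is to treat exploitation of each known vulnerability as an independent event and compute the probability that at least one succeeds.

```python
# Illustrative sketch: the CVSS-to-probability mapping is an assumption, not the thesis's model.
def exploit_probability(cvss_score: float) -> float:
    """Crude mapping of a CVSS base score (0-10) to a per-vulnerability exploit probability."""
    return min(1.0, max(0.0, cvss_score / 10.0))

def network_risk(cvss_scores: list[float]) -> float:
    """Probability that at least one known vulnerability is exploited,
    assuming independent events: 1 - prod(1 - p_i)."""
    p_none = 1.0
    for score in cvss_scores:
        p_none *= 1.0 - exploit_probability(score)
    return 1.0 - p_none

print(round(network_risk([9.8, 6.5, 4.3]), 3))   # risk rises quickly with each severe flaw
```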
5 |
A Quantitative Security Assessment of Modern Cyber Attacks. A Framework for Quantifying Enterprise Security Risk Level Through System's Vulnerability Analysis by Detecting Known and Unknown Threats. Munir, Rashid. January 2014.
The Cisco 2014 Annual Security Report clearly outlines the evolution of the threat landscape and the increase in the number of attacks. The UK government in 2012 recognised the cyber threat as a Tier-1 threat, since about 50 government departments have been either subjected to an attack or a direct threat from an attack. Cyberspace has become the platform of choice for businesses, schools, universities, colleges, hospitals and other sectors for business activities. One of the major problems identified by the Department of Homeland Security is the lack of clear security metrics. The recent cyber security breach of the US retail giant Target is a typical example that demonstrates the weaknesses of qualitative security, also considered by some security experts as fuzzy security. High, medium or low as measures of security levels do not give a quantitative representation of the network security level of a company. In this thesis, a method is developed to quantify the security risk level of known and unknown attacks in an enterprise network in an effort to solve this problem. The identified vulnerabilities in a case study of a UK-based company are classified according to their severity risk levels using the Common Vulnerability Scoring System (CVSS) and the Open Web Application Security Project (OWASP). Probability theory is applied to known attacks to create the security metrics, and a detection and prevention method is suggested for the company network against unknown attacks. Our security metrics are clear, repeatable and can be verified scientifically.
6 |
[pt] CONSTRUÇÕES VERBAIS SERIADAS: UMA CARACTERIZAÇÃO INTERMODAL / [en] SERIAL VERB CONSTRUCTIONS: A CROSS-MODALITY CHARACTERIZATION. ISAAC GOMES MORAES DE SOUZA. 21 December 2023.
[pt] As CVSs têm sido amplamente descritas em línguas orais e são
produtivas em línguas de sinais. Elas se caracterizam como estruturas
multiverbais sem elemento coordenador manifesto, apresentando
compartilhamento de marcadores funcionais e de argumentos interno e
externo, semântica de evento único e prosódia monossentencial. O objetivo
deste trabalho é apresentar uma caracterização a partir de dados
translinguísticos e intermodais, sugerindo uma análise formal para o
fenômeno com base em uma caracterização pioneira dessas construções
em Libras. Duas tarefas de aceitabilidade gramatical, utilizando a técnica
playback, foram conduzidas com a participação de surdos nativos de
Libras, abordando sequências verbais seriadas simétricas e assimétricas.
Essa metodologia permitiu a obtenção de dados robustos sobre a estrutura
e o uso das CVSs em Libras. As observações empíricas indicam,
primeiramente, que as sentenças com empilhamento verbal em Libras são
distintas em termos semânticos e sintáticos quando comparadas às
sentenças com coordenação, tanto a coordenada explícita quanto a
encoberta. Além disso, as CVSs em Libras demonstraram ser produtivas e
apresentaram restrições semelhantes às observadas na literatura para
línguas orais e de sinais. Identificou-se também a produtividade das CVSs-sanduíches em Libras, que, apesar de compartilhar algumas semelhanças
com CVSs em outras línguas de sinais, comportam-se de maneira distinta,
funcionando como estruturas de foco com reduplicação verbal.
Adicionalmente, foram observadas as sequências de verbos AB com
mudança de perspectiva, embora sejam menos produtivas. Estas se
distanciam de estruturas passivas convencionais, assemelhando-se mais a
predicados complexos. Com base na literatura sobre o fenômeno e nos
dados obtidos em Libras, a análise teórica adotada sugere que as CVSs
em Libras envolvem a gramaticalização de um dos componentes verbais
seriados, atuando como marcador de aspecto e sendo incorporado na
estrutura como um elemento periférico à estrutura argumental projetada
pelo verbo não gramaticalizado. Este estudo oferece uma contribuição
significativa para o entendimento das CVSs em línguas de sinais,
demonstrando a complexidade intrínseca da estrutura linguística em Libras.
Além disso, abre perspectivas para futuras pesquisas na área da linguística
de línguas de sinais e para uma caracterização mais robusta das CVSs nas
línguas naturais. / [en] Serial verb constructions (SVCs) have been extensively described in
oral languages and are also productive in sign languages. These are
characterized as multi-verb sequences with no overt coordinating or
subordinating element. These sequences share functional markers related
to tense, aspect and negation, as well as the external and the
internal arguments. They denote a single event and have monosentential
prosody. The aim of this work is to present a characterization of SVCs,
based on crosslinguistic and intermodal data, proposing a formal analysis
for the phenomenon built upon first-hand data from Libras. Two
grammaticality judgment tasks using the playback technique were
conducted with the participation of native Libras signers, addressing both
symmetric and asymmetric SVCs. Our observations indicate, firstly, that
multiverb sequences in Libras are distinct in semantic and syntactic terms
when compared to overt and covert coordinated sentences. SVCs proved
to be productive in Libras and exhibited restrictions like those documented
in the literature for oral languages and other sign languages. Sandwiched
SVCs are also productive in Libras, but despite sharing some similarities
with SVCs in other sign languages, they behave differently, functioning as focus
structures with verbal reduplication. Sequences of AB verbs with change of
perspective, while less productive, were also observed. These contrast with
conventional passive structures, resembling complex predicates more
closely. Based on the theoretical and typological literature and on the
data collected in Libras, we adopted a syntactic analysis in which SVCs
involve the grammaticalization of one of the verbs in the sequence. This
grammaticalized form serves as an aspect marker, heading an aspect
projection at the left periphery of the argument structure projected by the
non-grammaticalized verb. This study offers a significant contribution to the
understanding of SVCs in sign languages, demonstrating the intrinsic
complexity of Libras grammar. Moreover, it opens new avenues for research
in the field of sign language linguistics and for a more robust
characterization of SVCs in natural languages.
7 |
Quantifying Computer Network Security. Burchett, Ian. 01 December 2011.
Simplifying network security data to the point that it is readily accessible and usable by a wider audience is becoming increasingly important as networks grow larger and security conditions and threats become more dynamic and complex, requiring a broader and more varied security staff. With the need for a simple metric to quantify the security level of a network, this thesis proposes simplifying a network's security risk level into a single metric. Methods for this simplification of an entire network's security level are applied to several characteristic networks. Computer network port vulnerabilities are identified from NIST's National Vulnerability Database (NVD), and, using NVD's Common Vulnerability Scoring System (CVSS) values, composite scores are created for each computer on the network; collectively, a composite score is then computed for the entire network, which accurately represents the health of the entire network. Special concerns about small numbers of highly vulnerable computers or especially critical members of the network are addressed.
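A hedged sketch of the kind of aggregation described follows; the choice of taking the worst score per host and double-weighting critical hosts is an assumption made for illustration, not the thesis's exact scheme.

```python
# Illustrative aggregation: the per-host and per-network combination rules are assumptions.
def host_score(cvss_scores: list[float]) -> float:
    """Composite score for one host from the CVSS scores of its port vulnerabilities."""
    return max(cvss_scores) if cvss_scores else 0.0   # a single critical flaw dominates the host

def network_score(hosts: dict[str, list[float]], critical_hosts: set[str]) -> float:
    """Weighted mean of host scores; critical members of the network count double."""
    total, weight = 0.0, 0.0
    for name, scores in hosts.items():
        w = 2.0 if name in critical_hosts else 1.0
        total += w * host_score(scores)
        weight += w
    return total / weight if weight else 0.0

hosts = {"web01": [7.5, 5.3], "db01": [9.8], "ws17": [4.3]}
print(round(network_score(hosts, critical_hosts={"db01"}), 2))   # one number for the whole network
```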
8 |
The Relative Security Metric of Information Systems: Using AIMD Algorithms. Owusu-Kesseh, Daniel. 28 June 2016.
No description available.
9 |
Návrh metody pro hodnocení bezpečnostních zranitelností systémů / Design of methodology for vulnerability assessment. Pecl, David. January 2020.
The thesis deals with the assessment of security vulnerabilities. The aim of this work is to create a new method of vulnerability assessment which better prioritizes critical vulnerabilities and reflects parameters that are not used in currently applied methods. Firstly, it describes the common methods used to assess vulnerabilities and the parameters used in each method. The first described method is the Common Vulnerability Scoring System, for which all three types of scores are described. The second analysed method is the OWASP Risk Rating Methodology. The second part is devoted to the design of our own method, which aims to assess vulnerabilities so that those with high priority are easier to identify. The method is based on three groups of parameters. The first group describes the technical assessment of the vulnerability, the second is based on the requirements to ensure the confidentiality, integrity and availability of the asset, and the third group of parameters evaluates the implemented security measures. All three groups of parameters are important for prioritization. The parameters describing the vulnerability are divided into permanent and up-to-date, where the most important up-to-date parameters are threat intelligence and ease of exploitation. The parameters of the impact on confidentiality, integrity and availability are linked to the priority of the asset and to the evaluation of security measures that increase the protection of confidentiality, integrity and availability. The priority of the asset and the quality of the countermeasures are assessed based on questionnaires, which are submitted to the owners of the examined assets as part of the vulnerability assessment. In the third part of the thesis, the method is compared with the currently widely used Common Vulnerability Scoring System. The strengths of the proposed method are shown in several examples. The effectiveness of prioritization is based primarily on the priority of the asset and the security measures in place. The method was practically tested in a laboratory environment, where vulnerabilities were introduced on several different assets. These vulnerabilities were assessed using the proposed method; the priority of the asset and the quality of the measures were considered, and everything was included in the vulnerability priority. This testing confirmed that the method more effectively prioritizes vulnerabilities that are easily exploitable, recently exploited by an attacker, and found on assets with minimal protection and higher priority.
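To make the three-group structure of the proposed method concrete, here is a small sketch; the 0-10 scales and the weights are assumptions made for illustration, not the values defined in the thesis.

```python
# Illustrative sketch: the 0-10 scales and the weights are assumed, not the thesis's values.
def vulnerability_priority(technical: float, asset_priority: float, countermeasures: float) -> float:
    """Combine the three parameter groups into one priority score (higher = fix sooner).

    technical        0-10: severity, ease of exploitation, threat-intelligence signal
    asset_priority   0-10: confidentiality/integrity/availability priority from the asset owner's questionnaire
    countermeasures  0-10: quality of implemented security measures (higher = better protected)
    """
    protection_gap = 10.0 - countermeasures          # weak measures raise the priority
    return 0.5 * technical + 0.3 * asset_priority + 0.2 * protection_gap

# An easily exploitable flaw on a high-priority, poorly protected asset ranks first.
print(round(vulnerability_priority(technical=8.0, asset_priority=9.0, countermeasures=2.0), 2))  # 8.3
```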
10 |
GUI nástroj na měření zranitelností systémů pomocí knihovny OpenSCAP / GUI Tool for Vulnerability Measurement Based on OpenSCAP Library. Oberreiter, Vladimír. January 2011.
This work describes the SCAP (Security Content Automation Protocol) standards for determining the level of computer security and the OpenSCAP library, which provides a framework for the SCAP standards. It also describes the design and creation of a security tool using the OpenSCAP library. This tool makes it possible to search for known and potential system vulnerabilities and to check the system configuration against previously set criteria.
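For orientation, the kind of configuration check such a tool automates can also be driven from the oscap command-line front end that ships with OpenSCAP. The sketch below shells out to oscap from Python; the data-stream path and profile identifier are placeholders, and this is illustrative only, not the thesis's GUI code.

```python
# Illustrative only: profile ID and SCAP content path are placeholders, not from the thesis.
import subprocess

def evaluate_profile(datastream: str, profile: str, results: str = "results.xml") -> int:
    """Run an XCCDF evaluation with the oscap CLI and return its exit code
    (0 = all rules pass, 2 = at least one rule failed)."""
    cmd = ["oscap", "xccdf", "eval", "--profile", profile, "--results", results, datastream]
    return subprocess.run(cmd).returncode

# Example call (placeholder content path and profile identifier):
# evaluate_profile("ssg-fedora-ds.xml", "xccdf_org.ssgproject.content_profile_standard")
```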