1 |
Um estudo sobre métricas e quantificação em segurança da informação / On the use of metrics and quantification in information security. Miani, Rodrigo Sanches, 1983-. 23 August 2018.
Advisors: Leonardo de Souza Mendes, Bruno Bogaz Zarpelão. Doctoral thesis (Doctorate in Electrical Engineering, Telecommunications and Telematics), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2013.
With the increase in the number and diversity of attacks, a critical concern for organizations is keeping their networks secure. To understand the actions that lead to successful attacks, and how they can be mitigated, researchers should identify and measure the factors that influence both attackers and victims. Quantifying security is particularly important for constructing relevant metrics that support the decisions needed to protect systems and networks. In this work, we propose solutions to support the development of security quantification models applied in real environments. Three approaches were used to investigate the problem: identifying issues in existing methods, evaluating metrics through empirical analysis, and conducting a survey on the use of metrics in practice. The studies were conducted using data provided by the University of Maryland and by the Security Incident Response Team (CAIS) of the Brazilian National Education and Research Network (RNP). Our results show that organizations could better manage security by employing security metrics, and that future work in this field hinges on studies of real systems.
|
2 |
Guesswork and Entropy as Security Measures for Selective Encryption. Lundin, Reine. January 2012.
More and more effort is being spent on security improvements in today's computer environments, with the aim of achieving an appropriate level of security. However, for small computing devices it might be necessary to reduce the computational cost imposed by security in order to gain reasonable performance and/or energy consumption. To accomplish this, selective encryption can be used, which provides confidentiality by encrypting only chosen parts of the information. Previous work on selective encryption has chiefly focused on how to reduce the computational cost while still making the information perceptually secure, but not on how computationally secure the selectively encrypted information is. Despite the efforts made, and due to the difficult nature of computer security, good quantitative assessment methods for computer security are still lacking. New ways of measuring security are therefore needed in order to better understand, assess, and improve the security of computer environments. Two proposed probabilistic quantitative security measures are entropy and guesswork. Entropy gives the average number of guesses in an optimal binary search attack, and guesswork gives the average number of guesses in an optimal linear search attack. In information theory, a considerable amount of research has been carried out on entropy and on entropy-based metrics. However, the same does not hold for guesswork. In this thesis, we evaluate the performance improvement when using the proposed generic selective encryption scheme. We also examine the confidentiality strength of selectively encrypted information by using and adapting entropy and guesswork. Moreover, since guesswork has been less theoretically investigated than entropy, we extend guesswork in several ways and investigate some of its properties.
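To make the two measures concrete, here is a minimal sketch (my own illustration, not code from the thesis) of how entropy and guesswork can be computed for a discrete probability distribution over, say, candidate keys:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: the average number of guesses in an
    optimal binary-search (halving) attack, as stated in the abstract."""
    return -sum(p * log2(p) for p in probs if p > 0)

def guesswork(probs):
    """Guesswork: the average number of guesses in an optimal linear-search
    attack that tries outcomes in order of decreasing probability."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# Example: a 4-symbol distribution
dist = [0.5, 0.25, 0.125, 0.125]
print(entropy(dist))    # 1.75 bits
print(guesswork(dist))  # 0.5*1 + 0.25*2 + 0.125*3 + 0.125*4 = 1.875 guesses
```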
|
3 |
A Quantified Model of Security Policies, with an Application for Injection-Attack Prevention. Ray, Donald James. 01 April 2016.
This dissertation generalizes traditional models of security policies, from specifications of whether programs are secure, to specifications of how secure programs are. This is a generalization from qualitative, black-and-white policies to quantitative, gray policies. Included are generalizations from traditional definitions of safety and liveness policies to definitions of gray-safety and gray-liveness policies. These generalizations preserve key properties of safety and liveness, including that the intersection of safety and liveness is a unique allow-all policy and that every policy can be written as the conjunction of a single safety and a single liveness policy. It is argued that the generalization provides several benefits, including that it serves as a unifying framework for disparate approaches to security metrics, and that it separates—in a practically useful way—specifications of how secure systems are from specifications of how secure users require their systems to be. To demonstrate the usefulness of the new model, policies for mitigating injection attacks (including both code- and noncode-injection attacks) are explored. These policies are based on novel techniques for detecting injection attacks that avoid many of the problems associated with existing mechanisms for preventing injection attacks.
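As a rough, type-level illustration only (this is a sketch of the qualitative-to-quantitative shift, not the dissertation's formal model), a black-and-white policy can be viewed as a map from executions to a yes/no verdict, while a gray policy maps executions to a degree of security:

```python
from typing import Callable, Sequence

Trace = Sequence[str]  # an execution modeled as a sequence of events (illustrative)

# Qualitative ("black-and-white") policy: is this execution secure?
QualitativePolicy = Callable[[Trace], bool]

# Quantitative ("gray") policy: how secure is this execution, e.g. on [0, 1]?
GrayPolicy = Callable[[Trace], float]

# A qualitative policy is the special case of a gray policy that only
# ever returns the extreme values 0.0 and 1.0.
def lift(p: QualitativePolicy) -> GrayPolicy:
    return lambda trace: 1.0 if p(trace) else 0.0
```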
|
4 |
Developing security metrics scorecard for health care organizations. Elrefaey, Heba. 22 January 2015.
Information security and privacy in health care is a critical issue: it is crucial to protect patients' privacy and to ensure the availability of systems at all times. Managing information security systems is a major part of managing information systems in health care organizations. The purpose of this study is to identify the security metrics that can be used in health care organizations and to provide security managers with a security metrics scorecard that enables them to measure the performance of the security system on a monthly basis. To accomplish this, a prototype with a suggested set of metrics was designed and examined in a usability study and semi-structured interviews. The study participants were security experts who work in health care organizations. In the study, security management in health care organizations was discussed, the preferred security metrics were identified, and the specifications of a usable security metrics scorecard were collected. Applying the study results to the scorecard prototype resulted in a security metrics scorecard that matches the security experts' recommendations.
|
5 |
Uma proposta de desenvolvimento de métricas para a rede da Unipampa (A proposal for the development of metrics for the Unipampa network). Nascimento, Tiago Belmonte. 25 July 2013.
One of the biggest challenges in the implementation of the Universidade Federal do Pampa (Unipampa) as a public university in the interior of the state of Rio Grande do Sul is the structuring of its data network. Due to its peculiarities, Unipampa's computer network needs efficient controls to ensure operation with stability and security. It therefore becomes essential to use reliable communication systems that interconnect all of its decentralized units. In general, the reliability of communication systems can be improved on three main fronts: 1) manipulation and encoding of information, 2) improvement of resources such as power and bandwidth in the physical communication channels, and 3) collection of metrics at the transmission and reception points. Aiming to contribute to this process, our research consisted of elaborating a proposal for the use of metrics in the security policy of this network, making vulnerability detection more efficient and guiding new security policies and investments. The 10 metrics presented, and the method used to generate them, can be applied to any network with characteristics similar to Unipampa's network.
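As a rough illustration only of how such a metric can feed a security policy (the metric, hosts, and threshold below are hypothetical, not among the thesis's 10 metrics):

```python
# Hypothetical metric: fraction of hosts with unpatched critical
# vulnerabilities, evaluated against a policy threshold.
scan_results = {
    "host-a": {"critical_vulns": 0},
    "host-b": {"critical_vulns": 2},
    "host-c": {"critical_vulns": 0},
}

def critical_exposure_ratio(results):
    affected = sum(1 for h in results.values() if h["critical_vulns"] > 0)
    return affected / len(results)

POLICY_THRESHOLD = 0.10  # hypothetical target set by the security policy

ratio = critical_exposure_ratio(scan_results)
print(f"critical exposure ratio: {ratio:.2f}")
print("policy satisfied" if ratio <= POLICY_THRESHOLD else "policy violated")
```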
|
6 |
Enhancing Information Security in Cloud Computing Services using SLA based metrics. Nia, Mganga Ramadianti Putri; Charles, Medard. January 2011.
Context: Cloud computing is a prospering technology that most organizations consider adopting as a cost-effective strategy for managing IT. However, organizations also still consider the technology to be associated with many unresolved business risks, including security, privacy, and legal and regulatory risks. As an initiative to address such risks, organizations can develop and implement an SLA to establish common expectations and goals between the cloud provider and the customer. Organizations can then use the SLA to measure the achievement of the outsourced service. However, many SLAs tend to focus on cloud computing performance while neglecting information security issues. Objective: We identify threats and security attributes applicable to cloud computing. We also select a framework suitable for identifying information security metrics. Moreover, we identify SLA-based information security metrics in the cloud in line with the COBIT framework. Methods: We conducted a systematic literature review (SLR) to identify studies focusing on information security threats in cloud computing. We also used the SLR to select frameworks available for the identification of security metrics. We used the Engineering Village and Scopus online citation databases as primary data sources for the SLR. Studies were selected based on the inclusion/exclusion criteria we defined, and a suitable framework was selected based on defined framework selection criteria. Based on the selected framework and a conceptual review of the COBIT framework, we identified SLA-based information security metrics in the cloud. Results: Based on the SLR, we identified security threats and attributes in the cloud. The Goal Question Metric (GQM) framework was selected as suitable for the identification of security metrics. Following the GQM approach and the COBIT framework, we identified ten areas that are essential to and related with information security in cloud computing. In addition, covering these ten essential areas, we identified 41 SLA-based information security metrics that are relevant for measuring and monitoring the security performance of cloud computing services. Conclusions: Cloud computing faces similar threats to traditional computing. Depending on the service and deployment model adopted, addressing security risks in the cloud may become a more challenging and complex undertaking. This situation therefore calls on cloud providers to fulfill their key responsibility of creating not only a cost-effective but also a secure cloud computing service. In this study, we assist both cloud providers and customers with the security issues to be considered for inclusion in their SLAs. We have identified 41 SLA-based information security metrics to help both cloud providers and customers establish common security performance expectations and goals. We anticipate that adoption of these metrics can help cloud providers enhance security in the cloud environment. The metrics will also assist cloud customers in evaluating the security performance of the cloud for improvements.
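As a hedged illustration of how the Goal Question Metric (GQM) approach can yield an SLA-ready security metric (the goal, question, and metric below are invented examples, not items from the authors' list of 41 metrics):

```python
# Illustrative GQM decomposition for one SLA security metric (hypothetical).
gqm_example = {
    "goal": "Limit the impact of security incidents on the cloud service",
    "questions": [
        {
            "question": "How quickly are reported security incidents resolved?",
            "metrics": [
                {
                    "name": "mean_time_to_resolve_incidents",
                    "unit": "hours",
                    "sla_target": "<= 24",
                    "measured_from": "incident tracking system, monthly",
                },
            ],
        },
    ],
}
```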
|
7 |
Security Benchmarking of Transactional Systems. Araujo Neto, Afonso Comba de. January 2012.
Most organizations nowadays depend on some kind of computer infrastructure to manage business-critical activities. This dependence grows as computer systems become more reliable and useful, but so do their complexity and size. Transactional systems, the database-centered applications used by most organizations to support daily tasks, are no exception. A typical solution for coping with systems complexity is to delegate the software development task and to use existing solutions that are independently developed and maintained (either proprietary or open source). The multiplicity of software and component alternatives available has boosted the interest in suitable benchmarks, able to assist in the selection of the best candidate solutions with respect to several attributes. However, the huge success of performance and dependability benchmarking markedly contrasts with the small advances in security benchmarking, which has only sparsely been studied in the past. This thesis discusses the security benchmarking problem and its main characteristics, particularly by comparing them with other successful benchmarking initiatives, such as performance and dependability benchmarking. Based on this analysis, a general framework for security benchmarking is proposed. This framework, suitable for most types of software systems and application domains, includes two main phases: security qualification and trustworthiness benchmarking. Security qualification is a process designed to evaluate the most obvious and identifiable security aspects of the system, dividing the evaluated targets into acceptable and unacceptable, given the specific security requirements of the application domain. Trustworthiness benchmarking, on the other hand, is an evaluation process applied to the qualified targets to estimate the probability that hidden or hard-to-detect security issues exist in a system (the main goal is to cope with the uncertainties related to security aspects). The framework is thoroughly demonstrated and evaluated in the context of transactional systems, which can be divided into two parts: the infrastructure and the business applications. As these parts have significantly different security goals, the framework is used to develop methodologies and approaches that fit their specific characteristics. First, the thesis proposes a security benchmark for transactional systems infrastructures and describes, discusses, and justifies all the steps of the process. The benchmark is applied to four distinct real infrastructures, and the results of the assessment are thoroughly analyzed. Still in the context of transactional systems infrastructures, the thesis also addresses the problem of selecting software components. This is complex because the security of an infrastructure cannot be fully evaluated before deployment. The proposed tool, aimed at helping in the selection of basic software packages to support the infrastructure, is used to evaluate seven different software packages, representative of the alternatives used in the deployment of real infrastructures. Finally, the thesis discusses the problem of designing trustworthiness benchmarks for business applications, focusing specifically on the case of web applications. First, a benchmarking approach based on static code analysis tools is proposed. Several experiments are presented to evaluate the effectiveness of the proposed metrics, including a representative experiment in which the challenge was to select the most secure application among a set of seven web forums. Based on an analysis of the limitations of this approach, a generic approach for the definition of trustworthiness benchmarks for web applications is then defined.
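A minimal sketch of the two-phase selection idea, under assumed qualification results and trustworthiness scores (illustrative values, not the thesis's actual benchmark or data): candidates are first filtered by security qualification, and only the qualified ones are ranked by trustworthiness.

```python
# Hypothetical candidate systems with a qualification verdict and a
# trustworthiness score (higher = fewer signs of hidden security issues).
candidates = {
    "system-a": {"passes_qualification": True,  "trustworthiness": 0.72},
    "system-b": {"passes_qualification": False, "trustworthiness": 0.91},
    "system-c": {"passes_qualification": True,  "trustworthiness": 0.64},
}

# Phase 1: security qualification (accept/reject on explicit requirements).
qualified = {name: c for name, c in candidates.items() if c["passes_qualification"]}

# Phase 2: trustworthiness benchmarking (rank only the qualified candidates).
ranking = sorted(qualified, key=lambda n: qualified[n]["trustworthiness"], reverse=True)
print(ranking)  # ['system-a', 'system-c']; system-b is excluded despite its score
```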
|
8 |
A Study of Scalability and Performance of Solaris Zones. Xu, Yuan. January 2007.
This thesis presents a quantitative evaluation of an operating system virtualization technology known as Solaris Containers or Solaris Zones, with a special emphasis on measuring the influence of a security technology known as Solaris Trusted Extensions. Solaris Zones is an operating system-level (OS-level) virtualization technology embedded in the Solaris OS that primarily provides containment of processes within the abstraction of a complete operating system environment. Solaris Trusted Extensions is a specific configuration of the Solaris operating system designed to offer multi-level security functionality.
Firstly, we examine the scalability of the OS with respect to an increasing number of zones. Secondly, we evaluate the performance of zones in three scenarios. In the first scenario we measure, as a baseline, the performance of Solaris Zones on a 2-CPU-core machine in the standard configuration distributed as part of the Solaris OS. In the second scenario we investigate the influence of the number of CPU cores. In the third scenario we evaluate the performance in the presence of the security configuration known as Solaris Trusted Extensions. To evaluate performance, we calculate a number of metrics using the AIM benchmark. We calculate these benchmarks for the global zone, for a non-global zone, and for increasing numbers of concurrently running non-global zones. We aggregate the results of the latter to compare aggregate system performance against single-zone performance.
The results of this study demonstrate the scalability and performance impact of Solaris Zones in the Solaris OS. On our chosen hardware platform, Solaris Zones scales to about 110 zones within a short creation time (i.e., less than 13 minutes per zone for installation, configuration, and boot). As the number of zones increases, the measured virtualization overhead amounts to less than a 2% performance decrease for most benchmarks, with one exception: the benchmarks for memory and process management typically show performance decreases of 5-12%, depending on the sub-benchmark. When evaluating the Trusted Extensions-based security configuration, additional small performance penalties were measured in the areas of disk/filesystem I/O and inter-process communication. Most benchmarks show that aggregate system performance is higher when distributing the system load across multiple zones than when running the same load in a single zone.
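As a small worked example of the kind of comparison reported here (the benchmark scores below are hypothetical, not the thesis's measurements), virtualization overhead and aggregate-versus-single-zone performance can be computed as:

```python
# Hypothetical AIM-style benchmark scores (jobs/minute); illustrative only.
native_score = 1000.0          # baseline: global zone only
single_zone_score = 985.0      # one non-global zone running the whole load
per_zone_scores = [260.0, 258.0, 255.0, 259.0]  # four concurrent zones sharing the load

overhead_pct = (native_score - single_zone_score) / native_score * 100
aggregate_score = sum(per_zone_scores)

print(f"virtualization overhead: {overhead_pct:.1f}%")          # 1.5%
print(f"aggregate vs single zone: {aggregate_score / single_zone_score:.2f}x")  # 1.05x
```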
|
9 |
A Framework Based On Continuous Security Monitoring. Erturk, Volkan. 01 December 2008.
Continuous security monitoring is the process of following up on IT systems by collecting measurements, reporting, and analyzing the results in order to compare the organization's security level along a continuous time axis and see how organizational security progresses over time. In the related literature, very little work has been done on continuously monitoring the security of organizations. In this thesis, a continuous security monitoring framework based on security metrics is proposed. Moreover, to reduce the burden of implementation, a software tool called SecMon is introduced. The implementation of the framework in a public organization shows that the proposed system is successful in building an organizational memory and in giving security stakeholders insight into the IT security level of the organization.
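A minimal sketch of the underlying idea of comparing a security metric along a time axis (hypothetical data and reporting format, not the SecMon tool itself):

```python
# Hypothetical monthly measurements of one security metric (e.g., number of
# unresolved audit findings); illustration only, not SecMon's data model.
history = {
    "2024-01": 18,
    "2024-02": 15,
    "2024-03": 11,
    "2024-04": 12,
}

months = sorted(history)

# Simple trend report comparing each month with the previous one.
for prev, cur in zip(months, months[1:]):
    delta = history[cur] - history[prev]
    direction = "improved" if delta < 0 else "worsened" if delta > 0 else "unchanged"
    print(f"{cur}: {history[cur]} open findings ({direction} by {abs(delta)} vs {prev})")
```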
|