1 |
Tourism in an unstable and complex world? : searching for a relevant political risk paradigm and model for tourism organisations. Piekarz, Mark J. January 2008 (has links)
This work has a single aim: developing a political risk model relevant to tourism organisations, which are operating in an increasingly complex and turbulent international environment. It pays particular attention to the language of risk (how risks are articulated and described), the culture of risk (how risks are viewed), and the risk process (how they are analysed and assessed). The work critically evaluates a variety of methods that can be utilised to scan, analyse and assess political hazards and risks. It finds that many of the existing methods of political and country risk assessment are limited and not sufficiently contextualised to the needs of the tourism industry. Whilst many models can have an attractive façade of using positivistic methods to calculate political risks, in practice these are fraught with problems. The study also highlights a more complex relationship between tourism and political instability, whereby tourism can be characterised as much by its robustness as by its sensitivity. A model is developed which primarily adapts a systems theory approach, whereby a language, culture and practical process are developed through which the analysis of various factors and indicators can take place. The approach adopted has a number of stages, which vary in the amount of data necessary for the analysis and assessment of political risks. The model begins by utilising existing travel advice databases, moving on to an analysis of the frequency of past events, then to the nature of the political system itself, and finishing with an analysis and assessment of more complex input factors and indicators which relate to notions of causation. One of the more provocative features of the model is the argument that it is more than possible to make an assessment of the risks that the political environment can pose to a tourism organisation without necessarily understanding theories of causation.
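The staged progression described in the abstract (travel advisories first, then event frequency, then the political system itself) can be sketched as a screening pipeline. Everything below — function names, thresholds, verdict labels — is a hypothetical illustration, not taken from the thesis:

```python
# Illustrative sketch of a staged political risk screen: each stage needs
# progressively more data, and a decisive verdict short-circuits later stages.
# All thresholds and labels are invented for illustration.

def advisory_stage(advisory_level: int) -> str:
    """Stage 1: coarse screen from an existing travel-advice database (0-4 scale)."""
    return "high" if advisory_level >= 3 else "continue"

def event_frequency_stage(incidents_per_year: float) -> str:
    """Stage 2: frequency of past destabilising events."""
    return "high" if incidents_per_year > 10 else "continue"

def system_stage(is_stable_polity: bool) -> str:
    """Stage 3: nature of the political system itself."""
    return "low" if is_stable_polity else "elevated"

def assess(advisory_level: int, incidents_per_year: float, is_stable_polity: bool) -> str:
    for verdict in (advisory_stage(advisory_level),
                    event_frequency_stage(incidents_per_year),
                    system_stage(is_stable_polity)):
        if verdict != "continue":
            return verdict
    return "low"

print(assess(1, 2.0, True))   # low
print(assess(4, 2.0, True))   # high
```

The short-circuiting mirrors the abstract's point that a usable risk assessment need not reach the final, causation-oriented stage.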
|
2 |
A model for the evaluation of control with reference to a simple path context model in a UNIX environment. 08 September 2015 (has links)
M.Com. / Information and the IT systems that support it are important business assets. Their availability, integrity and confidentiality are essential to maintaining an organisation's competitive edge, cash flow, profitability, company image and compliance with legal requirements. Organisations worldwide now face increased security threats from a wide range of sources. Information systems may be the target of a range of serious threats, including computer-based fraud, espionage, sabotage, vandalism and other sources of failure or disaster ...
|
3 |
Side Channel Leakage Analysis - Detection, Exploitation and Quantification. Ye, Xin. 27 January 2015 (has links)
Nearly twenty years ago, the discovery of side channel attacks warned the world that security is more than just a mathematical problem: serious consideration must be given to the implementation and its physical medium. Nowadays, the ever-growing ubiquity of computing calls for security solutions that keep pace. Although physical security has attracted increasing public attention, side channel security remains a problem that is far from completely solved. An important question is how much expertise is required of a side channel adversary. The essential interest is to explore whether detailed knowledge of the implementation and the leakage model is indispensable for a successful side channel attack. If such knowledge is not a prerequisite, attacks can be mounted even by inexperienced adversaries, and the threat from physical observables may be underestimated. Another urgent problem is how to secure a cryptographic system in the presence of unavoidable leakage. Although many countermeasures have been developed, their effectiveness awaits empirical verification, and side channel security needs to be evaluated systematically. The research in this dissertation focuses on two topics, leakage-model independent side channel analysis and security evaluation, which are described from three perspectives: leakage detection, exploitation and quantification. To free side channel analysis from the complicated procedure of leakage modeling, an observation-to-observation comparison approach is proposed. Several attacks presented in this work follow this approach. They exhibit efficient leakage detection and exploitation under various leakage models and implementations. More importantly, this achievement no longer relies on, or even requires, precise leakage modeling. For the security evaluation, a weak maximum likelihood approach is proposed. It quantifies the loss of full key security due to the presence of side channel leakage.
A constructive algorithm is developed following this approach. The algorithm can be used by a security lab to measure leakage resilience. It can also be used by a side channel adversary to determine whether limited side channel information suffices for full key recovery at affordable expense.
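To make "leakage detection" concrete, here is a minimal sketch of the standard fixed-vs-random t-test (TVLA-style) leakage check. This is a common textbook technique, not the thesis's observation-to-observation method, and the trace data is invented:

```python
# Welch's t-test between power traces recorded for a fixed input and for
# random inputs; a large |t| indicates data-dependent (i.e. exploitable) leakage.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic between two sample sets at one time point."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((variance(a) / na + variance(b) / nb) ** 0.5)

def leaks(fixed_traces, random_traces, threshold=4.5):
    """|t| > 4.5 is the conventional evidence-of-leakage threshold."""
    return abs(welch_t(fixed_traces, random_traces)) > threshold

# Toy data: the 'fixed' set sits at a visibly different mean level.
fixed = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03]
rand_ = [0.50, 0.55, 0.45, 0.52, 0.48, 0.53]
print(leaks(fixed, rand_))  # True
```

In practice the test is run per sample point over many thousands of traces; the point here is only that leakage can be detected without any model of *how* the device leaks, which is the spirit of model-independent analysis.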
|
4 |
Evaluation of the Security of Components in Distributed Information Systems / Värdering av komponenters säkerhet i distribuerade informationssystem. Andersson, Richard. January 2003 (has links)
This thesis suggests a security evaluation framework for distributed information systems, comprising a system modelling technique and an evaluation method. The framework is flexible and divides the problem space into smaller, more manageable subtasks, with the means to focus on specific problems, aspects or system scopes. The information system is modelled by dividing it into increasingly smaller parts, evaluating the separate parts, and then building the system up “bottom up” by combining the components. Evaluated components are stored as reusable instances in a component library. The evaluation method focuses on technological components and is based on the Security Functional Requirements (SFR) of the Common Criteria. The method consists of the following steps: (1) define several security values with different aspects, to get variable evaluations; (2) adapt and establish the set of SFR to fit the thesis; (3) interpret evaluated security functions, and possibly translate them to CIA or PDR; (4) map characteristics from system components to SFR; and (5) combine evaluated components into an evaluated subsystem. An ontology is used to structure, in a versatile and dynamic way, the taxonomy and relations of the system components, the security functions, the security values and the risk handling. It is also a step towards defining a common terminology for IT security.
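The "bottom up" combination step can be sketched with a toy rule. The weakest-link rule and the 0-10 CIA scoring below are hypothetical placeholders, not the combination rule defined in the thesis:

```python
# Hypothetical illustration of bottom-up composition: each evaluated component
# carries per-aspect security scores, and a subsystem score is derived from
# its parts. Weakest-link (min) is one plausible combination rule.

def combine(component_scores):
    """A subsystem is no stronger than its weakest evaluated component."""
    return {aspect: min(s[aspect] for s in component_scores)
            for aspect in component_scores[0]}

# Two evaluated components, scored 0-10 on the classic CIA aspects.
db  = {"confidentiality": 7, "integrity": 8, "availability": 6}
web = {"confidentiality": 5, "integrity": 9, "availability": 8}
print(combine([db, web]))  # {'confidentiality': 5, 'integrity': 8, 'availability': 6}
```

Storing results like `db` and `web` in a library is what makes evaluated components reusable across system models.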
|
5 |
CAESAR : A proposed method for evaluating security in component-based distributed information systems. Peterson, Mikael. January 2004 (has links)
Background: The network-centric defense requires a method for securing vast dynamic distributed information systems. Currently, there are no efficient methods for establishing the level of IT security in such systems.

Purpose: The target of this thesis was to design a method capable of determining the level of IT security of vast dynamic component-based distributed information systems.

Method: The work was carried out by first defining concepts of IT security and distributed information systems and by reviewing basic measurement and modeling theory. Thereafter, previous evaluation methods aimed at determining the level of IT security of distributed information systems were reviewed. Last, using the theoretic foundation and the ideas from the reviewed efforts, a new evaluation method, aimed at determining the level of IT security of vast dynamic component-based distributed information systems, was developed.

Results: This thesis outlines a new method, CAESAR, capable of predicting the security level in parts of, or an entire, component-based distributed information system. The CAESAR method consists of a modeling technique and an evaluation algorithm. In addition, a Microsoft Windows compliant software tool, ROME, which allows the user to easily model and evaluate distributed systems using the CAESAR method, is made available.
|
8 |
Evaluation of Current Practical Attacks Against RFID Technology. Kashfi, Hamid. January 2014 (has links)
Radio Frequency Identification (RFID) is a technology that has been around for three decades now. It is used in various scenarios in technologically modern societies around the world and is becoming a crucial part of our daily life. But we often forget how the underlying technology is designed to work, or whether it is as trustworthy and secure as we think. While RFID technology and the protocols involved have been designed with an acceptable level of security in mind, not all implementations and use cases are as secure as consumers believe. A majority of deployed implementations and products suffer from known and critical security issues. This thesis work starts with an introduction to RFID standards and how the technology works. This is followed by a taxonomy of known attacks and threats affecting RFID, which avoids going into too much technical detail but provides references for further research and study of every part and attack. RFID security threats are then reviewed from a risk management point of view, linking the introduced attacks to the security principles they affect. We also review the security standards and guidelines (and the lack thereof) that can help mitigate the introduced threats. Finally, to demonstrate how practical and serious these threats are, three real-world case studies are presented, in which we break the security of widely used RFID implementations. At the end we also highlight domains in RFID security that can be researched further, and the materials we are currently missing that could be used to raise awareness and increase the security of RFID technology for consumers. The goal of this thesis report is to familiarize readers with all of the publicly documented and known security issues of RFID technology, so that they can get a sense of the security state of their systems.
Without getting into too much technical detail about every attack vector, or going through tens of different books and papers, readers can use this report as a comprehensive reference to educate themselves about all known attacks against RFID published up to the date of writing.
|
9 |
Security Benchmarking of Transactional Systems. Araujo Neto, Afonso Comba de. January 2012 (has links)
Most organizations nowadays depend on some kind of computer infrastructure to manage business critical activities. This dependence grows as computer systems become more reliable and useful, but so do the complexity and size of systems. Transactional systems, which are database-centered applications used by most organizations to support daily tasks, are no exception. A typical solution to cope with systems complexity is to delegate the software development task, and to use existing solutions independently developed and maintained (either proprietary or open source). The multiplicity of software and component alternatives available has boosted the interest in suitable benchmarks, able to assist in the selection of the best candidate solutions with respect to several attributes. However, the huge success of performance and dependability benchmarking markedly contrasts with the small advances in security benchmarking, which has only sparsely been studied in the past. This thesis discusses the security benchmarking problem and its main characteristics, particularly comparing these with other successful benchmarking initiatives, like performance and dependability benchmarking. Based on this analysis, a general framework for security benchmarking is proposed. This framework, suitable for most types of software systems and application domains, includes two main phases: security qualification and trustworthiness benchmarking.

Security qualification is a process designed to evaluate the most obvious and identifiable security aspects of the system, dividing the evaluated targets into acceptable and unacceptable, given the specific security requirements of the application domain. Trustworthiness benchmarking, on the other hand, consists of an evaluation process that is applied over the qualified targets to estimate the probability of the existence of hidden or hard to detect security issues in a system (the main goal is to cope with the uncertainties related to security aspects). The framework is thoroughly demonstrated and evaluated in the context of transactional systems, which can be divided into two parts: the infrastructure and the business applications. As these parts have significantly different security goals, the framework is used to develop methodologies and approaches that fit their specific characteristics. First, the thesis proposes a security benchmark for transactional systems infrastructures and describes, discusses and justifies all the steps of the process. The benchmark is applied to four distinct real infrastructures, and the results of the assessment are thoroughly analyzed. Still in the context of transactional systems infrastructures, the thesis also addresses the problem of selecting software components. This is complex, as the security of such an infrastructure cannot be evaluated before deployment. The proposed tool, aimed at helping in the selection of basic software packages to support the infrastructure, is used to evaluate seven different software packages, representative alternatives for the deployment of real infrastructures. Finally, the thesis discusses the problem of designing trustworthiness benchmarks for business applications, focusing specifically on the case of web applications. First, a benchmarking approach based on static code analysis tools is proposed. Several experiments are presented to evaluate the effectiveness of the proposed metrics, including a representative experiment where the challenge was the selection of the most secure application among a set of seven web forums. Based on the analysis of the limitations of that approach, a generic approach for the definition of trustworthiness benchmarks for web applications is then defined.
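A trustworthiness benchmark built on static-analysis output can be sketched as a simple ranking by weighted vulnerability density. The weights, field names and numbers below are invented for illustration; the thesis defines its own metrics:

```python
# Toy trustworthiness ranking: score each candidate application by its
# severity-weighted static-analysis findings per thousand lines of code
# (lower is better), then pick the best-ranked candidate.

SEVERITY_WEIGHT = {"high": 5, "medium": 3, "low": 1}

def untrustworthiness(findings, kloc):
    """Weighted findings per KLOC; lower suggests fewer hidden issues."""
    return sum(SEVERITY_WEIGHT[sev] * n for sev, n in findings.items()) / kloc

# Hypothetical static-analysis results for two web forum packages.
forums = {
    "forum_a": ({"high": 2, "medium": 5, "low": 10}, 40.0),
    "forum_b": ({"high": 0, "medium": 3, "low": 20}, 55.0),
}
ranked = sorted(forums, key=lambda name: untrustworthiness(*forums[name]))
print(ranked[0])  # forum_b  (lowest weighted vulnerability density)
```

The point of such a metric is relative comparison under uncertainty, not an absolute guarantee: a low score only suggests a lower probability of hidden issues.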
|
10 |
A Quantitative Evaluation Framework for Component Security in Distributed Information Systems / Ett kvantitativt utvärderingsramverk för komponenters säkerhet i distribuerade informationssystem. Bond, Anders; Påhlsson, Nils. January 2004 (has links)
The Heimdal Framework presented in this thesis is a step towards an unambiguous framework that reveals the objective strengths and weaknesses of the security of components. It provides a way to combine different aspects affecting the security of components - such as category requirements, implemented security functionality and the environment in which they operate - in a modular way, making each module replaceable in the event that a more accurate module is developed.

The environment is assessed and quantified through a methodology presented as part of the Heimdal Framework. The result of the evaluation is quantitative data, which can be presented with varying degrees of detail, reflecting the needs of the evaluator.

The framework is flexible and divides the problem space into smaller, more manageable subtasks, with the means to focus on specific problems, aspects or system scopes. The evaluation method focuses on technological components and is based on, but not limited to, the Security Functional Requirements (SFR) of the Common Criteria.
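The modular, replaceable-module idea can be sketched as pluggable scoring functions combined into one quantitative result. The weighted-sum aggregation and all names below are placeholders, not the Heimdal Framework's actual formulas:

```python
# Sketch of modular evaluation: each module scores one aspect (e.g. category
# requirements, operating environment) on a 0-1 scale and can be swapped out
# independently when a more accurate module is developed.
from typing import Callable, Dict

Module = Callable[[dict], float]  # maps raw component data to a 0-1 score

def evaluate(component: dict, modules: Dict[str, Module],
             weights: Dict[str, float]) -> float:
    """Aggregate per-module scores into one quantitative security value."""
    return sum(weights[name] * mod(component) for name, mod in modules.items())

modules = {
    "requirements": lambda c: c["sfr_met"] / c["sfr_total"],  # fraction of SFR met
    "environment":  lambda c: 1.0 - c["env_exposure"],        # less exposure is better
}
weights = {"requirements": 0.6, "environment": 0.4}

comp = {"sfr_met": 18, "sfr_total": 20, "env_exposure": 0.25}
print(round(evaluate(comp, modules, weights), 2))  # 0.84
```

Because each module is just a function, replacing, say, the environment assessment with a finer-grained one changes only one dictionary entry, which is the replaceability property the abstract emphasises.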
|