31 |
Improving operating systems security: two case studies
Wei, Jinpeng, 14 August 2009 (has links)
Malicious attacks on computer systems attempt to obtain and maintain illicit control over the victim system. To obtain unauthorized access, they often exploit vulnerabilities in the victim system, and to maintain illicit control, they apply various hiding techniques to remain stealthy. In this dissertation, we discuss and present solutions for two classes of security problems: TOCTTOU (time-of-check-to-time-of-use) and K-Queue. TOCTTOU is a vulnerability that can be exploited to obtain unauthorized root access, and K-Queue is a hiding technique that can be used to maintain stealthy control of the victim kernel.
The first security problem is TOCTTOU, a race condition in Unix-style file systems in which an attacker exploits the small timing gap between a file system call that checks a condition and a subsequent kernel call that uses the checked resource. Our contributions on TOCTTOU include: (1) a model that enumerates the complete set of potential TOCTTOU vulnerabilities; (2) a set of tools that detect TOCTTOU vulnerabilities in Linux applications such as vi, gedit, and rpm; (3) a theoretical as well as an experimental evaluation of security risks, showing that TOCTTOU vulnerabilities can no longer be considered "low risk" given the wide-scale deployment of multiprocessors; (4) an event-driven protection mechanism and its implementation that defend Linux applications against TOCTTOU attacks with low performance overhead.
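As a concrete illustration of the check/use gap (the canonical access()/open() pattern, not code from the dissertation):

```c
/* Classic TOCTTOU-vulnerable pattern: access() is the check, open() the use.
   Between the two calls an attacker can replace /tmp/userfile with a symlink
   to a privileged file, which the (e.g. setuid-root) process then writes to. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/userfile";
    if (access(path, W_OK) == 0) {          /* time-of-check */
        /* <-- attack window: path can be swapped for a symlink here */
        int fd = open(path, O_WRONLY);      /* time-of-use */
        if (fd >= 0) {
            (void)write(fd, "data\n", 5);
            close(fd);
        }
    }
    return 0;
}
```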
The second security problem addressed in this dissertation is the kernel queue, or K-Queue, which an attacker can use to achieve continual malicious function execution without persistently changing either kernel code or data; this prevents state-of-the-art kernel integrity monitors such as CFI and SBCFI from detecting such attacks. Building on our successful defense against a concrete instance of K-Queue-driven attacks that use the soft timer mechanism, we design and implement a solution to the general class of K-Queue-driven attacks, including (1) a unified static analysis framework and toolset that generate specifications of legitimate K-Queue requests, and the corresponding checker code, in an automated way; (2) a runtime reference monitor that validates K-Queue invariants and guards them against tampering; and (3) a comprehensive experimental evaluation of our static analysis framework and K-Queue Checkers.
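To make the hiding technique concrete, here is a minimal sketch of the K-Queue pattern using the Linux soft timer API (an illustration of the attack class described above, not the dissertation's code; timer_setup and mod_timer are the modern kernel interfaces):

```c
/* Sketch of a K-Queue-style attack: the malicious logic lives in a callback
   reached through a function pointer in a dynamic kernel queue (here, a soft
   timer), so kernel text and static data never change -- exactly the blind
   spot of integrity monitors that only check persistent state. */
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list ktimer;

static void stealth_callback(struct timer_list *t)
{
    /* malicious work would happen here */
    mod_timer(&ktimer, jiffies + HZ);   /* re-arm: continual execution */
}

static int __init kq_init(void)
{
    timer_setup(&ktimer, stealth_callback, 0);
    mod_timer(&ktimer, jiffies + HZ);   /* fire roughly once per second */
    return 0;
}

static void __exit kq_exit(void)
{
    del_timer_sync(&ktimer);
}

module_init(kq_init);
module_exit(kq_exit);
MODULE_LICENSE("GPL");
```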
|
32 |
Software Architecture-Based Failure Prediction
Mohamed, Atef, 28 September 2012 (has links)
Given the role software plays in everyday life, the cost of a software failure can be unaffordable. During system execution, errors may occur in system components, and failures may be manifested as a consequence. These errors differ in their effects on system behavior and in the manner in which failures subsequently manifest. Predicting failures before they manifest is important for system resilience: it helps avoid the cost of failures and enables systems to take corrective action before a failure occurs. However, effective runtime error detection and failure prediction techniques face a prohibitive challenge in representing the control flow of large software systems with intricate control flow structures. In this thesis, we provide a technique for predicting failures from the runtime errors of large software systems. To avoid the difficulties and inaccuracies of existing Control Flow Graph (CFG) structures, we first propose a Connection Dependence Graph (CDG) for control flow representation of large software systems; we describe the CDG structure and explain how to derive it from program source code. Second, we utilize the proposed CDG to provide a connection-based signature approach for control flow error detection; we describe the monitor structure and present the error-checking algorithm. Finally, we utilize the detected errors and erroneous state parameters to predict failure occurrences and failure modes at system runtime. We craft runtime signatures based on these errors and state parameters, and, using system error and failure history, we determine a predictive function (an estimator) for each failure mode based on these signatures. Our experimental evaluation of these techniques uses a large open-source software system (the PostgreSQL 8.4.4 database). The results show highly efficient control flow representation, error detection, and failure prediction. This work contributes to software reliability by providing a simple and accurate control flow representation and by using it to detect runtime errors and predict failure occurrences and modes with high accuracy. / Thesis (Ph.D, Computing) -- Queen's University, 2012-09-25 23:44:12.356
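To illustrate the flavor of connection-based control flow checking (a generic sketch under simplifying assumptions, not the thesis's CDG-based algorithm):

```c
/* A runtime monitor that validates observed control transfers against a
   precomputed set of legitimate connections, flagging anything else as a
   control flow error that can feed a failure predictor. */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 4

/* adjacency matrix of legitimate transfers, derived offline from the source */
static const bool legal[NBLOCKS][NBLOCKS] = {
    /* 0 -> 1, 0 -> 2 */ {false, true,  true,  false},
    /* 1 -> 3         */ {false, false, false, true },
    /* 2 -> 3         */ {false, false, false, true },
    /* 3 (exit)       */ {false, false, false, false},
};

static int current = 0;   /* block currently executing */

/* called by instrumentation at every block entry */
static bool check_transfer(int next)
{
    if (!legal[current][next]) {
        fprintf(stderr, "control flow error: %d -> %d\n", current, next);
        return false;     /* caller can record the erroneous-state signature */
    }
    current = next;
    return true;
}

int main(void)
{
    check_transfer(1);    /* legal: 0 -> 1 */
    check_transfer(3);    /* legal: 1 -> 3 */
    check_transfer(2);    /* illegal: 3 -> 2, reported */
    return 0;
}
```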
|
33 |
A Framework for Metamorphic Malware Analysis and Real-Time Detection
Alam, Shahid, 19 August 2014 (has links)
Metamorphism is a technique that mutates binary code using different obfuscations. Writing new metamorphic malware is difficult, so in general malware writers reuse old malware; to evade detection, they change the obfuscations (syntax) more than the behavior (semantics) of the new variant. Based on this observation, this thesis presents a new framework named MARD for Metamorphic Malware Analysis and Real-Time Detection. We also introduce a new intermediate language named MAIL (Malware Analysis Intermediate Language). Each MAIL statement is assigned a pattern that can be used to annotate a control flow graph for pattern matching, in order to analyse and detect metamorphic malware. MARD uses MAIL to achieve platform independence, automation, and optimization of metamorphic malware analysis and detection. As part of the new framework, to build a behavioral signature and detect metamorphic malware in real time, we propose two novel techniques, named ACFG (Annotated Control Flow Graph) and SWOD-CFWeight (Sliding Window of Difference and Control Flow Weight). Unlike other techniques, ACFG provides faster matching of CFGs without compromising detection accuracy; it can handle malware with smaller CFGs, and it carries more information, and hence provides more accuracy, than a plain CFG. SWOD-CFWeight mitigates and addresses key issues in current techniques related to changes in opcode frequencies, such as those caused by different compilers, compiler optimizations, operating systems, and obfuscations. The size of the SWOD can change, which gives anti-malware tool developers the ability to select parameter values that further optimize malware detection. CFWeight captures the control flow semantics of a program to an extent that helps detect metamorphic malware in real time. Experimental evaluation of the two proposed techniques on an existing dataset achieved detection rates in the range of 94% to 99.6% and false positive rates in the range of 0.93% to 12.44%. Compared to ACFG, SWOD-CFWeight significantly improves detection time and is suitable where detection time matters most, as in real-time (practical) anti-malware applications. / Graduate / 0984 / alam_shahid@yahoo.com
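To suggest how pattern annotation enables matching (a deliberately simplified sketch; MAIL's actual pattern set and MARD's matcher are richer):

```c
/* Each CFG node is annotated with a coarse statement pattern; two annotated
   CFGs are compared by the fraction of matching pattern sequences, so
   register renaming or junk-code reordering that preserves the patterns
   still yields a match. */
#include <stdio.h>

typedef enum { P_ASSIGN, P_CONTROL, P_CALL, P_JUMP } pattern_t;

/* pattern sequences of a known malware CFG and a suspect CFG (hypothetical) */
static const pattern_t sig[]     = { P_ASSIGN, P_CONTROL, P_CALL, P_JUMP };
static const pattern_t suspect[] = { P_ASSIGN, P_CONTROL, P_CALL, P_JUMP };

int main(void)
{
    int n = (int)(sizeof(sig) / sizeof(sig[0]));
    int hits = 0;
    for (int i = 0; i < n; i++)
        if (sig[i] == suspect[i])
            hits++;
    /* report a detection when similarity exceeds a tuned threshold */
    double similarity = (double)hits / n;
    printf("similarity: %.2f -> %s\n", similarity,
           similarity > 0.9 ? "malware" : "benign");
    return 0;
}
```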
|
34 |
Optical flow based obstacle avoidance for micro air vehicles
Jain, Ashish, January 2005
Thesis (M.S.)--University of Florida, 2005. / Title from title page of source document. Document formatted into pages; contains 42 pages. Includes vita. Includes bibliographical references.
|
35 |
Implementation and evaluation of the ACCE technique for detection and correction of control flow errors in LLVM
Parizi, Rafael Baldiati, January 2013 (has links)
Fault prevention techniques such as testing and software verification are not sufficient to make systems dependable, since they cannot handle faults caused by external events such as transient faults. In such situations it is necessary to apply techniques that can handle and tolerate faults occurring during software execution. Most transient fault tolerance techniques focus on the detection and correction of control flow errors, which can account for up to 70% of the errors caused by this kind of fault. These techniques handle faults at the software level, modifying the program by inserting new instructions that capture and correct unexpected jumps during execution; ACCE is one of the best-known such techniques. In this work, ACCE was implemented as a program transformation pass for the LLVM compiler infrastructure. ACCE operates on the intermediate representation of programs compiled with LLVM, which yields portability across both programming languages and machine architectures. Beyond implementing the technique as a transformation pass, LLVM was also used to run experiments evaluating the impact on ACCE's effectiveness when it is applied to programs previously optimized by other transformations. This kind of evaluation is essential, since programs are rarely compiled without optimizations enabled, and, to the best of our knowledge, it had never been done before. The experiments consisted of fault injection campaigns on programs from the MiBench benchmark suite, divided into different scenarios, evaluating ACCE's fault correction when applied to programs optimized by individual transformations and by combinations of transformations. The results show that ACCE is effective at correcting faults; however, for some programs optimized by certain transformations, the correction rate dropped. This work analyzes the experiments in which ACCE's effectiveness was reduced and points out possible causes.
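To illustrate the class of techniques ACCE belongs to, here is a minimal sketch of signature-based control flow checking with correction (an illustration of the general idea in source form, not the thesis's LLVM pass, which works on the intermediate representation):

```c
/* Every basic block gets a compile-time ID; instrumentation before each legal
   branch announces the target, and a check at block entry compares it against
   the block's own ID. An illegal jump skips the announcement, so the check
   fires and control is redirected to correction code. */
#include <stdio.h>

static int expected = 0;   /* signature register updated before each branch */

#define SET_TARGET(id)  (expected = (id))
#define CHECK_BLOCK(id)                                                 \
    do {                                                                \
        if (expected != (id)) {                                         \
            fprintf(stderr, "control flow error at block %d\n", (id));  \
            expected = (id);    /* correction: resume at this block */  \
        }                                                               \
    } while (0)

int main(void)
{
    CHECK_BLOCK(0);
    SET_TARGET(1);      /* legal branch 0 -> 1 announces its target */
    CHECK_BLOCK(1);     /* passes: expected == 1 */
    /* a transient fault in the program counter would land on a block whose
       ID differs from `expected`, triggering detection and correction: */
    CHECK_BLOCK(2);     /* simulated illegal entry: reported and corrected */
    return 0;
}
```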
|
37 |
Open-source Workflow Evaluation: An evaluation of the Activiti BPM Platform
Nilsson, Mikael, January 2012 (has links)
No description available.
|
38 |
Mitigation of Insider Attacks for Data Security in Distributed Computing Environments
Aditham, Santosh, 30 March 2017 (has links)
In big data systems, the infrastructure is such that large amounts of data are hosted away from the users, and information security is a major challenge. From the customer's perspective, one of the big risks in adopting big data systems is trusting the service provider, who designs and owns the infrastructure, with data security and privacy. However, big data frameworks typically focus on performance, and the opportunity for including enhanced security measures is limited. In this dissertation, the problem of mitigating insider attacks is extensively investigated, and several static and dynamic runtime techniques are developed. The proposed techniques are targeted at big data systems but are applicable to any data system in general.
First, a framework is developed to host the proposed security techniques and to integrate with the underlying distributed computing environment. We endorse the idea of deploying this framework on special-purpose hardware, and a basic model of the software architecture for such security coprocessors is presented. Then, a set of compile-time and runtime techniques is proposed to protect user data from perpetrators. These techniques target the detection of insider attacks that exploit data and infrastructure. The compile-time intrusion detection techniques analyze control flow by disassembling program binaries, while the runtime techniques analyze the memory access patterns of processes running on the system.
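As an illustration of the compile-time side (a minimal sketch using the Capstone disassembly library; the dissertation's actual toolchain is not specified here):

```c
/* Disassemble a code buffer with Capstone and report the control-transfer
   instructions that would become CFG edges. The byte sequence below is a
   hypothetical stand-in for a real program binary. */
#include <stdio.h>
#include <inttypes.h>
#include <capstone/capstone.h>

int main(void)
{
    const uint8_t code[] = { 0x55,                          /* push rbp */
                             0xe8, 0x00, 0x00, 0x00, 0x00,  /* call +0  */
                             0xc3 };                        /* ret      */
    csh handle;
    cs_insn *insn;

    if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
        return 1;
    cs_option(handle, CS_OPT_DETAIL, CS_OPT_ON);  /* needed for group info */

    size_t count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
    for (size_t i = 0; i < count; i++) {
        int transfers = cs_insn_group(handle, &insn[i], CS_GRP_JUMP) ||
                        cs_insn_group(handle, &insn[i], CS_GRP_CALL) ||
                        cs_insn_group(handle, &insn[i], CS_GRP_RET);
        printf("0x%" PRIx64 ":\t%s\t%s%s\n", insn[i].address,
               insn[i].mnemonic, insn[i].op_str,
               transfers ? "   <- control transfer (CFG edge)" : "");
    }
    cs_free(insn, count);
    cs_close(&handle);
    return 0;
}
```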
The proposed techniques have been implemented as prototypes and extensively tested using big data applications. Experiments were conducted on big data frameworks such as Hadoop and Spark using cloud-based services. Experimental results indicate that the proposed techniques successfully detect insider attacks in the context of data loss, data degradation, data exposure and infrastructure degradation.
|
39 |
A software component model that is both control-driven and data-driven
Safie, Lily Suryani Binti, January 2012 (has links)
A software component model is the cornerstone of any Component-based Software Development (CBSD) methodology. Such a model defines the modelling elements for constructing software systems. In software system modelling, it is necessary to capture the three elements of a system's behaviour: (i) control, (ii) computation, and (iii) data. Within a system, computations are performed according to the flow of control or the flow of data, depending on whether computations are control-driven or data-driven. Computations are function evaluations, assignments, etc., which transform data when invoked by control or data flow. A component model should therefore be able to model control flow, data flow, and computations. Current component models all model computations but, besides computations, tend to model either control flow only or data flow only, not both. In this thesis, we present a new component model which can model both control flow and data flow. It contains modelling elements that capture control flow and data flow explicitly, and the modelling of control flow is separate from that of data flow; this enables the modelling of both control-driven and data-driven computations. The feasibility of the model is shown by an implementation in the form of a prototype tool. The usefulness of the model is then demonstrated for a specific domain, the embedded systems domain, as well as for a generic domain. For the embedded systems domain, unlike current models, our model can be used to construct systems that are both control-driven and data-driven. In a generic domain, our model can be used to construct domain models, by constructing control flows and data flows which together define a domain model.
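As a generic illustration of the distinction (a hypothetical sketch, not the thesis's component model): the same computation can be reached through an explicit control connector or fired by the arrival of data on a channel:

```c
/* A component exposes a computation; a control connector invokes it
   explicitly (control-driven), while a data channel fires it when a data
   token arrives (data-driven). The two flows are separate elements. */
#include <stdio.h>

typedef struct {
    const char *name;
    int (*compute)(int);          /* the computation element */
} component_t;

/* control-driven invocation: an explicit flow of control calls the component */
static int control_invoke(const component_t *c, int arg)
{
    return c->compute(arg);
}

/* data-driven invocation: the arrival of a data token fires the computation */
typedef struct {
    const component_t *sink;      /* component connected to this channel */
} data_channel_t;

static void channel_put(const data_channel_t *ch, int token)
{
    printf("%s fired by data arrival: %d\n",
           ch->sink->name, ch->sink->compute(token));
}

static int doubler(int x) { return 2 * x; }

int main(void)
{
    const component_t comp = { "doubler", doubler };
    const data_channel_t ch = { &comp };

    /* same computation, two distinct flow mechanisms */
    printf("control-driven call: %d\n", control_invoke(&comp, 21));
    channel_put(&ch, 21);
    return 0;
}
```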
|
40 |
Optical control of polariton condensation and dipolaritons in coupled quantum wells
Cristofolini, Peter, January 2015 (has links)
Polaritons are lightweight bosonic quasiparticles that result from the strong coupling of light with an exciton transition inside a microcavity. A sufficiently dense cloud of polaritons condenses into a polariton condensate, a state of matter showing macroscopic coherence and superfluid properties, whose dynamics are shaped by the cycle of constant pumping and decay of polaritons. This thesis begins with an introduction to the particle and wave properties of the polariton condensate, followed by a theoretical description of two-dimensional Bose-Einstein condensation (BEC) and a section on the simulation of polariton condensates. The optical setup and the microcavity sample are presented thereafter, including holographic laser shaping with a spatial light modulator (SLM), which allows exciting the microcavity with arbitrarily shaped pump geometries. The experimental results comprise optical control of polariton condensates and dipolaritons. First, optical blueshift trapping and energy synchronisation (phase locking) of condensates are introduced. The transition from phase-locked condensates to an optically trapped condensate is investigated for a configuration of N pump spots arranged on a circle of varying diameter; differences between these two condensate types are highlighted in the discussion section. Next, two parallel pump laser lines with small separation are investigated, which create a one-dimensional waveguide with strong uniform gain. Optically guided polaritons are investigated in this configuration with respect to coherence, flow speed, temperature, and chemical potential. Observations hint that coherence arises below the condensation threshold simply from the chosen geometry of the system. The final chapter is dedicated to dipolaritons (polaritons with a static dipole moment), which form when polaritons strongly couple to indirect excitons in coupled quantum wells. In this system, quantum tunnelling of electrons can be controlled with a bias voltage, which allows tuning the dipolariton properties optically and electrically, with exciting prospects for future experiments. A conclusion and outlook section rounds off this work.
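For background on the strong coupling mentioned above (the standard two-coupled-oscillator picture, not a result of this thesis): the upper and lower polariton branches are the eigenvalues of the 2x2 Hamiltonian coupling the cavity-photon and exciton modes,

```latex
E_{\pm}(k) = \frac{E_c(k) + E_x(k)}{2}
  \pm \frac{1}{2}\sqrt{\bigl(E_c(k) - E_x(k)\bigr)^2 + (\hbar\Omega)^2}
```

where E_c and E_x are the cavity-photon and exciton dispersions and hbar*Omega is the Rabi splitting; strong coupling requires the splitting to exceed the linewidths of both modes.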
|