21. HetMigrate: Secure and Efficient Cross-architecture Process Live Migration
Bapat, Abhishek Mandar (31 January 2023)
The slowdown of Moore's Law opened a new era of computer research and development. Researchers started exploring alternatives to the traditional CPU design, and a constant increase in consumer demand led to the development of CMPs, GPUs, and FPGAs. Recent research proposed heterogeneous-ISA systems and implemented the systems software necessary to make them functional. Evaluations have shown that heterogeneous-ISA systems can offer better throughput and energy efficiency than homogeneous-ISA systems. Due to their low cost, ARM servers are now being adopted in data centers (e.g., AWS Graviton). While prior work provided the infrastructure necessary to run applications on heterogeneous-ISA systems, its dependency on a specialized kernel and a custom compiler increases deployment and maintenance costs. This thesis presents HetMigrate, a framework to live-migrate Linux processes across heterogeneous-ISA systems. HetMigrate integrates with CRIU (Checkpoint/Restore In Userspace), a Linux mechanism for process migration, and runs on stock Linux operating systems, which improves its deployability. Furthermore, HetMigrate transforms the process's state externally, without instrumenting state-transformation code into the process binaries, which has security benefits and further improves deployability. Our evaluations on the Redis server and the NAS Parallel Benchmarks show that HetMigrate takes an average of 720 ms to fully migrate a process across ISAs while maintaining its state. Moreover, live-migrating with HetMigrate reduces the attack surface of a process by up to 72.8% compared to prior work. Additionally, HetMigrate is easier to deploy in real-world systems than prior work: to demonstrate this, we ran HetMigrate in a variety of environments, including cloud instances (e.g., CloudLab), local setups virtualized with QEMU/KVM, and a server-embedded board pair. As in past work, we also evaluated the energy and throughput benefits that heterogeneous-ISA systems can offer by connecting a Xeon server to three embedded boards over the network. We observed that selectively offloading compute-intensive workloads to the embedded boards can increase energy efficiency by up to 39% and throughput by up to 52% while increasing cost by just 10%. / Master of Science / In 1965, Gordon Moore predicted that the number of transistors in a chip would double every two years. Commonly referred to as "Moore's Law," this prediction no longer holds true, and its slowdown opened a new era of computer research and development. Researchers started exploring alternatives to traditional computer designs, and a constant increase in consumer demand led to the development of portable, faster, and cheaper computers. Some researchers also started exploring the performance and energy benefits of computing systems with heterogeneous architectures. The Instruction Set Architecture (ISA) is the interface between software and hardware. Recent research proposed systems whose cores have different ISAs and implemented the software necessary to make such systems functional. Evaluations have shown that heterogeneous-ISA systems can offer better throughput and energy efficiency than traditional systems. To decrease their cost-to-performance ratio, data centers have started adopting servers of diverse architectures, making them heterogeneous in nature. While prior work provided the infrastructure necessary to run applications on heterogeneous systems, it suffered from deployability limitations.
This thesis presents HetMigrate, a framework that enables stateful program migration in heterogeneous systems. HetMigrate runs on stock open-source operating systems, which makes it easy to deploy. Our evaluations show that while HetMigrate performs two orders of magnitude slower than prior work, it is far easier to deploy.
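As a rough sketch of the checkpoint/restore flow that a CRIU-based migration builds on, the snippet below drives CRIU's command-line interface from Python. It is illustrative only: the image directory and PID handling are assumptions, and the transform() step is an empty placeholder standing in for HetMigrate's external ISA state transformation, whose actual logic is not shown here.

```python
import subprocess

def checkpoint(pid: int, image_dir: str) -> None:
    # Dump the process's memory, registers, and file descriptors into
    # CRIU image files; --shell-job handles terminal-attached processes.
    subprocess.run(["criu", "dump", "--tree", str(pid),
                    "--images-dir", image_dir, "--shell-job"], check=True)

def transform(image_dir: str) -> None:
    # Hypothetical placeholder: HetMigrate rewrites the dumped register,
    # stack, and code-pointer state for the destination ISA here,
    # externally to the process binary itself.
    pass

def restore(image_dir: str) -> None:
    # Recreate the process on the destination host from the images
    # (after they have been copied over and transformed).
    subprocess.run(["criu", "restore", "--images-dir", image_dir,
                    "--shell-job"], check=True)
```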
22. Improving dynamic analysis with data flow analysis
Chang, Walter Chochen (26 October 2010)
Many challenges in software quality can be tackled with dynamic analysis. However, these techniques are often limited in efficiency or scalability because they are applied uniformly to an entire program. In this thesis, we show that dynamic program analysis can be made significantly more efficient and scalable by first performing a static data flow analysis, so that the dynamic analysis can be selectively applied only to the important parts of the program. We apply this general principle to the design and implementation of two different systems, one for runtime security policy enforcement and the other for software test input generation.
For runtime security policy enforcement, we enforce user-defined policies using a dynamic data flow analysis that is more general and flexible than previous systems. Our system uses the user-defined policy to drive a static data flow analysis that identifies and instruments only the statements that may be involved in a security vulnerability, often eliminating the need to track most objects and greatly reducing the overhead. For taint analysis on a set of five server programs, the slowdown is only 0.65%, two orders of magnitude lower than previous taint tracking systems. Our system also has negligible overhead on file disclosure vulnerabilities, a problem that taint tracking cannot handle.
For software test case generation, we introduce the idea of targeted testing, which focuses testing effort on select parts of the program instead of treating all program paths equally. Our “Bullseye” system uses a static analysis performed with respect to user-defined “interesting points” to steer the search down certain paths, thereby finding bugs faster. We also introduce a compiler transformation that allows symbolic execution to automatically perform boundary condition testing, revealing bugs that could be missed even if the correct path is tested. For our set of 9 benchmarks, Bullseye finds bugs an average of 2.5× faster than a conventional depth-first search (DFS) and finds numerous bugs that DFS could not. In addition, our automated boundary-condition-testing transformation allows both Bullseye and DFS to find numerous bugs that they could not find before, even when all paths were explored.
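To make the static-then-dynamic principle concrete, here is a toy sketch (not the thesis's implementation) of the first half: a static pass that computes which variables can ever carry data from a taint source, so that a runtime tracker needs to instrument only those statements. The source and sink names read_request and run_query are hypothetical.

```python
import ast

SOURCE_CALLS = {"read_request"}   # assumed taint sources (illustrative)

def tainted_vars(tree: ast.Module) -> set[str]:
    # Fixed-point over simple assignments: a variable is tainted if it is
    # assigned from a source call or from an expression using a tainted name.
    tainted: set[str] = set()
    changed = True
    while changed:
        changed = False
        for node in ast.walk(tree):
            if not (isinstance(node, ast.Assign)
                    and isinstance(node.targets[0], ast.Name)):
                continue
            value = node.value
            from_source = (isinstance(value, ast.Call)
                           and isinstance(value.func, ast.Name)
                           and value.func.id in SOURCE_CALLS)
            uses_taint = any(isinstance(n, ast.Name) and n.id in tainted
                             for n in ast.walk(value))
            if (from_source or uses_taint) and node.targets[0].id not in tainted:
                tainted.add(node.targets[0].id)
                changed = True
    return tainted

code = """
x = read_request()
y = x + "!"
z = "constant"
run_query(y)
"""
print(tainted_vars(ast.parse(code)))  # {'x', 'y'} -- z never needs tracking
```

Only statements touching x and y would receive runtime checks; everything else runs at full speed, which is where the low overhead comes from.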
23. An empirical case study on Stack Overflow to explore developers’ security challenges
Rahman, Muhammad Sajidur (January 1900)
Master of Science / Department of Computing and Information Sciences / Eugene Vasserman / The unprecedented growth of ubiquitous computing infrastructure has brought new challenges for security, privacy, and trust. New problems range from mobile apps with incomprehensible permission (trust) models to the OpenSSL Heartbleed vulnerability, which disrupted the security of a large fraction of the world's web servers. Since almost all software bugs and flaws boil down to programming errors or misalignments in requirements, we need to retrace the Software Development Life Cycle (SDLC) and its supply chain to properly place security and privacy considerations and implementation plans.
Historically, security teams and developers have held divergent points of view regarding security. Security is often treated as a "consideration" or "toll gate" within the project plan rather than being built in from the early stages of project planning, development, and production. We argue that security can effectively be made everyone's business in the SDLC through a broader exploration of users and their socio-cultural contexts: gaining insight into their mental models of security and privacy and their patterns of technology use, examining why and how security practices are or are not satisfied, and then transferring those observations into new tool building and protocol/interaction design.
The overall goal of our current study is to understand the common challenges and misconceptions regarding security-related issues among developers. To investigate this issue, we conduct a mixed-method analysis of data obtained from Stack Overflow (SO), one of the most popular online Q&A sites for the software developer community to communicate, collaborate, and share information. In this study, we adopt techniques from the mining-software-repositories research paradigm and employ topic modeling to analyze security-related topics in the SO dataset. To our knowledge, our work is one of the earliest systematic attempts to mine SO data to understand the roots of the challenges, misconceptions, and deterrent factors, if any, that developers face while implementing security features during software development. We argue that a proper understanding of these issues is a necessary first step toward a "build security in" culture in the SDLC.
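As a hedged illustration of the topic-modeling step, the snippet below fits LDA over a tiny stand-in corpus with scikit-learn. The corpus, topic count, and preprocessing are assumptions for demonstration; they are not the study's actual SO dataset or pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in corpus of security-flavored post titles (illustrative only).
posts = [
    "how to store password hash bcrypt salt",
    "ssl certificate validation error in java https",
    "sql injection prepared statement parameterized query",
    "android runtime permission request denied",
    "csrf token validation fails in web form",
    "encrypt user password before database insert",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top}")   # e.g. one password/crypto topic, one web topic
```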
24. A research in SQL injection (January 2005)
Leung Siu Kuen. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 67-68). Abstracts in English and Chinese.
Contents: Abstract (p.i); Acknowledgement (p.iii)
Chapter 1: Introduction (p.1) -- 1.1 Motivation; 1.1.1 A Story (p.1); 1.2 Overview (p.2); 1.2.1 Introduction of SQL Injection (p.4); 1.3 The importance of SQL Injection (p.6); 1.4 Thesis organization (p.8)
Chapter 2: Background (p.10) -- 2.1 Flow of web applications using DBMS (p.10); 2.2 Structure of DBMS (p.12): Tables, Columns, Rows (p.12); 2.3 SQL Syntax (p.13): SELECT (p.13), AND/OR (p.14), INSERT (p.15), UPDATE (p.16), DELETE (p.17), UNION (p.18)
Chapter 3: Details of SQL Injection (p.20) -- 3.1 Basic SELECT Injection (p.20); 3.2 Advanced SELECT Injection (p.23): Single Line Comment (--) (p.23), Guessing the number of columns in a table (p.23), Guessing the column name of a table (easy case) (p.26), Guessing the column name of a table (difficult case) (p.27); 3.3 UPDATE Injection (p.29); 3.4 Other Attacks (p.30)
Chapter 4: Current Defenses (p.32) -- 4.1 Causes of SQL Injection attacks (p.32); 4.2 Defense Methods (p.33): Defensive programming (p.34), Hiding the error messages (p.35), Filtering out the dangerous characters (p.35), Using pre-compiled SQL statements (p.36), Checking for tautologies in SQL statements (p.37), Instruction set randomization (p.38), Building the query model (p.40)
Chapter 5: Proposed Solution (p.43) -- 5.1 Introduction (p.43); 5.2 Natures of SQL Injection (p.43); 5.3 Our proposed system (p.44): Features of the system (p.44), Stage 1: Checking with current signatures (p.45), Stage 2: SQL Server Query (p.45), Stage 3: Error Triggering (p.46), Stage 4: Alarm (p.50), Stage 5: Learning (p.50); 5.4 Examples (p.51): Defending against basic SELECT injection (p.52), Defending against advanced SELECT injection (p.52), Defending against UPDATE injection (p.57); 5.5 Comparison (p.59)
Chapter 6: Conclusion (p.62)
Appendix A: Commonly used table and column names (p.64) -- A.1 Table names for system management (p.64); A.2 Column names for password storage (p.65); A.3 Column names for username storage (p.66)
Bibliography (p.67)
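The basic SELECT injection of Chapter 3 and the pre-compiled (parameterized) statement defense of Chapter 4 fit in a few lines. The sketch below uses Python's sqlite3 module as a stand-in DBMS, an assumption made for the demo; the thesis itself is not tied to SQLite.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name, password = "alice", "' OR '1'='1"   # classic tautology payload

# Vulnerable: attacker text is spliced into the SQL, so the WHERE clause
# becomes (name='alice' AND password='') OR '1'='1' -- true for every row.
q = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
print(conn.execute(q).fetchall())   # [('alice', 's3cret')] -- login bypassed

# Safe: placeholders are pre-compiled, so the payload stays plain data.
q = "SELECT * FROM users WHERE name = ? AND password = ?"
print(conn.execute(q, (name, password)).fetchall())   # [] -- attack fails
```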
25. Dynamic Application Level Security Sensors
Rathgeb, Christopher Thomas (01 May 2010)
The battle for cyber supremacy is a cat-and-mouse game: evolving threats from internal and external sources make it difficult to protect critical systems. Given the diverse and high-risk nature of these threats, there is a need for robust techniques that can quickly adapt and address this evolution. Existing tools such as Splunk, Snort, and Bro help IT administrators defend their networks by actively parsing network traffic or system log data. These tools are well developed and have proven to be a formidable defense against many cyberattacks. However, they are vulnerable to zero-day attacks, slow attacks, and attacks that originate from within. Should an attacker or some form of malware make it through these barriers and onto a system, the next layer of defense lies on the host. Host-level defenses include system integrity verifiers, virus scanners, and event log parsers. Many of these tools work by seeking specific attack signatures or looking for anomalous events. The defenses at the network and host levels are similar in nature: first, sensors collect data from the security domain; second, the data is processed; and third, a response is crafted based on the processing. The application-level security domain lacks this three-step process. Application-level defenses focus on secure coding practices and vulnerability patching, which has proven ineffective. The work presented in this thesis uses a technique commonly employed by malware, dynamic-link library (DLL) injection, to develop dynamic application-level security sensors that can extract fine-grained data at runtime. This data can then be processed to provide a stronger application-level defense by shrinking the vulnerability window. Chapters 5 and 6 present proof-of-concept sensors and describe their development in detail.
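For context, here is a minimal sketch of the classic CreateRemoteThread-plus-LoadLibraryA injection the thesis repurposes, written with Python's ctypes (Windows-only, error handling omitted). The PID and DLL path are assumptions, and the actual sensor logic would live inside the injected DLL, which is not shown.

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Declare signatures so handles and pointers survive on 64-bit Windows.
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = [wintypes.DWORD, wintypes.BOOL, wintypes.DWORD]
kernel32.VirtualAllocEx.restype = ctypes.c_void_p
kernel32.VirtualAllocEx.argtypes = [wintypes.HANDLE, ctypes.c_void_p,
                                    ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD]
kernel32.WriteProcessMemory.restype = wintypes.BOOL
kernel32.WriteProcessMemory.argtypes = [wintypes.HANDLE, ctypes.c_void_p,
                                        ctypes.c_char_p, ctypes.c_size_t, ctypes.c_void_p]
kernel32.GetModuleHandleW.restype = wintypes.HMODULE
kernel32.GetModuleHandleW.argtypes = [wintypes.LPCWSTR]
kernel32.GetProcAddress.restype = ctypes.c_void_p
kernel32.GetProcAddress.argtypes = [wintypes.HMODULE, ctypes.c_char_p]
kernel32.CreateRemoteThread.restype = wintypes.HANDLE
kernel32.CreateRemoteThread.argtypes = [wintypes.HANDLE, ctypes.c_void_p, ctypes.c_size_t,
                                        ctypes.c_void_p, ctypes.c_void_p,
                                        wintypes.DWORD, ctypes.c_void_p]

PROCESS_ALL_ACCESS = 0x001F0FFF
MEM_COMMIT_RESERVE = 0x3000   # MEM_COMMIT | MEM_RESERVE
PAGE_READWRITE = 0x04

def inject_sensor(pid: int, dll_path: str) -> None:
    path = dll_path.encode() + b"\x00"
    proc = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    # Copy the DLL path into the target's address space...
    mem = kernel32.VirtualAllocEx(proc, None, len(path),
                                  MEM_COMMIT_RESERVE, PAGE_READWRITE)
    kernel32.WriteProcessMemory(proc, mem, path, len(path), None)
    # ...then start a remote thread at LoadLibraryA with that path as its
    # argument; kernel32 maps at the same base in every process, so the
    # local address of LoadLibraryA is valid in the target too.
    loadlib = kernel32.GetProcAddress(kernel32.GetModuleHandleW("kernel32.dll"),
                                      b"LoadLibraryA")
    kernel32.CreateRemoteThread(proc, None, 0, loadlib, mem, 0, None)
```

Once loaded, the DLL's entry point runs inside the target and can hook functions or read in-process state, which is the fine-grained visibility the sensors rely on.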
26. A Model and Implementation of a Security plug-in for the Software Life Cycle
Ardi, Shanai (January 2008)
Currently, security is frequently considered late in the software life cycle. It is often bolted on late in development, or even during deployment or maintenance, through activities such as add-on security software and penetration-and-patch maintenance. Even if software developers aim to incorporate security into their products from the beginning of the software life cycle, they face an exhaustive amount of ad hoc, unstructured information without any practical guidance on how and why this information should be used and what the costs and benefits of using it are. This is due to a lack of structured methods.

In this thesis we present a model for secure software development and an implementation of a security plug-in that deploys this model in the software life cycle. The model is a structured unified process, named S3P (Sustainable Software Security Process), designed to be easily adaptable to any software development process. S3P provides the formalism required to identify the causes of vulnerabilities and the mitigation techniques that address those causes so as to prevent the vulnerabilities. We present a prototype of the security plug-in implemented for the OpenUP/Basic development process in the Eclipse Process Framework, together with the results of its evaluation. The work in this thesis is a first step toward a general framework for introducing security into the software life cycle and supporting software process improvements to prevent the recurrence of software vulnerabilities. / Report code: LiU-Tek-Lic-2008:11.
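The abstract does not give S3P's notation, but its core idea of tying vulnerability causes to mitigations can be sketched as a small data model; the identifier, phases, and activities below are invented examples, not S3P's actual content.

```python
from dataclasses import dataclass, field

@dataclass
class Cause:
    description: str
    # life-cycle phase -> mitigation activity applied in that phase
    mitigations: dict[str, str] = field(default_factory=dict)

@dataclass
class Vulnerability:
    identifier: str
    causes: list[Cause] = field(default_factory=list)

buffer_overflow = Vulnerability("CWE-120", causes=[
    Cause("unchecked copy into a fixed-size buffer",
          {"design": "specify maximum lengths for all external inputs",
           "implementation": "use bounded copy APIs such as snprintf"}),
])

# A plug-in could surface these per-phase activities inside the process tool:
for cause in buffer_overflow.causes:
    for phase, activity in cause.mitigations.items():
        print(f"{buffer_overflow.identifier} [{phase}]: {activity}")
```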
28. USING COMPLEXITY, COUPLING, AND COHESION METRICS AS EARLY INDICATORS OF VULNERABILITIES
Chowdhury, Istehad (28 September 2009)
Software security failures are common, and the problem is growing. A vulnerability is a weakness in software that, when exploited, causes a security failure. It is difficult to detect vulnerabilities before they manifest as security failures during the operational stage of the software, because security concerns are often not addressed, or not known, sufficiently early in the Software Development Life Cycle (SDLC). Complexity, coupling, and cohesion (CCC) related software metrics can be measured during the early phases of software development, such as design and coding. Although these metrics have been successfully employed to indicate software faults in general, the relationships between CCC metrics and vulnerabilities have not yet been extensively investigated. If empirical relationships can be discovered between CCC metrics and vulnerabilities, these metrics could aid software developers in taking proactive action against potential vulnerabilities.
In this thesis, we investigate whether CCC metrics can serve as early indicators of software vulnerabilities. We conduct an extensive case study on several releases of Mozilla Firefox to provide empirical evidence on how vulnerabilities relate to complexity, coupling, and cohesion. We mine the vulnerability databases, bug databases, and version archives of Mozilla Firefox to map vulnerabilities to software entities. We find that some of the CCC metrics are correlated with vulnerabilities at a statistically significant level. Since different metrics become available at different development phases, we further examine the correlations to determine which level (design or code) of CCC metrics is the better indicator of vulnerabilities. We also observe that the correlation patterns are stable across multiple releases. These observations imply that the metrics can be dependably used as early indicators of vulnerabilities in software.
We then present a framework to automatically predict vulnerabilities based on CCC metrics. To build vulnerability predictors, we consider four alternative data mining and statistical techniques – C4.5 decision trees, random forests, logistic regression, and naïve Bayes – and compare their prediction performance. We are able to predict a majority of the vulnerability-prone files in Mozilla Firefox with tolerable false positive rates. Moreover, predictors built from past releases can reliably predict the likelihood of vulnerabilities in future releases. The experimental results indicate that structural information from the non-security realm, such as complexity, coupling, and cohesion, is useful in vulnerability prediction. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009-09-24 17:31:36.581
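A hedged sketch of that experimental setup using scikit-learn stand-ins for the four techniques (DecisionTreeClassifier is CART rather than C4.5 proper); the per-file metric values and vulnerability labels below are synthetic placeholders, not the Mozilla Firefox data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))      # columns: complexity, coupling, cohesion
# Synthetic ground truth: high complexity/coupling, low cohesion -> vulnerable.
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=300) > 1).astype(int)

models = [("decision tree", DecisionTreeClassifier(random_state=0)),
          ("random forest", RandomForestClassifier(random_state=0)),
          ("logistic regression", LogisticRegression()),
          ("naive Bayes", GaussianNB())]
for label, clf in models:
    recall = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
    print(f"{label}: mean recall = {recall:.2f}")
```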
29. DESERVE: A FRAMEWORK FOR DETECTING PROGRAM SECURITY VULNERABILITY EXPLOITATIONS
Mohosina, Amatul (20 September 2011)
It is difficult to develop a program that is completely free of vulnerabilities. Despite the application of many approaches to securing programs, vulnerability exploitations occur in the real world in large numbers. Exploitation of a vulnerability may corrupt memory and program state, lead to denial of service and authorization bypass, give attackers access to authorization information, and leak sensitive information. Monitoring at the program-code level can be a way to detect vulnerability exploitation at runtime. In this work, we propose a monitor-embedding framework, DESERVE (a framework for DEtecting program SEcuRity Vulnerability Exploitations). DESERVE identifies exploitable statements in source code using static backward slicing and embeds the code necessary to detect attacks. During deployment, the enhanced programs execute exploitable statements in a separate test environment. Unlike traditional monitors that extract and store program-state information and compare it with exploitation-free program states, our approach does not need to save state information. Moreover, the slicing technique allows us to avoid tracking fine-grained information about the runtime program environment, such as input flow and memory state. We implement DESERVE to detect buffer overflow, SQL injection, and cross-site scripting attacks. We evaluate our approach on real-world programs implemented in C and PHP. The results show that the approach can detect some well-known attacks while imposing negligible runtime overhead. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-09-19 19:04:28.423
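As a toy illustration of the slicing step (not DESERVE's implementation), the sketch below starts from the variables used at an exploitable statement and walks backwards, keeping only statements that can influence them; the three-statement program is invented.

```python
def backward_slice(stmts, criterion_vars):
    # stmts: list of (assigned_var, vars_read) in program order.
    # Returns the statements that can influence the criterion variables.
    relevant = set(criterion_vars)
    kept = []
    for lhs, reads in reversed(stmts):
        if lhs in relevant:
            relevant |= reads        # whatever feeds lhs is now relevant
            kept.append((lhs, reads))
    return list(reversed(kept))

program = [
    ("n",   {"argv"}),   # n   = length taken from user input
    ("msg", set()),      # msg = constant greeting
    ("buf", {"n"}),      # buf = copy of n bytes  <- exploitable statement
]
print(backward_slice(program, {"buf"}))
# [('n', {'argv'}), ('buf', {'n'})] -- 'msg' is irrelevant to the exploit
```

Only the sliced statements need to run in the separate test environment, which is why the monitor can skip whole-program state tracking.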
30. A Framework for Security Requirements Elicitation
Islam, Gibrail; Qureshi, Murtaza Ali (January 2012)
Context: Security considerations are typically incorporated in the later stages of development, as an afterthought. Researchers place security in software systems under the category of non-functional requirements. Understanding the security needs of a system requires considerable knowledge of its assets, data security, integrity, confidentiality, and availability of services. Countermeasures against software attacks are also a security need of a software system. Incorporating security at the earliest stage, i.e., requirements gathering, helps build secure software systems from the start. For that purpose, researchers have proposed different requirements elicitation techniques, categorized as formal or informal on the basis of the finiteness and clarity of their activities. Objectives: The limitations of formal methods and the lack of systematic approaches in informal elicitation techniques make it difficult to rely on a single technique for security requirements elicitation. We therefore decided to combine widely used formal and informal security requirements elicitation techniques, using the strengths of each to mitigate the other's weaknesses. The basic idea of our research was to integrate an informal technique with a formal one and propose a flexible framework with some level of formality in its steps. Methods: As a pre-study for the thesis, we conducted a systematic literature review asking "which are the widely used security requirements elicitation techniques?", searching the online databases ISI, IEEE Xplore, ACM, Springer, Inspec, and Compendex. We also conducted a literature review of frameworks used in industry for security requirements elicitation. After proposing a security requirements elicitation framework, we conducted an experiment comparing the results from our framework with those of CLASP and Misuse Cases. Results: We performed two types of analysis on the experimental results: vulnerability analysis and requirements analysis with respect to a security baseline. The vulnerability analysis shows that the proposed framework mitigates more vulnerabilities than CLASP and Misuse Cases. The requirements analysis shows that the proposed framework, unlike CLASP and Misuse Cases, covers all the security baseline features. Conclusions: The framework we propose by combining CLASP, Misuse Cases, and Secure TROPOS inherits the strengths of all three security requirements elicitation techniques. To make it even more effective, we also include the security requirements categorization by Bogale and Ahmed [11]. The framework is flexible, comprises fifteen steps to elicit security requirements, and additionally allows iterations to improve security in a system.