21

Investigating the Reproducibility of NPM packages

Goswami, Pronnoy 19 May 2020 (has links)
The meteoric increase in the popularity of JavaScript and a large developer community has led to the emergence of a large ecosystem of third-party packages available via the Node Package Manager (NPM) repository, which contains over one million published packages and witnesses a billion daily downloads. Most developers download these pre-compiled published packages from the NPM repository instead of building them from the available source code. Unfortunately, recent articles have revealed repackaging attacks on NPM packages. To carry out such attacks, attackers primarily follow three steps – (1) download the source code of a highly depended-upon NPM package, (2) inject malicious code, and (3) publish the modified package either as a misnamed package (i.e., a typo-squatting attack) or as the official package on the NPM repository using compromised maintainer credentials. These attacks highlight the need to verify the reproducibility of NPM packages. Reproducible Build is a concept that allows the verification of build artifacts for pre-compiled packages by re-building the packages using the same build environment configuration documented by the package maintainers. This motivates us to conduct an empirical study (1) to examine the reproducibility of NPM packages, (2) to assess the influence of any non-reproducible packages, and (3) to explore the reasons for non-reproducibility. Firstly, we downloaded all versions/releases of the 226 most-depended-upon NPM packages and built each version from the source code available on GitHub. Secondly, we applied diffoscope, a differencing tool, to compare the versions we built against the versions downloaded from the NPM repository. Finally, we conducted a systematic investigation of the reported differences. At least one version of 65 packages was found to be non-reproducible. Moreover, these non-reproducible packages are downloaded millions of times per week, which could impact a large number of users. Based on our manual inspection and static analysis, most reported differences were semantically equivalent but syntactically different. Such differences result from non-deterministic factors in the build process. We also infer that semantic differences are introduced by shortcomings in JavaScript uglifiers. Our research reveals the challenges of verifying the reproducibility of NPM packages with existing tools, identifies the points of failure using case studies, and sheds light on future directions for developing better verification tools. / Master of Science / Software packages are distributed as pre-compiled binaries to facilitate software development. There are package repositories for various programming languages, such as NPM (JavaScript), pip (Python), and Maven (Java). Developers install these pre-compiled packages in their projects to implement certain functionality. These package repositories also allow developers to publish new packages, helping the developer community reduce delivery time and enhance the quality of software products. Unfortunately, recent articles have revealed an increasing number of attacks on package repositories. Moreover, developers trust the pre-compiled binaries, which may contain malicious code. To address this challenge, we conduct an empirical investigation to analyze the reproducibility of NPM packages in the JavaScript ecosystem.
Reproducible Builds is a concept that allows any individual to verify build artifacts by replicating the build process of software packages. For instance, if developers could verify that the build artifacts of the pre-compiled software packages available in the NPM repository are identical to the ones generated when they build that specific package themselves, they could become aware of and mitigate vulnerabilities in the software packages. The build process is usually described in configuration files such as package.json and DOCKERFILE. We chose the NPM registry for our study for three primary reasons – (1) it is the largest package repository, (2) JavaScript is the most widely used programming language, and (3) no prior dataset or investigation of this kind has been produced by researchers. We took a two-step approach in our study – (1) dataset collection, and (2) source-code differencing for each pair of software package versions. For the dataset-collection phase, we downloaded all available releases/versions of 226 popularly used NPM packages, and for the code-differencing phase, we used an off-the-shelf tool called diffoscope. We revealed some interesting findings. Firstly, at least one version of 65 packages was found to be non-reproducible, and these packages have millions of downloads per week. Secondly, we found 50 package versions to have divergent program semantics, which highlights potential vulnerabilities in the source code and improper build practices. Thirdly, we found that the uglification of JavaScript code introduces non-determinism into the build process. Our research sheds light on the challenges of verifying the reproducibility of NPM packages with current state-of-the-art tools and the need to develop better verification tools in the future. To conclude, we believe that our work is a step towards realizing the reproducibility of NPM packages and making the community aware of the implications of non-reproducible build artifacts.
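The core of the verification step described above, comparing a rebuilt package against the published artifact, can be sketched in a few lines. The snippet below is only an illustrative stand-in for diffoscope, assuming two hypothetical directories (one extracted from the registry tarball, one from a local rebuild); it reports which files differ but says nothing about whether a difference is semantic or merely syntactic.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def compare_trees(registry_dir: str, rebuilt_dir: str) -> None:
    """Report files that differ between the published and the rebuilt package.

    A crude stand-in for diffoscope: it only detects *that* files differ,
    not why, and performs no normalization of timestamps or archive order.
    """
    registry, rebuilt = Path(registry_dir), Path(rebuilt_dir)
    reg_files = {p.relative_to(registry) for p in registry.rglob("*") if p.is_file()}
    reb_files = {p.relative_to(rebuilt) for p in rebuilt.rglob("*") if p.is_file()}

    for missing in sorted(reg_files ^ reb_files):
        print(f"only in one tree: {missing}")
    for rel in sorted(reg_files & reb_files):
        if file_digest(registry / rel) != file_digest(rebuilt / rel):
            print(f"differs: {rel}")

# Hypothetical usage: directories extracted from the tarball fetched with
# `npm pack <package>` and from a local `npm ci && npm pack` rebuild.
compare_trees("lodash-4.17.21-registry", "lodash-4.17.21-rebuilt")
```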
22

Improving dynamic analysis with data flow analysis

Chang, Walter Chochen 26 October 2010 (has links)
Many challenges in software quality can be tackled with dynamic analysis. However, these techniques are often limited in their efficiency or scalability because they are applied uniformly to an entire program. In this thesis, we show that dynamic program analysis can be made significantly more efficient and scalable by first performing a static data flow analysis so that the dynamic analysis can be selectively applied only to important parts of the program. We apply this general principle to the design and implementation of two different systems, one for runtime security policy enforcement and the other for software test input generation. For runtime security policy enforcement, we enforce user-defined policies using a dynamic data flow analysis that is more general and flexible than previous systems. Our system uses the user-defined policy to drive a static data flow analysis that identifies and instruments only the statements that may be involved in a security vulnerability, often eliminating the need to track most objects and greatly reducing the overhead. For taint analysis on a set of five server programs, the slowdown is only 0.65%, two orders of magnitude lower than previous taint tracking systems. Our system also has negligible overhead on file disclosure vulnerabilities, a problem that taint tracking cannot handle. For software test case generation, we introduce the idea of targeted testing, which focuses testing effort on select parts of the program instead of treating all program paths equally. Our “Bullseye” system uses a static analysis performed with respect to user-defined “interesting points” to steer the search down certain paths, thereby finding bugs faster. We also introduce a compiler transformation that allows symbolic execution to automatically perform boundary condition testing, revealing bugs that could be missed even if the correct path is tested. For our set of 9 benchmarks, Bullseye finds bugs an average of 2.5× faster than a conventional depth-first search and finds numerous bugs that DFS could not. In addition, our automated boundary condition testing transformation allows both Bullseye and depth-first search to find numerous bugs that they could not find before, even when all paths were explored.
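As a rough illustration of the dynamic data flow (taint) tracking that the thesis accelerates, the toy sketch below attaches a taint mark to values at a source, propagates it through string concatenation, and checks it at a sink. It is a deliberately simplified Python example, not the compiler-based system described in the abstract, and the source and sink names are invented.

```python
class Tainted(str):
    """A string subclass that carries a taint flag through concatenation.

    A real system must also handle slicing, formatting, encoding, and many
    other propagation channels; this class covers only `+`.
    """

    def __add__(self, other):
        return Tainted(str(self) + str(other))

    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def source(user_input: str) -> Tainted:
    # Data arriving from the outside world is marked tainted.
    return Tainted(user_input)

def sink(command: str) -> None:
    # A selective scheme would compile this check in only at statements the
    # static analysis found reachable from a source.
    if isinstance(command, Tainted):
        print(f"BLOCKED: tainted data reached a sensitive sink: {command!r}")
        return
    print(f"executing: {command}")

user = source("alice; rm -rf /")   # attacker-controlled input, marked tainted
sink("echo hello")                 # untainted value: allowed
sink(user + " && echo done")       # taint propagates through concatenation: blocked
```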
23

An empirical case study on Stack Overflow to explore developers’ security challenges

Rahman, Muhammad Sajidur January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Eugene Vasserman / The unprecedented growth of ubiquitous computing infrastructure has brought new challenges for security, privacy, and trust. New problems range from mobile apps with incomprehensible permission (trust) models to the OpenSSL Heartbleed vulnerability, which disrupted the security of a large fraction of the world's web servers. As almost all software bugs and flaws boil down to programming errors or misaligned requirements, we need to trace back through the Software Development Life Cycle (SDLC) and supply chain to properly place security and privacy considerations and implementation plans. Historically, there has been a divergence between the points of view of security teams and developers regarding security. Security is often thought of as a "consideration" or "toll gate" within the project plan rather than being built in from the early stages of project planning, development, and production cycles. We argue that security can effectively be made everyone's business in the SDLC through a broader exploration of users and their socio-cultural contexts: gaining insight into their mental models of security and privacy and their patterns of technology use, examining why and how security practices are or are not satisfied, and then transferring those observations into new tool building and protocol/interaction design. The overall goal of our current study is to understand the common challenges and/or misconceptions regarding security-related issues among developers. To investigate this issue, we conduct a mixed-method analysis of data obtained from Stack Overflow (SO), one of the most popular online Q&A sites for the software developer community to communicate, collaborate, and share information with one another. In this study, we adopt techniques from the mining-software-repositories research paradigm and employ topic modeling to analyze security-related topics in the SO dataset. To our knowledge, our work in SO data mining is one of the earliest systematic attempts to understand the roots of the challenges, misconceptions, and deterrent factors, if any, that developers face while trying to implement security features during software development. We argue that a proper understanding of these issues is a necessary first step towards a "build security in" culture in the SDLC.
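A minimal sketch of the topic-modeling step, assuming scikit-learn and a handful of invented post titles in place of the actual Stack Overflow dataset, might look as follows; it is not the author's pipeline, only an illustration of how LDA surfaces security-related themes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical stand-ins for security-tagged Stack Overflow post titles.
posts = [
    "How do I store passwords securely with bcrypt?",
    "Why is my SSL certificate rejected by the client?",
    "Preventing SQL injection in prepared statements",
    "CSRF token validation fails after login",
    "Hashing vs encrypting user passwords",
    "Configuring TLS ciphers on an nginx server",
]

# Bag-of-words representation of the posts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)

# Fit a small LDA model and print the top terms per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {idx}: {', '.join(top_terms)}")
```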
24

A research in SQL injection.

January 2005 (has links)
Leung Siu Kuen.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 67-68).
Abstracts in English and Chinese.

Contents:
Abstract --- p.i
Acknowledgement --- p.iii
Chapter 1 --- Introduction --- p.1
  1.1 --- Motivation --- p.1
    1.1.1 --- A Story --- p.1
  1.2 --- Overview --- p.2
    1.2.1 --- Introduction of SQL Injection --- p.4
  1.3 --- The importance of SQL Injection --- p.6
  1.4 --- Thesis organization --- p.8
Chapter 2 --- Background --- p.10
  2.1 --- Flow of web applications using DBMS --- p.10
  2.2 --- Structure of DBMS --- p.12
    2.2.1 --- Tables --- p.12
    2.2.2 --- Columns --- p.12
    2.2.3 --- Rows --- p.12
  2.3 --- SQL Syntax --- p.13
    2.3.1 --- SELECT --- p.13
    2.3.2 --- AND/OR --- p.14
    2.3.3 --- INSERT --- p.15
    2.3.4 --- UPDATE --- p.16
    2.3.5 --- DELETE --- p.17
    2.3.6 --- UNION --- p.18
Chapter 3 --- Details of SQL Injection --- p.20
  3.1 --- Basic SELECT Injection --- p.20
  3.2 --- Advanced SELECT Injection --- p.23
    3.2.1 --- Single Line Comment (--) --- p.23
    3.2.2 --- Guessing the number of columns in a table --- p.23
    3.2.3 --- Guessing the column name of a table (Easy one) --- p.26
    3.2.4 --- Guessing the column name of a table (Difficult one) --- p.27
  3.3 --- UPDATE Injection --- p.29
  3.4 --- Other Attacks --- p.30
Chapter 4 --- Current Defenses --- p.32
  4.1 --- Causes of SQL Injection attacks --- p.32
  4.2 --- Defense Methods --- p.33
    4.2.1 --- Defensive Programming --- p.34
    4.2.2 --- Hiding the error messages --- p.35
    4.2.3 --- Filtering out the dangerous characters --- p.35
    4.2.4 --- Using pre-compiled SQL statements --- p.36
    4.2.5 --- Checking for tautologies in SQL statements --- p.37
    4.2.6 --- Instruction set randomization --- p.38
    4.2.7 --- Building the query model --- p.40
Chapter 5 --- Proposed Solution --- p.43
  5.1 --- Introduction --- p.43
  5.2 --- Natures of SQL Injection --- p.43
  5.3 --- Our proposed system --- p.44
    5.3.1 --- Features of the system --- p.44
    5.3.2 --- Stage 1 - Checking with current signatures --- p.45
    5.3.3 --- Stage 2 - SQL Server Query --- p.45
    5.3.4 --- Stage 3 - Error Triggering --- p.46
    5.3.5 --- Stage 4 - Alarm --- p.50
    5.3.6 --- Stage 5 - Learning --- p.50
  5.4 --- Examples --- p.51
    5.4.1 --- Defending Basic SELECT Injection --- p.52
    5.4.2 --- Defending Advanced SELECT Injection --- p.52
    5.4.3 --- Defending UPDATE Injection --- p.57
  5.5 --- Comparison --- p.59
Chapter 6 --- Conclusion --- p.62
Appendix A --- Commonly used table and column names --- p.64
  A.1 --- Commonly used table names for system management --- p.64
  A.2 --- Commonly used column names for password storage --- p.65
  A.3 --- Commonly used column names for username storage --- p.66
Bibliography --- p.67
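The tautology-style SELECT injection covered in Chapter 3, and the pre-compiled (parameterized) statement defense listed in Chapter 4.2.4, can be demonstrated with a short, self-contained example. The sketch below uses an in-memory SQLite database and an invented users table; it illustrates the general attack and defense rather than any code from the thesis.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_name = "nobody' OR '1'='1"   # classic tautology payload
attacker_pass = "wrong' OR '1'='1"

# Vulnerable: user input is spliced directly into the SQL string, so the
# WHERE clause collapses to a tautology and matches every row.
query = (f"SELECT * FROM users WHERE name = '{attacker_name}' "
         f"AND password = '{attacker_pass}'")
print("vulnerable query returns:", conn.execute(query).fetchall())

# Defended: a pre-compiled (parameterized) statement treats the payload as
# literal data, so no row matches.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?",
    (attacker_name, attacker_pass),
).fetchall()
print("parameterized query returns:", rows)
```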
25

Dynamic Application Level Security Sensors

Rathgeb, Christopher Thomas 01 May 2010 (has links)
The battle for cyber supremacy is a cat-and-mouse game: evolving threats from internal and external sources make it difficult to protect critical systems. Given the diverse and high-risk nature of these threats, there is a need for robust techniques that can quickly adapt and address this evolution. Existing tools such as Splunk, Snort, and Bro help IT administrators defend their networks by actively parsing through network traffic or system log data. These tools have been thoroughly developed and have proven to be a formidable defense against many cyberattacks. However, they are vulnerable to zero-day attacks, slow attacks, and attacks that originate from within. Should an attacker or some form of malware make it through these barriers and onto a system, the next layer of defense lies on the host. Host-level defenses include system integrity verifiers, virus scanners, and event log parsers. Many of these tools work by seeking specific attack signatures or looking for anomalous events. The defenses at the network and host level are similar in nature: first, sensors collect data from the security domain; second, the data is processed; and third, a response is crafted based on the processing. The application-level security domain lacks this three-step process. Application-level defenses focus on secure coding practices and vulnerability patching, which is ineffective. The work presented in this thesis uses a technique commonly employed by malware, dynamic-link library (DLL) injection, to develop dynamic application-level security sensors that can extract fine-grained data at runtime. This data can then be processed to provide stronger application-level defense by shrinking the vulnerability window. Chapters 5 and 6 give proof-of-concept sensors and describe the process of developing the sensors in detail.
26

A Model and Implementation of a Security plug-in for the Software Life Cycle

Ardi, Shanai January 2008 (has links)
<p>Currently, security is frequently considered late in software life cycle. It is often bolted on late in development, or even during deployment or maintenance, through activities such as add-on security software and penetration-and-patch maintenance. Even if software developers aim to incorporate security into their products from the beginning of the software life cycle, they face an exhaustive amount of ad hoc unstructured information without any practical guidance on how and why this information should be used and what the costs and benefits of using it are. This is due to a lack of structured methods.</p><p>In this thesis we present a model for secure software development and implementation of a security plug-in that deploys this model in software life cycle. The model is a structured unified process, named S3P (Sustainable Software Security Process) and is designed to be easily adaptable to any software development process. S3P provides the formalism required to identify the causes of vulnerabilities and the mitigation techniques that address these causes to prevent vulnerabilities. We present a prototype of the security plug-in implemented for the OpenUP/Basic development process in Eclipse Process Framework. We also present the results of the evaluation of this plug-in. The work in this thesis is a first step towards a general framework for introducing security into the software life cycle and to support software process improvements to prevent recurrence of software vulnerabilities.</p> / Report code: LiU-Tek-Lic-2008:11.
28

USING COMPLEXITY, COUPLING, AND COHESION METRICS AS EARLY INDICATORS OF VULNERABILITIES

Chowdhury, Istehad 28 September 2009 (has links)
Software security failures are common and the problem is growing. A vulnerability is a weakness in the software that, when exploited, causes a security failure. It is difficult to detect vulnerabilities until they manifest themselves as security failures in the operational stage of the software, because security concerns are often not addressed or known sufficiently early during the Software Development Life Cycle (SDLC). Complexity, coupling, and cohesion (CCC) related software metrics can be measured during the early phases of software development, such as design or coding. Although these metrics have been successfully employed to indicate software faults in general, the relationships between CCC metrics and vulnerabilities have not yet been extensively investigated. If empirical relationships can be discovered between CCC metrics and vulnerabilities, these metrics could help software developers take proactive action against potential vulnerabilities in software. In this thesis, we investigate whether CCC metrics can be utilized as early indicators of software vulnerabilities. We conduct an extensive case study on several releases of Mozilla Firefox to provide empirical evidence on how vulnerabilities are related to complexity, coupling, and cohesion. We mine the vulnerability databases, bug databases, and version archives of Mozilla Firefox to map vulnerabilities to software entities. We find that some of the CCC metrics are correlated with vulnerabilities at a statistically significant level. Since different metrics are available at different development phases, we further examine the correlations to determine whether design-level or code-level CCC metrics are better indicators of vulnerabilities. We also observe that the correlation patterns are stable across multiple releases. These observations imply that the metrics can be dependably used as early indicators of vulnerabilities in software. We then present a framework to automatically predict vulnerabilities based on CCC metrics. To build vulnerability predictors, we consider four alternative data mining and statistical techniques – C4.5 Decision Tree, Random Forests, Logistic Regression, and Naïve Bayes – and compare their prediction performance. We are able to predict the majority of the vulnerability-prone files in Mozilla Firefox with tolerable false positive rates. Moreover, predictors built from past releases can reliably predict the likelihood of vulnerabilities in future releases. The experimental results indicate that structural information from the non-security realm, such as complexity, coupling, and cohesion, is useful in vulnerability prediction. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009-09-24 17:31:36.581
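As a sketch of how such a predictor comparison might be set up, the snippet below trains the four learner families named in the abstract on synthetic per-file metric data using scikit-learn (with a CART decision tree standing in for C4.5). The features, labels, and scoring choice are invented for illustration and do not reproduce the Firefox dataset or the thesis's results.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-file CCC metrics:
# [cyclomatic complexity, fan-in coupling, fan-out coupling, lack of cohesion]
X = rng.normal(size=(400, 4))
# Invented rule: files with high complexity and coupling are "vulnerability-prone".
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=400) > 1).astype(int)

models = {
    "Decision tree (CART, in place of C4.5)": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "Logistic regression": LogisticRegression(),
    "Naive Bayes": GaussianNB(),
}

# Compare mean recall of vulnerability-prone files under 5-fold cross-validation.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="recall")
    print(f"{name}: mean recall {scores.mean():.2f}")
```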
29

DESERVE: A FRAMEWORK FOR DETECTING PROGRAM SECURITY VULNERABILITY EXPLOITATIONS

MOHOSINA, AMATUL 20 September 2011 (has links)
It is difficult to develop a program that is completely free from vulnerabilities. Despite the application of many approaches to securing programs, vulnerability exploitations occur in the real world in large numbers. Exploitations of vulnerabilities may corrupt memory spaces and program states, lead to denial of service and authorization bypassing, give attackers access to authorization information, and leak sensitive information. Monitoring at the program code level can be a way of detecting vulnerability exploitation at runtime. In this work, we propose a monitor embedding framework, DESERVE (a framework for DEtecting program SEcuRity Vulnerability Exploitations). DESERVE identifies exploitable statements from source code based on static backward slicing and embeds the code necessary to detect attacks. During the deployment stage, the enhanced programs execute exploitable statements in a separate test environment. Unlike traditional monitors that extract and store program state information to compare with vulnerability-free program states to detect exploitation, our approach does not need to save state information. Moreover, the slicing technique allows us to avoid tracking fine-grained information about the runtime program environment, such as input flow and memory state. We implement DESERVE for detecting buffer overflow, SQL injection, and cross-site scripting attacks. We evaluate our approach on real-world programs implemented in the C and PHP languages. The results show that the approach can detect some well-known attacks. Moreover, the approach imposes negligible runtime overhead. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-09-19 19:04:28.423
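The statement-selection step rests on static backward slicing. The toy function below computes a backward slice over a hand-written statement-dependence map (statement id mapped to the ids it reads from); it conveys the idea only and is not DESERVE's actual analysis over C or PHP source.

```python
def backward_slice(deps: dict[int, set[int]], criterion: int) -> set[int]:
    """Return all statements that the slicing criterion transitively depends on."""
    in_slice, worklist = {criterion}, [criterion]
    while worklist:
        stmt = worklist.pop()
        for dep in deps.get(stmt, set()):
            if dep not in in_slice:
                in_slice.add(dep)
                worklist.append(dep)
    return in_slice

# Hypothetical dependence map for a five-statement program in which
# statement 5 is a sensitive sink (e.g. a query execution).
deps = {
    2: {1},       # s2 reads a value defined at s1
    4: {2, 3},    # s4 combines s2 and s3
    5: {4},       # s5 (the sink) consumes s4
}
print(sorted(backward_slice(deps, criterion=5)))   # -> [1, 2, 3, 4, 5]
```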
30

A Framework for Security Requirements Elicitation

Islam, Gibrail, Qureshi, Murtaza Ali January 2012 (has links)
Context: Security considerations are typically incorporated in the later stages of development as an afterthought. Researchers place security in software systems under the category of non-functional requirements. Understanding the security needs of a system requires considerable knowledge of assets, data security, integrity, confidentiality, and availability of services. Countermeasures against software attacks are also a security need of a software system. Incorporating security in the earliest stage, i.e. requirements gathering, helps build secure software systems from the start. For that purpose, researchers have proposed different requirements elicitation techniques. These techniques are categorized into formal and informal techniques on the basis of the finiteness and clarity of their activities. Objectives: Limitations of formal methods and the lack of systematic approaches in informal elicitation techniques make it difficult to rely on a single technique for security requirements elicitation. Therefore, we decided to utilize the strengths of formal and informal techniques and mitigate their weaknesses by combining widely used formal and informal security requirements elicitation techniques. The basic idea of our research was to integrate an informal technique with a formal technique and propose a flexible framework with some level of formality in its steps. Methods: We conducted a systematic literature review asking "which are the widely used security requirements elicitation techniques?" as a pre-study for our thesis. We searched online databases, i.e. ISI, IEEE Xplore, ACM, Springer, Inspec, and Compendex. We also conducted a literature review of different frameworks that are used in industry for security requirements elicitation. After proposing a security requirements elicitation framework, we conducted an experiment and compared the results from the framework with those of CLASP and Misuse Cases. Results: Two types of analysis were conducted on the results from the experiment: vulnerability analysis and requirements analysis with respect to a security baseline. The vulnerability analysis shows that the proposed framework mitigates more vulnerabilities than CLASP and Misuse Cases. The requirements analysis with respect to the security baseline shows that the proposed framework, unlike CLASP and Misuse Cases, covers all the security baseline features. Conclusions: The framework we have proposed by combining CLASP, Misuse Cases, and Secure TROPOS contains the strengths of three security requirements elicitation techniques. To make the proposed framework even more effective, we also included the security requirements categorization by Bogale and Ahmed [11]. The framework is flexible and contains fifteen steps for eliciting security requirements. In addition, it also allows iterations to improve security in a system.
