21

A research in SQL injection.

January 2005 (has links)
Leung Siu Kuen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 67-68). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation --- p.1 / Chapter 1.1.1 --- A Story --- p.1 / Chapter 1.2 --- Overview --- p.2 / Chapter 1.2.1 --- Introduction of SQL Injection --- p.4 / Chapter 1.3 --- The importance of SQL Injection --- p.6 / Chapter 1.4 --- Thesis organization --- p.8 / Chapter 2 --- Background --- p.10 / Chapter 2.1 --- Flow of web applications using DBMS --- p.10 / Chapter 2.2 --- Structure of DBMS --- p.12 / Chapter 2.2.1 --- Tables --- p.12 / Chapter 2.2.2 --- Columns --- p.12 / Chapter 2.2.3 --- Rows --- p.12 / Chapter 2.3 --- SQL Syntax --- p.13 / Chapter 2.3.1 --- SELECT --- p.13 / Chapter 2.3.2 --- AND/OR --- p.14 / Chapter 2.3.3 --- INSERT --- p.15 / Chapter 2.3.4 --- UPDATE --- p.16 / Chapter 2.3.5 --- DELETE --- p.17 / Chapter 2.3.6 --- UNION --- p.18 / Chapter 3 --- Details of SQL Injection --- p.20 / Chapter 3.1 --- Basic SELECT Injection --- p.20 / Chapter 3.2 --- Advanced SELECT Injection --- p.23 / Chapter 3.2.1 --- Single Line Comment (--) --- p.23 / Chapter 3.2.2 --- Guessing the number of columns in a table --- p.23 / Chapter 3.2.3 --- Guessing the column name of a table (Easy one) --- p.26 / Chapter 3.2.4 --- Guessing the column name of a table (Difficult one) . --- p.27 / Chapter 3.3 --- UPDATE Injection --- p.29 / Chapter 3.4 --- Other Attacks --- p.30 / Chapter 4 --- Current Defenses --- p.32 / Chapter 4.1 --- Causes of SQL Injection attacks --- p.32 / Chapter 4.2 --- Defense Methods --- p.33 / Chapter 4.2.1 --- Defensive Programming --- p.34 / Chapter 4.2.2 --- hiding the error messages --- p.35 / Chapter 4.2.3 --- Filtering out the dangerous characters --- p.35 / Chapter 4.2.4 --- Using pre-complied SQL statements --- p.36 / Chapter 4.2.5 --- Checking for tautologies in SQL statements --- p.37 / Chapter 4.2.6 --- Instruction set randomization --- p.38 / Chapter 4.2.7 --- Building the query model --- p.40 / Chapter 5 --- Proposed Solution --- p.43 / Chapter 5.1 --- Introduction --- p.43 / Chapter 5.2 --- Natures of SQL Injection --- p.43 / Chapter 5.3 --- Our proposed system --- p.44 / Chapter 5.3.1 --- Features of the system --- p.44 / Chapter 5.3.2 --- Stage 1 - Checking with current signatures --- p.45 / Chapter 5.3.3 --- Stage 2 - SQL Server Query --- p.45 / Chapter 5.3.4 --- Stage 3 - Error Triggering --- p.46 / Chapter 5.3.5 --- Stage 4 - Alarm --- p.50 / Chapter 5.3.6 --- Stage 5 - Learning --- p.50 / Chapter 5.4 --- Examples --- p.51 / Chapter 5.4.1 --- Defensing BASIC SELECT Injection --- p.52 / Chapter 5.4.2 --- Defensing Advanced SELECT Injection --- p.52 / Chapter 5.4.3 --- Defensing UPDATE Injection --- p.57 / Chapter 5.5 --- Comparison --- p.59 / Chapter 6 --- Conclusion --- p.62 / Chapter A --- Commonly used table and column names --- p.64 / Chapter A.1 --- Commonly used table names for system management --- p.64 / Chapter A.2 --- Commonly used column names for password storage --- p.65 / Chapter A.3 --- Commonly used column names for username storage --- p.66 / Bibliography --- p.67
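The basic SELECT injection catalogued in Chapter 3 above, and the pre-compiled (parameterized) statement defense listed in Section 4.2.4, can be illustrated with a minimal sketch against an in-memory SQLite database; the schema and credentials below are invented for illustration and are not taken from the thesis.

```python
import sqlite3

# Minimal illustration: a tautology-based SELECT injection and the
# parameterized-statement defense. Schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # User input is concatenated directly into the SQL text.
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_parameterized(username, password):
    # Placeholders keep the input as data, so the tautology is harmless.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))     # returns the row: injection succeeds
print(login_parameterized("alice", payload))  # returns []: injection fails
```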
22

Dynamic Application Level Security Sensors

Rathgeb, Christopher Thomas 01 May 2010 (has links)
The battle for cyber supremacy is a cat-and-mouse game: evolving threats from internal and external sources make it difficult to protect critical systems. Given the diverse and high-risk nature of these threats, there is a need for robust techniques that can quickly adapt and address this evolution. Existing tools such as Splunk, Snort, and Bro help IT administrators defend their networks by actively parsing network traffic or system log data. These tools have been thoroughly developed and have proven to be a formidable defense against many cyberattacks. However, they are vulnerable to zero-day attacks, slow attacks, and attacks that originate from within. Should an attacker or some form of malware make it through these barriers and onto a system, the next layer of defense lies on the host. Host-level defenses include system integrity verifiers, virus scanners, and event log parsers. Many of these tools work by seeking specific attack signatures or looking for anomalous events. The defenses at the network and host levels are similar in nature: first, sensors collect data from the security domain; second, the data is processed; and third, a response is crafted based on the processing. The application-level security domain lacks this three-step process. Application-level defenses focus on secure coding practices and vulnerability patching, which by themselves are ineffective. The work presented in this thesis uses a technique commonly employed by malware, dynamic-link library (DLL) injection, to develop dynamic application-level security sensors that can extract fine-grained data at runtime. This data can then be processed to provide stronger application-level defense by shrinking the vulnerability window. Chapters 5 and 6 present proof-of-concept sensors and describe the process of developing them in detail.
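The collect/process/respond pattern described in this abstract can be sketched in a few lines; the sketch below is a generic illustration of that three-step loop at the application level, not the DLL-injection sensors developed in the thesis, and the event format, patterns, and class names are hypothetical.

```python
import re
from collections import deque

# Generic collect -> process -> respond sketch at the application level.
# Event format, patterns, and window size are hypothetical illustrations.
SUSPICIOUS_PATTERNS = [
    re.compile(r"'\s*or\s*'1'\s*=\s*'1", re.IGNORECASE),  # crude SQL tautology
    re.compile(r"\.\./"),                                 # path traversal
]

class ApplicationSensor:
    def __init__(self, window=100):
        self.events = deque(maxlen=window)  # step 1: collected runtime events

    def collect(self, event):
        """Step 1: record a fine-grained application event (e.g., an input string)."""
        self.events.append(event)

    def process(self):
        """Step 2: scan the collected data for suspicious content."""
        return [e for e in self.events
                if any(p.search(e) for p in SUSPICIOUS_PATTERNS)]

    def respond(self):
        """Step 3: craft a response based on the processing (here, just an alert)."""
        for hit in self.process():
            print(f"ALERT: suspicious application-level input: {hit!r}")

sensor = ApplicationSensor()
sensor.collect("username=alice")
sensor.collect("file=../../etc/passwd")
sensor.collect("q=' OR '1'='1")
sensor.respond()
```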
24

USING COMPLEXITY, COUPLING, AND COHESION METRICS AS EARLY INDICATORS OF VULNERABILITIES

Chowdhury, Istehad 28 September 2009 (has links)
Software security failures are common and the problem is growing. A vulnerability is a weakness in the software that, when exploited, causes a security failure. It is difficult to detect vulnerabilities until they manifest themselves as security failures in the operational stage of the software, because security concerns are often not addressed or known sufficiently early during the Software Development Life Cycle (SDLC). Complexity, coupling, and cohesion (CCC) related software metrics can be measured during the early phases of software development, such as design or coding. Although these metrics have been successfully employed to indicate software faults in general, the relationships between CCC metrics and vulnerabilities have not yet been extensively investigated. If empirical relationships can be discovered between CCC metrics and vulnerabilities, these metrics could help software developers take proactive action against potential vulnerabilities in software. In this thesis, we investigate whether CCC metrics can be utilized as early indicators of software vulnerabilities. We conduct an extensive case study on several releases of Mozilla Firefox to provide empirical evidence on how vulnerabilities are related to complexity, coupling, and cohesion. We mine the vulnerability databases, bug databases, and version archives of Mozilla Firefox to map vulnerabilities to software entities. We find that some of the CCC metrics are correlated with vulnerabilities at a statistically significant level. Since different metrics are available at different development phases, we further examine the correlations to determine which level of CCC metrics (design or code) provides better indicators of vulnerabilities. We also observe that the correlation patterns are stable across multiple releases. These observations imply that the metrics can be dependably used as early indicators of vulnerabilities in software. We then present a framework to automatically predict vulnerabilities based on CCC metrics. To build vulnerability predictors, we consider four alternative data mining and statistical techniques (C4.5 Decision Tree, Random Forests, Logistic Regression, and Naïve Bayes) and compare their prediction performance. We are able to predict the majority of the vulnerability-prone files in Mozilla Firefox with tolerable false positive rates. Moreover, predictors built from past releases can reliably predict the likelihood of vulnerabilities in future releases. The experimental results indicate that structural information from the non-security realm, such as complexity, coupling, and cohesion, is useful in vulnerability prediction. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009-09-24 17:31:36.581
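As a rough sketch of the kind of predictor evaluated in this thesis, the following trains a random forest (one of the four techniques compared) on complexity, coupling, and cohesion features; the metric values and labels are synthetic toy data, not the Mozilla Firefox measurements.

```python
# Sketch of a metrics-based vulnerability predictor. The feature values and
# labels are synthetic toy data generated below, not data from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 400
# Per-file features: [cyclomatic complexity, fan-out coupling, lack of cohesion]
X = np.column_stack([
    rng.integers(1, 60, n),   # complexity
    rng.integers(0, 30, n),   # coupling
    rng.random(n),            # lack-of-cohesion score in [0, 1)
])
# Toy rule: complex, highly coupled, low-cohesion files are labeled vulnerability-prone.
risk = 0.02 * X[:, 0] + 0.03 * X[:, 1] + 1.5 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```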
25

DESERVE: A FRAMEWORK FOR DETECTING PROGRAM SECURITY VULNERABILITY EXPLOITATIONS

MOHOSINA, AMATUL 20 September 2011 (has links)
It is difficult to develop a program that is completely free from vulnerabilities. Despite the application of many approaches to secure programs, vulnerability exploitations occur in the real world in large numbers. Exploitations of vulnerabilities may corrupt memory spaces and program states, lead to denial of service and authorization bypass, give attackers access to authorization information, and leak sensitive information. Monitoring at the program code level can be a way to detect vulnerability exploitations at runtime. In this work, we propose a monitor-embedding framework, DESERVE (a framework for DEtecting program SEcuRity Vulnerability Exploitations). DESERVE identifies exploitable statements from source code based on static backward slicing and embeds the code necessary to detect attacks. During the deployment stage, the enhanced programs execute exploitable statements in a separate test environment. Unlike traditional monitors that extract and store program state information and compare it with vulnerability-free program states to detect exploitation, our approach does not need to save state information. Moreover, the slicing technique allows us to avoid tracking fine-grained information about the runtime program environment, such as input flow and memory state. We implement DESERVE to detect buffer overflow, SQL injection, and cross-site scripting attacks. We evaluate our approach on real-world programs implemented in the C and PHP languages. The results show that the approach can detect some of the well-known attacks. Moreover, the approach imposes negligible runtime overhead. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2011-09-19 19:04:28.423
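A minimal sketch of the "separate test environment" idea, under the assumption that the exploitable statement is a dynamically built SQL query: the guard runs the statement against a throwaway canary database first and raises an alarm if the input broadens the query. This illustrates the general concept only and is not DESERVE's slicing-based instrumentation; all names are hypothetical.

```python
import sqlite3

# Before running a dynamically built query for a given user, execute it in a
# disposable test environment containing only canary rows. If the query
# matches a canary, the input has broadened the statement (e.g., a tautology)
# and an alarm is raised. Illustrative sketch only; names are hypothetical.

class ExploitDetected(Exception):
    pass

def guarded_user_query(build_query, username):
    # Separate test environment: fresh in-memory DB with canary data only.
    test_db = sqlite3.connect(":memory:")
    test_db.execute("CREATE TABLE users (username TEXT, secret TEXT)")
    test_db.execute("INSERT INTO users VALUES ('__canary__', 'canary-secret')")
    query = build_query(username)
    try:
        rows = test_db.execute(query).fetchall()
    except sqlite3.Error as exc:
        raise ExploitDetected(f"query failed in test environment: {exc}")
    if rows:  # legitimate input should never match the canary user
        raise ExploitDetected(f"input {username!r} broadened the query: {query}")
    return query  # deemed safe to run against the real database

build = lambda u: f"SELECT secret FROM users WHERE username = '{u}'"
print(guarded_user_query(build, "alice"))  # passes the guard
try:
    guarded_user_query(build, "' OR '1'='1")
except ExploitDetected as alarm:
    print("ALARM:", alarm)
```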
26

A Framework for Security Requirements Elicitation

Islam, Gibrail, Qureshi, Murtaza Ali January 2012 (has links)
Context: Security considerations are typically incorporated in the later stages of development as an afterthought. Researchers place security in software systems under the category of non-functional requirements. Understanding the security needs of a system requires considerable knowledge of assets, data security, integrity, confidentiality, and availability of services. Countermeasures against software attacks are also a security need of a software system. Incorporating security in the earliest stage, i.e. requirements gathering, helps build secure software systems from the start. For that purpose, researchers have proposed different requirements elicitation techniques. These techniques are categorized as formal or informal on the basis of the finiteness and clarity of their activities. Objectives: The limitations of formal methods and the lack of systematic approaches in informal elicitation techniques make it difficult to rely on a single technique for security requirements elicitation. We therefore decided to combine widely used formal and informal security requirements elicitation techniques, using the strengths of each to mitigate the weaknesses of the other. The basic idea of our research was to integrate an informal technique with a formal technique and propose a flexible framework with some level of formality in its steps. Methods: As a pre-study for the thesis, we conducted a systematic literature review to answer the question "which are the widely used security requirements elicitation techniques?". We searched online databases, i.e. ISI, IEEE Xplore, ACM, Springer, Inspec, and Compendex. We also conducted a literature review of different frameworks used in industry for security requirements elicitation. After proposing a security requirements elicitation framework, we conducted an experiment and compared the results of the framework with those of CLASP and Misuse Cases. Results: Two types of analysis were conducted on the results of the experiment: vulnerability analysis and requirements analysis with respect to a security baseline. The vulnerability analysis shows that the proposed framework mitigates more vulnerabilities than CLASP and Misuse Cases. The requirements analysis with respect to the security baseline shows that the proposed framework, unlike CLASP and Misuse Cases, covers all the security baseline features. Conclusions: The framework we have proposed by combining CLASP, Misuse Cases, and Secure TROPOS contains the strengths of three security requirements elicitation techniques. To make the proposed framework even more effective, we also included the security requirements categorization by Bogale and Ahmed [11]. The framework is flexible and contains fifteen steps to elicit security requirements. In addition, it allows iterations to improve security in a system.
27

A Generic Approach for Protecting Java Card™ Smart Card Against Software Attacks

Bouffard, Guillaume 10 October 2014 (has links)
Smart cards are the keystone of various applications that we use daily: withdrawing money, travelling, making phone calls, and so on. To improve security while providing a friendly development environment, the Java technology was adapted to be embedded in smart cards. Introduced in the mid-nineties, this technology has become the leading platform for running secure applications. Because of their uses, these applications contain sensitive information that may interest evil-minded people. In the smart card world, attack designers and countermeasure designers wage an endless war. In order to obtain a generic view of all possible attacks, we propose to use fault tree analysis. This approach, borrowed from safety analysis, helps to understand and implement all the desirable and undesirable events existing in this domain. We apply this method to Java Card vulnerability analysis. For that purpose, we define the properties that must be ensured: the integrity and confidentiality of the data and code contained in the smart card. In this thesis, we focus on the integrity property, and especially on the integrity of application code, since a perturbation of this element can break the other properties. By modelling the conditions, we discovered new attack paths to gain access to the smart card contents. To prevent these new attacks, we introduce new countermeasures that mitigate the undesirable events defined in the fault trees.
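The fault tree analysis applied in this thesis can be sketched as a small AND/OR gate evaluation; the tree below is an invented example of an undesirable top event and does not reproduce any tree from the thesis.

```python
# Minimal fault-tree evaluation. Leaves are basic events; internal nodes
# combine children with AND/OR gates. The example tree is invented for
# illustration and is not taken from the thesis.

def evaluate(node, observed):
    """Return True if the event represented by `node` occurs, given the set
    of observed basic events."""
    if isinstance(node, str):            # basic event (leaf)
        return node in observed
    gate, children = node                # ("AND" | "OR", [subtrees])
    results = [evaluate(child, observed) for child in children]
    return all(results) if gate == "AND" else any(results)

# Hypothetical top event: the applet's code integrity is broken.
code_integrity_broken = (
    "OR", [
        ("AND", ["ill-formed applet loaded", "bytecode verifier bypassed"]),
        ("AND", ["fault injected at runtime", "no runtime countermeasure"]),
    ],
)

print(evaluate(code_integrity_broken, {"ill-formed applet loaded"}))  # False
print(evaluate(code_integrity_broken,
               {"fault injected at runtime", "no runtime countermeasure"}))  # True
```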
28

Integrating secure programming concepts in introductory programming courses

Jama, Fartun January 2020 (has links)
The number of vulnerable systems with exploitable security defects has increased, which has led to a growing demand for secure software systems. Software developers often lack the security experience needed to design and build secure software; some even believe security is not their responsibility. Despite the increased need for teaching security and secure programming, security is not well integrated into the undergraduate computing curriculum and is only offered as part of a degree program or as an elective course. The aim of this project is to outline the importance of incorporating security and secure programming concepts in programming courses, starting from the introductory courses, by evaluating students' security awareness and knowledge of software security. Based on the gaps identified in students' knowledge of software security, security and secure programming concepts are identified that need to be integrated into the programming courses.
29

Methods and Tools for Practical Software Testing and Maintenance

Saieva, Anthony January 2024 (has links)
As software continues to envelop traditional industries, the need for increased attention to cybersecurity is higher than ever. Software security helps protect businesses and governments from financial losses due to cyberattacks and data breaches, as well as from reputational damage. In theory, securing software is relatively straightforward: it involves following certain best practices and guidelines to ensure that the software is secure. In practice, however, software security is often much more complicated. It requires a deep understanding of the underlying system and code (including potentially legacy code), as well as a comprehensive understanding of the threats and vulnerabilities that could be present. Additionally, software security involves implementing strategies to protect against those threats and vulnerabilities, which may combine technologies, processes, and procedures. In fact, many real cyberattacks are caused not by zero-day vulnerabilities but by known issues that haven't been addressed, so real software security also requires ongoing monitoring and maintenance to ensure that critical systems remain secure. This thesis presents a series of novel techniques that together form an enhanced software maintenance methodology, from initial bug reporting all the way through patch deployment. We begin by introducing Ad Hoc Test Generation, a novel testing technique for the case when a security vulnerability or other critical bug is not detected by the developers' test suite and is discovered post-deployment: developers must quickly devise a new test that reproduces the buggy behavior, then test whether their candidate patch indeed fixes the bug, without breaking other functionality, while racing to deploy before attackers pounce on exposed user installations. This work builds on record-replay and binary rewriting to automatically generate and run targeted tests for candidate patches significantly faster and more efficiently than traditional test suite generation techniques like symbolic execution. Our prototype of this concept is called ATTUNE. To construct patches, developers maintaining software may in some instances be forced to deal directly with the binary because source code is no longer available. For these cases, this work presents a transformer-based model called DIRECT that recovers semantically meaningful names for variables and functions whose names have been lost, giving developers a facsimile of the source code that would otherwise be unavailable. In the event that developers need even more support deciphering the decompiled code, we provide another tool called REINFOREST that allows developers to search for similar code, which they can use to further understand the code in question and as a reference when developing a patch. After patches have been written, deployment remains a challenge. In some instances, deploying a patch for the buggy behavior may require supporting legacy systems where software cannot be upgraded without causing compatibility issues. To support these updates, this work introduces the concept of binary patch decomposition, which breaks a software release down into its component parts and allows software administrators to apply only the critical portions without breaking functionality.
We present a novel software patching methodology with which we can recreate bugs, develop patches, and deploy updates in the presence of the typical challenges of patching production software, including deficient test suites, lack of source code, lack of documentation, compatibility issues, and the difficulties associated with patching binaries directly.
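As a generic illustration of the code-search step (retrieving reference code similar to a decompiled function), the sketch below ranks a tiny corpus by bag-of-tokens cosine similarity; this is not the technique used by REINFOREST, and the corpus is invented.

```python
import math
import re
from collections import Counter

# Generic token-based similarity search, shown only to illustrate retrieving
# reference code similar to a decompiled function. Not REINFOREST's method;
# the corpus below is invented.

def tokens(code):
    return Counter(re.findall(r"[A-Za-z_]\w*", code.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

corpus = {
    "strcpy_wrapper": "void copy(char *dst, const char *src) { strcpy(dst, src); }",
    "bounded_copy":   "void copy(char *dst, const char *src, size_t n) { strncpy(dst, src, n); dst[n-1] = 0; }",
    "sum_list":       "int sum(int *xs, int n) { int s = 0; for (int i = 0; i < n; i++) s += xs[i]; return s; }",
}

decompiled = "void sub_401000(char *a1, char *a2) { strcpy(a1, a2); }"
query = tokens(decompiled)
for name, code in sorted(corpus.items(),
                         key=lambda kv: cosine(query, tokens(kv[1])),
                         reverse=True):
    print(name, round(cosine(query, tokens(code)), 3))
```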
30

Predicting vulnerability for requirements: A data-driven approach

Imtiaz, Sayem Mohammad 09 August 2019 (has links)
With software security being one of the primary concerns in the software engineering community, researchers have come up with many preemptive approaches, which are primarily designed to detect vulnerabilities in the post-implementation stage of the software development life-cycle (SDLC). While these approaches have been shown to be effective in detecting vulnerabilities, the consequences of finding them that late are often expensive: accommodating changes after detecting a bug or vulnerability in the late stages of the SDLC is costly. On that account, in this thesis we propose a novel framework that provides an additional measure for predicting vulnerabilities at earlier stages of the SDLC. To that end, we leverage state-of-the-art machine learning classification algorithms to predict vulnerabilities for new requirements. We also present a case study on a large open-source software (OSS) system, Firefox, evaluating the effectiveness of the extended prediction module. The results demonstrate that the framework could be a viable augmentation to traditional vulnerability-fighting tools.
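A minimal sketch of the kind of classifier such a framework could plug in, assuming requirements are available as plain text: a standard TF-IDF plus logistic regression pipeline. The requirements, labels, and prediction below are invented for illustration and are not the Firefox data from the case study.

```python
# Sketch of predicting vulnerability-proneness from requirement text.
# The requirements and labels are invented toy data, not the thesis dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requirements = [
    "The application shall accept file uploads from unauthenticated users.",
    "The report page shall render user-supplied HTML comments.",
    "The settings dialog shall remember the preferred window size.",
    "The about page shall display the product version string.",
    "The login form shall pass the entered credentials to the directory service.",
    "The toolbar shall allow reordering of bookmarks by drag and drop.",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = vulnerability-prone (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(requirements, labels)

new_req = "The preview pane shall execute scripts embedded in user-supplied documents."
print(model.predict([new_req]), model.predict_proba([new_req]))
```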
