  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
491

Approximation-based monitoring of ongoing model extraction attacks: model similarity tracking to assess the progress of an adversary

Gustavsson, Christian January 2024 (has links)
Many organizations are turning to the promise of artificial intelligence and machine learning (ML) as its use gains traction across disciplines. However, developing high-performing ML models is often expensive: the design work can be complicated, and collecting large training datasets is costly and may involve sensitive or proprietary information. For many reasons, machine learning models make an appetizing target for an adversary interested in stealing data, model properties, or model behavior. This work explores model extraction attacks and aims to design an approximation-based monitor for tracking the progress of a potential adversary. When triggered, action can be taken to address the threat. The proposed monitor exploits the interaction with a targeted model, continuously training a monitor model as a proxy for what the attacker could achieve given the data gathered from the target. The usefulness of the proposed monitoring approach is shown for two experimental attack scenarios: one explores parametric and Bayesian models for a regression case, while the other explores commonly used neural network architectures for image classification. The experiments expand current monitoring research to include ridge regression, Gaussian process regression, and a set of standard convolutional neural network variants: ResNet, VGG, and DenseNet. They also explore model and dataset similarity using metrics from statistical analysis, linear algebra, optimal transport, and a rank score.
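The monitoring idea described in this abstract can be sketched in a few lines: a proxy model is refit on the (query, response) pairs an adversary has extracted so far, and a similarity score against the target tracks attack progress. The target model, query stream, and similarity metric below are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5
w_true = rng.normal(size=d)             # parameters of the defender's model

def target_predict(X):
    return X @ w_true                   # responses the adversary observes

def fit_ridge(X, y, lam=5.0):
    # Closed-form ridge regression: (X^T X + lam*I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def similarity(wa, wb):
    # Simple model-similarity score in (0, 1]: shrinking parameter distance
    return 1.0 / (1.0 + float(np.linalg.norm(wa - wb)))

# The adversary queries in batches; after each batch the monitor refits a proxy
# model on the leaked (query, response) pairs and scores it against the target.
X_all = rng.normal(size=(200, d))
scores = []
for n in (10, 50, 200):
    Xq = X_all[:n]
    w_proxy = fit_ridge(Xq, target_predict(Xq))
    scores.append(similarity(w_proxy, w_true))

print(scores)   # rises toward 1 as the extraction attack progresses
```

A defender would alarm once the score crosses a tolerance threshold; the thesis's contribution lies in which models and similarity metrics make that score meaningful.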
492

SCA-Resistant and High-Performance Embedded Cryptography Using Instruction Set Extensions and Multi-Core Processors

Chen, Zhimin 28 July 2011 (has links)
Nowadays, we use embedded electronic devices in almost every aspect of our daily lives. They represent our electronic identity; they store private information; they monitor health status; they carry out confidential communications; and so on. All these applications rely on cryptography and therefore present us with a research objective: how to implement cryptography on embedded systems in a trustworthy and efficient manner. Implementing embedded cryptography faces two challenges: constrained resources and physical attacks. Due to low cost and power budget constraints, embedded devices cannot use high-end processors, nor can they run at extremely high frequencies. Since most embedded devices are portable and deployed in the field, attackers can gain physical access and mount attacks at will. For example, the power dissipation, electromagnetic radiation, and execution time of embedded cryptography enable Side-Channel Attacks (SCAs), which can break cryptographic implementations in a very short time at quite low cost. In this dissertation, we propose solutions for the efficient implementation of SCA-resistant and high-performance cryptographic software on embedded systems. These solutions make use of two state-of-the-art architectures of embedded processors: instruction set extensions and multi-core architectures. We show that, with proper processor micro-architecture design and suitable software programming, we are able to deliver SCA-resistant software that performs well in security, performance, and cost. In comparison, related solutions have either high hardware cost, poor performance, or low attack resistance. Therefore, our solutions are more practical and see a promising future in commercial products. Another contribution of our research is the proper partitioning of the Montgomery multiplication over multi-core processors.
Our solution is scalable over multiple cores, achieving almost linear speedup with a high tolerance to inter-core communication delays. We expect our contributions to serve as solid building blocks that support secure and high-performance embedded systems. / Ph. D.
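The Montgomery multiplication mentioned above is the core primitive being partitioned. A minimal single-core sketch of Montgomery reduction (REDC) with toy-sized parameters is shown below; the multi-core partitioning scheme itself is not reproduced here.

```python
# Montgomery reduction sketch (word-level details omitted): REDC(t) = t * R^-1 mod n.
def montgomery_setup(n, R):
    # R must be a power of two coprime to the odd modulus n; n' = -n^-1 mod R
    return (-pow(n, -1, R)) % R

def redc(t, n, R, n_prime):
    m = ((t % R) * n_prime) % R
    u = (t + m * n) // R        # exact division by construction
    return u - n if u >= n else u

n = 97                          # odd modulus (toy-sized)
R = 1 << 8                      # R = 256 > n, a power of two
n_prime = montgomery_setup(n, R)

a, b = 42, 57
aR = (a * R) % n                # convert operands to Montgomery form
bR = (b * R) % n
abR = redc(aR * bR, n, R, n_prime)   # product stays in Montgomery form
result = redc(abR, n, R, n_prime)    # convert back to normal form
print(result, (a * b) % n)           # the two values agree
```

The appeal on embedded cores is that REDC replaces trial division by shifts and masks; partitioning it across cores then amounts to splitting the multi-word multiply-accumulate work.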
493

Towards Secure and Safe AI-enabled Systems Through Optimizations

Guanhong Tao (18542383) 15 May 2024 (has links)
<p dir="ltr">Artificial intelligence (AI) is increasingly integrated into critical systems across various sectors, including public surveillance, autonomous driving, and malware detection. Despite their impressive performance and promise, the security and safety of AI-enabled systems remain significant concerns. Like conventional systems that have software bugs or vulnerabilities, applications leveraging AI are also susceptible to such issues. Malicious behaviors can be intentionally injected into AI models by adversaries, creating a backdoor. These models operate normally with benign inputs but consistently misclassify samples containing an attacker-inserted trigger, known as a <i>backdoor attack</i>.</p><p dir="ltr">However, backdoors can not only be injected by an attacker but may also naturally exist in normally trained models. One can find backdoor triggers in benign models that cause any inputs with the trigger to be misclassified, a phenomenon termed <i>natural backdoors</i>. Regardless of whether they are injected or natural, backdoors can take various forms, which increases the difficulty of identifying such vulnerabilities. This challenge is exacerbated when access to AI models is limited.</p><p dir="ltr">This dissertation introduces an optimization-based technique that reverse-engineers trigger patterns exploited by backdoors, whether injected or natural. It formulates how backdoor triggers modify inputs down to the pixel level to approximate their potential forms. The intended changes in output predictions guide the reverse-engineering process, which involves computing the input gradient or sampling possible perturbations when model access is limited. Although various types of backdoors exist, this dissertation demonstrates that they can be effectively clustered into two categories based on their methods of input manipulation. 
The development of practical reverse-engineering approaches is based on this fundamental classification, leading to the successful identification of backdoor vulnerabilities in AI models.</p><p dir="ltr">To alleviate such security threats, this dissertation introduces a novel hardening technique that enhances the robustness of models against adversary exploitation. It sheds light on the existence of backdoors, which can often be attributed to the small distance between two classes. Based on this analysis, a class distance hardening method is proposed to proactively enlarge the distance between every pair of classes in a model. This method is effective in eliminating both injected and natural backdoors in a variety of forms.</p><p dir="ltr">This dissertation aims to highlight both existing and newly identified security and safety challenges in AI systems. It introduces novel formulations of backdoor trigger patterns and provides a fundamental understanding of backdoor vulnerabilities, paving the way for the development of safer and more secure AI systems.</p>
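The optimization loop at the heart of trigger reverse-engineering can be illustrated on a linear stand-in model: gradient ascent on the target-class margin with respect to an additive, bounded trigger. The classifier, dimensions, and bound below are assumptions for illustration; real trigger inversion operates on deep models at the pixel level.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear classifier standing in for a suspect model (per-class weights).
num_classes, dim = 3, 16
W = rng.normal(size=(num_classes, dim))

def logits(x):
    return W @ x

# Reverse-engineer an additive trigger that pushes inputs toward target class 0 by
# gradient ascent on the margin (target logit minus best competing logit).
target = 0
delta = np.zeros(dim)
x_batch = rng.normal(size=(8, dim))
lr = 0.1
for _ in range(200):
    grad = np.zeros(dim)
    for x in x_batch:
        z = logits(x + delta)
        other = int(np.argmax(np.delete(z, target)))
        other = other if other < target else other + 1   # undo the index shift
        grad += W[target] - W[other]                     # d(margin)/d(delta)
    delta += lr * grad / len(x_batch)
    delta = np.clip(delta, -1.0, 1.0)                    # keep the trigger bounded

# Stamped inputs should now (mostly) classify as the target class.
preds = [int(np.argmax(logits(x + delta))) for x in x_batch]
print(preds)
```

When a small bounded delta flips almost every input to one class, that class is a backdoor candidate, whether the backdoor was injected or natural.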
494

Secure Coding Practice in Java: Automatic Detection, Repair, and Vulnerability Demonstration

Zhang, Ying 12 October 2023 (has links)
The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely is challenging for developers who lack cybersecurity training. Prior studies show that many developers use APIs insecurely, thereby introducing vulnerabilities in their software. Despite the availability of various tools designed to identify API insecure usage, their effectiveness in helping developers with secure coding practices remains unclear. This dissertation focuses on two main objectives: (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. Our research started with investigating the effectiveness of current tools in helping with developers' secure coding practices. We systematically explored the strengths and weaknesses of existing automated tools for detecting API-related vulnerabilities. Through comprehensive analysis, we observed that most existing tools merely report misuses, without suggesting any customized fixes. Moreover, developers often rejected tool-generated vulnerability reports due to their concerns on the correctness of detection, and the exploitability of the reported issues. To address these limitations, the second work proposed SEADER, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, SEADER compares the snippets to infer any API-misuse template and corresponding fixing edit. Based on the inferred information, given a program, SEADER performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. The third work leverages ChatGPT-4.0 to automatically generate security test cases. 
These test cases can demonstrate how vulnerable API usage facilitates supply chain attacks on specific software applications. By running such test cases during software development and maintenance, developers can gain more relevant information about exposed vulnerabilities, and may better create secure-by-design and secure-by-default software. / Doctor of Philosophy / The Java platform and third-party open-source libraries provide various Application Programming Interfaces (APIs) to facilitate secure coding. However, using these APIs securely can be challenging, especially for developers who aren't trained in cybersecurity. Prior work shows that many developers use APIs insecurely, consequently introducing vulnerabilities in their software. Despite the availability of various tools designed to identify insecure API usage, it is still unclear how well they help developers with secure coding practices. This dissertation focuses on (1) exploring the strengths and weaknesses of the existing automated detection tools for API-related vulnerabilities, and (2) creating better tools that detect, repair, and demonstrate these vulnerabilities. We first systematically evaluated the strengths and weaknesses of the existing automated API-related vulnerability detection tools. We observed that most existing tools merely report misuses without suggesting any customized fixes. Additionally, developers often reject tool-generated vulnerability reports due to their concerns about the correctness of detection and whether the reported vulnerabilities are truly exploitable. To address the limitations found in our study, the second work proposed a novel example-based approach, SEADER, to detect and repair insecure API usage. The third work leverages ChatGPT-4.0 to automatically generate security test cases and to demonstrate how vulnerable API usage facilitates supply chain attacks on given software applications.
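The example-based idea behind SEADER can be caricatured in a few lines: from one ⟨insecure, secure⟩ snippet pair, derive a pattern and a fix, then apply them to other code. This toy diffs a Python-flavored misuse (a weak hash) at the token level; SEADER itself infers templates and performs inter-procedural static analysis on Java, which this sketch does not attempt.

```python
import re
from difflib import SequenceMatcher

# One exemplar <insecure, secure> pair (hypothetical snippets for illustration).
insecure_example = "digest = hashlib.md5(data).hexdigest()"
secure_example   = "digest = hashlib.sha256(data).hexdigest()"

def tokens(s):
    # Split into word and non-word tokens so the diff aligns on whole identifiers.
    return re.findall(r"\w+|\W", s)

a, b = tokens(insecure_example), tokens(secure_example)
m = SequenceMatcher(None, a, b)
# The first 'replace' opcode is the region where the two snippets disagree.
_, i1, i2, j1, j2 = [op for op in m.get_opcodes() if op[0] == "replace"][0]
bad_api, good_api = "".join(a[i1:i2]), "".join(b[j1:j2])

def repair(source: str) -> str:
    # Flag and rewrite occurrences of the inferred insecure API usage.
    return source.replace(bad_api, good_api)

program = "token = hashlib.md5(password.encode()).hexdigest()"
print(repair(program))
```

The point of the exemplar-driven design is that adding a new misuse class costs one code pair rather than a hand-written detection rule.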
495

Confused by Path: Analysis of Path Confusion Based Attacks

Mirheidari, Seyed Ali 12 November 2020 (has links)
URL parsing and normalization are common and important operations in different web frameworks and technologies. In recent years, security researchers have targeted these processes and discovered high-impact vulnerabilities and exploitation techniques. Taking a different approach, we focus on the semantic disconnect among framework-independent web technologies (e.g., browsers, proxies, cache servers, web servers) that results in different URL interpretations. We coined the term "Path Confusion" to represent this disagreement, and this thesis focuses on analyzing the enabling factors and security impact of this problem. In this thesis, we show the impact and importance of path confusion in two attack classes: Style Injection by Relative Path Overwrite (RPO) and Web Cache Deception (WCD). We focus on these attacks as case studies to demonstrate how path confusion techniques make targeted sites exploitable. Moreover, we propose novel variations of each attack which expand the number of vulnerable sites and introduce new attack scenarios. We present instances which have been secured against these attacks, yet remain exploitable with the introduced path confusion techniques. To further elucidate the seriousness of path confusion, we also present large-scale analysis results of RPO and WCD attacks on high-profile sites. We present repeatable methodologies and automated path confusion crawlers which detect thousands of sites that are still vulnerable to RPO or WCD, though only with specific types of path confusion techniques. Our results attest to the severity of the path confusion based class of attacks and how extensively they can affect clients and systems. We analyze browser-based mitigation techniques for RPO and argue that WCD cannot be treated as a common vulnerability of any single component; instead, it arises when an ecosystem of individually impeccable components ends up in a faulty situation.
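A minimal illustration of the path confusion underlying Web Cache Deception: two components parse the same URL path by different rules and reach contradictory conclusions. The cache and origin policies below are hypothetical stand-ins, not any specific vendor's behavior.

```python
from urllib.parse import urlparse

# An attacker-crafted URL: a bogus static-looking suffix on a private page.
url = "https://example.com/account/settings/style.css"
path = urlparse(url).path

def cache_decision(path):
    # Many caches key on the file extension: ".css" looks static and cacheable.
    return "cache" if path.rsplit(".", 1)[-1] in {"css", "js", "png"} else "bypass"

def origin_route(path):
    # An origin that routes on prefixes may ignore the trailing bogus segment
    # and serve the private /account/ content anyway.
    return "private-account-page" if path.startswith("/account/") else "static"

# The semantic disconnect: the cache stores what the origin considers private.
print(cache_decision(path), origin_route(path))
```

Neither component is buggy in isolation; the vulnerability emerges only from the pair of interpretations, which is exactly the ecosystem argument the thesis makes.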
496

Unsupervised Learning for Feature Selection: A Proposed Solution for Botnet Detection in 5G Networks

Lefoane, Moemedi, Ghafir, Ibrahim, Kabir, Sohag, Awan, Irfan U. 01 August 2022 (has links)
The world has seen exponential growth in the deployment of Internet of Things (IoT) devices. In recent years, connected IoT devices have surpassed the number of connected non-IoT devices. The number of IoT devices continues to grow, and they are becoming a critical component of national infrastructure. The characteristics and inherent limitations of IoT devices make them attractive targets for hackers and cyber criminals. Botnet attacks are among the most serious threats on the Internet today. This article proposes pattern-based feature selection methods as part of a machine learning (ML) based botnet detection system. Specifically, two methods are proposed: the first is based on the most dominant pattern feature values, and the second is based on Maximal Frequent Itemset (MFI) mining. The proposed feature selection method uses Gini Impurity (GI) and an unsupervised clustering method to select the most influential features automatically. The evaluation results show that the proposed methods improve the performance of the detection system, with the best-performing models achieving a True Positive Rate (TPR) of 100% and a False Positive Rate (FPR) of 0%. In addition, the proposed methods reduce the computational cost of the system, as evidenced by its detection speed.
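The Gini Impurity score used for feature selection is straightforward to compute. The sketch below, with made-up feature columns, shows why a feature dominated by one pattern value scores low impurity while an evenly spread feature scores high; how the paper combines this score with clustering is not reproduced here.

```python
import numpy as np

def gini_impurity(values):
    # Gini impurity of a discrete feature column: 1 - sum_i p_i^2
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float((p ** 2).sum())

# Hypothetical traffic features: one with a dominant pattern value, one spread evenly.
dominant = np.array([1, 1, 1, 1, 1, 1, 1, 2])   # one value dominates
mixed    = np.array([1, 2, 3, 4, 1, 2, 3, 4])   # values spread evenly

g_dom, g_mix = gini_impurity(dominant), gini_impurity(mixed)
print(g_dom, g_mix)
```

Ranking features by such concentration scores lets the pipeline keep the columns whose dominant patterns best separate botnet flows from normal traffic.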
497

Latent Dirichlet Allocation for the Detection of Multi-Stage Attacks

Lefoane, Moemedi, Ghafir, Ibrahim, Kabir, Sohag, Awan, Irfan U. 19 December 2023 (has links)
The rapid shift to and increase in remote access to organisational resources have led to a significant increase in the number of attack vectors and attack surfaces, which in turn has motivated the development of newer and more sophisticated cyber-attacks, including Multi-Stage Attacks (MSAs). In MSAs, the attack is executed in several stages, and classifying malicious traffic into stages to learn more about the attack life-cycle is a challenge. This paper proposes a malicious traffic clustering approach based on Latent Dirichlet Allocation (LDA), a topic modelling approach used in natural language processing to address similar problems. The proposed approach is unsupervised and is therefore beneficial in scenarios where traffic data is unlabeled but analysis still needs to be performed. It uncovers intrinsic contexts that relate to different categories of attack stages in MSAs. These insights are vital across cybersecurity teams such as Incident Response (IR) within the Security Operations Center (SOC), and can help ensure that attacks are detected at early stages of an MSA. For IR in particular, the insights aid understanding of attack behavioural patterns and reduce recovery time following an incident. The proposed approach is evaluated on a publicly available MSA dataset, with promising performance results, as evidenced by over 99% accuracy in the identified malicious traffic clusters.
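A compact way to see how LDA can cluster traffic: treat each flow as a "document" of discretised event tokens and fit topics with collapsed Gibbs sampling, so topics play the role of attack stages. The sampler below is a generic textbook LDA on synthetic tokens, not the paper's pipeline; the two synthetic "stages" use disjoint event vocabularies so the separation is easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_gibbs(docs, V, K, iters=100, alpha=0.1, beta=0.01):
    # Collapsed Gibbs sampling for LDA over integer token ids in [0, V).
    ndk = np.zeros((len(docs), K))      # doc-topic counts
    nkw = np.zeros((K, V))              # topic-word counts
    nk = np.zeros(K)
    z = []                              # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, t in zip(doc, zd):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                t = rng.choice(K, p=p / p.sum())
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    # Per-flow stage mixture theta
    return (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)

# Two synthetic "stages": scanning-like events (0-2) vs exfiltration-like events (3-5).
docs = [[0, 1, 2, 0, 1] for _ in range(5)] + [[3, 4, 5, 3, 4] for _ in range(5)]
theta = lda_gibbs(docs, V=6, K=2)
print(theta.argmax(axis=1))             # dominant stage per flow
```

Because the model is unsupervised, the same machinery applies when real traffic arrives unlabeled; the analyst's job shifts to interpreting which topic corresponds to which attack stage.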
498

Latent Semantic Analysis and Graph Theory for Alert Correlation: A Proposed Approach for IoT Botnet Detection

Lefoane, Moemedi, Ghafir, Ibrahim, Kabir, Sohag, Awan, Irfan, El Hindi, K., Mahendran, A. 16 July 2024 (has links)
In recent times, the proliferation of Internet of Things (IoT) technology has brought a significant shift in the digital transformation of various industries, an adoption accelerated by its enabling technologies. The possibilities unlocked by IoT have been unprecedented, leading to the emergence of smart applications that have been integrated into national infrastructure. However, the popularity of IoT technology has also attracted the attention of adversaries, who have leveraged the inherent limitations of IoT devices to launch sophisticated attacks, including Multi-Stage Attacks (MSAs) such as IoT botnet attacks. These attacks have caused significant revenue losses across industries, amounting to billions of dollars. To address this challenge, this paper proposes a two-phase system for IoT botnet detection. The first phase identifies IoT botnet traffic: its input is the IoT traffic, which is subjected to feature selection and classification model training to distinguish malicious traffic from normal traffic. The second phase analyses the malicious traffic from the first phase to identify different botnet attack campaigns, employing an alert correlation approach that combines Latent Semantic Analysis (LSA) unsupervised learning with graph-theory-based techniques. The proposed system was evaluated using a publicly available real IoT traffic dataset and yielded promising results, with a True Positive Rate (TPR) of over 99% and a False Positive Rate (FPR) of 0%. / Researchers Supporting Project, King Saud University, Riyadh, Saudi Arabia, under Grant RSPD2024R953
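The second-phase combination of LSA and graph theory can be sketched as: reduce an alert-feature matrix with truncated SVD, link alerts whose latent vectors are similar, and read off connected components as candidate campaigns. The matrix, similarity threshold, and feature meanings below are illustrative assumptions, not the paper's data.

```python
import numpy as np
from itertools import combinations

# Toy alert-feature matrix: rows are alerts, columns are discretised features.
A = np.array([
    [3.0, 2.0, 0.0, 0.0],   # alerts 0-2: scanning-heavy profile
    [2.0, 3.0, 0.0, 0.0],
    [3.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 3.0, 2.0],   # alerts 3-4: exfiltration-heavy profile
    [0.0, 0.0, 2.0, 3.0],
])

# LSA step: truncated SVD gives a low-rank latent representation of each alert.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Z = U[:, :2] * s[:2]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Graph step: connect alerts with similar latent vectors, then take components.
n = len(Z)
adj = {i: set() for i in range(n)}
for i, j in combinations(range(n), 2):
    if cos(Z[i], Z[j]) > 0.9:
        adj[i].add(j); adj[j].add(i)

def components(adj):
    seen, comps = set(), []
    for v in adj:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u); seen.add(u)
            stack.extend(adj[u] - comp)
        comps.append(sorted(comp))
    return comps

print(components(adj))   # each component is one candidate attack campaign
```

Grouping alerts into components like this is what turns a flat stream of detections into per-campaign evidence an analyst can triage.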
499

Multi-Cloud architecture attacks through Application Programming Interfaces

Lander, Theodore Edward, Jr. 10 May 2024 (has links) (PDF)
Multi-cloud applications are becoming a universal way for organizations to build and deploy systems. Multi-cloud systems are deployed across several different service providers, whether due to company mergers, budget concerns, or the services each provider offers. With growing concerns about potential cyber attacks, the security of multi-cloud systems is an important subject, especially the communications between systems through Application Programming Interfaces (APIs). This thesis presents an in-depth analysis of multi-cloud, looking at APIs and security; creates a mock architecture for a multi-cloud system; and executes a cyber attack on this architecture to demonstrate the catastrophic effects that could befall these systems if left unprotected. Finally, some solutions for security are discussed, as well as a potential plan for further testing of cyber attacks in this realm.
500

Detecting DDoS Attacks with Machine Learning: A Comparison between PCA and an Autoencoder

Johansson, Sofie January 2024 (has links)
Distributed denial of service (DDoS) attacks are becoming more and more common as the number of devices connected to the Internet increases. To reduce the impact of such attacks, it is important to detect them as soon as possible. Many papers have investigated how well different machine learning algorithms can detect DDoS attacks, but most focus on supervised learning algorithms, which require a lot of labeled data that is hard to find. This thesis compares two unsupervised learning algorithms, an autoencoder and principal component analysis (PCA), on how well they detect DDoS attacks. The models are implemented with the Python libraries Keras, TensorFlow, and scikit-learn, then trained and tested with data originating from the CICDDoS2019 dataset, which contains normal data and nine different types of DDoS attacks. The models are compared by computing the Receiver Operating Characteristic (ROC) curve and its Area Under the Curve (AUC) score, and the F1 score; for both measures, the mean over all attack types is used. The computations show that the autoencoder performs better than PCA with respect to both the mean AUC score (0.981 compared to 0.967) and the mean F1 score (0.987 compared to 0.978). The thesis goes on to discuss why the autoencoder performs better than PCA and finally draws conclusions based on the insights of the analysis.
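The PCA side of the comparison reduces to reconstruction-error anomaly detection, sketched below on synthetic two-feature "flows" (not CICDDoS2019): fit principal components on normal traffic only, then flag flows whose reconstruction error exceeds a percentile threshold. The autoencoder detector works analogously, with a learned nonlinear encoder in place of the linear projection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flows: normal traffic lies near a 1-D subspace, attacks sit far off it.
normal = (rng.normal(size=(500, 1)) * 3.0) @ np.array([[2.0, 1.0]])
normal += rng.normal(scale=0.05, size=normal.shape)          # measurement noise
attack = rng.normal(loc=(8.0, -8.0), scale=0.5, size=(20, 2))

# Fit PCA on normal traffic only (unsupervised: no attack labels needed).
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
W = Vt[:1]                               # top principal direction

def recon_error(points):
    C = points - mean
    return np.linalg.norm(C - (C @ W.T) @ W, axis=1)

# Threshold chosen from normal data; attack flows should exceed it.
threshold = np.percentile(recon_error(normal), 99)
flags = recon_error(attack) > threshold
print(flags.mean())                      # fraction of attack flows detected
```

Sweeping the threshold instead of fixing one percentile traces out the ROC curve the thesis uses to compare the two detectors.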
