141

Using Markov Decision Processes and Reinforcement Learning to Guide Penetration Testers in the Search for Web Vulnerabilities

Pettersson, Anders, Fjordefalk, Ossian January 2019 (has links)
Bug bounties are an increasingly popular way of performing penetration tests of web applications. User statistics from bug bounty platforms show that many hackers struggle to find bugs. This report explores a way of using Markov decision processes and reinforcement learning to help hackers find vulnerabilities in web applications, by building a tool that suggests attack surfaces to examine and vulnerability reports to read to acquire the relevant knowledge. The attack surfaces, vulnerabilities, and reports are all derived from a taxonomy of web vulnerabilities created in a collaborating project. A Markov decision process (MDP) was defined; it comprises the environment, the different states of knowledge, and the actions that can take a user from one state of knowledge to another. To suggest the best possible next action, the MDP uses a policy that describes the value of entering each state. Each state is assigned a value, called a Q-value, indicating how close that state is to one in which a vulnerability has been found: a state has a high Q-value if its knowledge gives the user a high probability of finding a vulnerability, and vice versa. This policy was created using the reinforcement learning algorithm Q-learning. The tool was implemented as a web application using Java Spring Boot and ReactJS. The resulting tool is best suited to new hackers who are still learning. The current version is trained on the indexed reports of the vulnerability taxonomy, but future versions should be trained on user behaviour collected from the tool.
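For concreteness, the sketch below shows the tabular Q-learning update the abstract describes: Q-values propagate backward from states where a vulnerability was found, so knowledge states close to a find acquire high values. The state and action names and the hyperparameters are hypothetical stand-ins, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical knowledge states and actions; the thesis derives these
# from a web-vulnerability taxonomy, which is not reproduced here.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    # Standard Q-learning update: move Q toward the bootstrapped target.
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy episode: reading a report (reward 0) then finding a vulnerability (reward 1).
update("novice", "read_xss_report", 0.0, "knows_xss", ["probe_input_fields"])
update("knows_xss", "probe_input_fields", 1.0, "found_vuln", [])
print(Q[("novice", "read_xss_report")], Q[("knows_xss", "probe_input_fields")])
```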
142

Innovative derivative pricing and time series simulation techniques via machine and deep learning

Fu, Weilong January 2022 (has links)
There is a growing number of applications of machine learning and deep learning in quantitative and computational finance. In this thesis, we focus on two of them. In the first application, we employ machine learning and deep learning in derivative pricing. Models that incorporate jumps or stochastic volatility are more complicated than the Black-Merton-Scholes model, and derivatives under these models are harder to price. Since traditional pricing methods are computationally intensive, machine learning and deep learning are employed for fast pricing. In Chapter 2, we propose a method for pricing American options under the variance gamma model. We develop a new fast and accurate approximation method, inspired by the quadratic approximation, that eliminates the time steps required in finite difference and simulation methods while reducing the error by applying a machine learning technique to pre-calculated quantities. We compare the performance of our method with those of existing methods and show that it is efficient and accurate enough for practical use. In Chapters 3 and 4, we propose unsupervised deep learning methods for option pricing under Lévy processes and stochastic volatility, respectively, with a special focus on barrier options in Chapter 4. The unsupervised approach employs a neural network as the candidate option surface and trains it to satisfy the governing equations; by matching the equations and the boundary conditions, the neural network yields an accurate solution. Special structures called singular terms are added to the networks to handle the non-smooth and discontinuous payoffs at the strike and barrier levels, so that the networks can replicate the asymptotic behavior of options at short maturities. Unlike supervised learning, this approach does not require any labels. Once trained, the neural network yields fast and accurate option values. The second application focuses on financial time series simulation using deep learning techniques. Simulation extends the limited real data available for training and evaluating trading strategies; it is challenging because of the complex statistical properties of real financial data. In Chapter 5, we introduce two generative adversarial networks, based on convolutional networks with attention and on transformers, for financial time series simulation. The networks learn the statistical properties in a data-driven manner, and the attention mechanism helps to replicate long-range dependencies. The proposed models are tested on the S&P 500 index and its option data, evaluated with scores based on stylized facts, and compared with a pure convolutional network, QuantGAN. The attention-based networks not only reproduce the stylized facts, including heavy tails, autocorrelation, and cross-correlation, but also smooth the autocorrelation of returns.
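To make the unsupervised, equation-driven idea concrete, here is a minimal sketch that trains a network to satisfy a pricing PDE and its terminal condition, using the plain Black-Scholes equation for a European call as a stand-in (the thesis treats Lévy and stochastic-volatility models, where the equations are more involved). The architecture and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative parameters: rate, volatility, strike, maturity.
r, sigma, K, T = 0.05, 0.2, 1.0, 1.0

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Sample interior collocation points (S, t).
    S = (2 * K * torch.rand(256, 1)).requires_grad_(True)
    t = (T * torch.rand(256, 1)).requires_grad_(True)
    V = net(torch.cat([S, t], dim=1))
    V_t = torch.autograd.grad(V.sum(), t, create_graph=True)[0]
    V_S = torch.autograd.grad(V.sum(), S, create_graph=True)[0]
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    # Black-Scholes residual: V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V = 0.
    residual = V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V
    # Terminal condition: V(S, T) = max(S - K, 0) for a European call.
    S_T = 2 * K * torch.rand(256, 1)
    V_T = net(torch.cat([S_T, torch.full_like(S_T, T)], dim=1))
    payoff = torch.clamp(S_T - K, min=0.0)
    loss = residual.pow(2).mean() + (V_T - payoff).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

No labeled prices appear anywhere in the loop: the network is supervised only by the equation and its boundary data, which is what "unsupervised" means in this context.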
143

Computational Inversion with Wasserstein Distances and Neural Network Induced Loss Functions

Ding, Wen January 2022 (has links)
This thesis presents a systematic computational investigation of loss functions in solving inverse problems of partial differential equations. The primary efforts are spent on understanding optimization-based computational inversion with loss functions defined with the Wasserstein metrics and with deep learning models. The scientific contributions of the thesis can be summarized in two directions. In the first part, we investigate the general impact of different Wasserstein metrics and the properties of the approximate solutions to inverse problems obtained by minimizing loss functions based on such metrics. We contrast the results with those of classical computational inversion using loss functions based on the 𝐿² and 𝐻⁻¹ metrics. We identify critical parameters, both in the metrics and in the inverse problems to be solved, that control the performance of the reconstruction algorithms. We highlight the frequency disparity in reconstructions with the Wasserstein metrics as well as its consequences, for instance the preconditioning effect, the robustness against high-frequency noise, and the loss of resolution when the data contain random noise. We examine the impact of mass unbalance and conduct a comparative study of the differences among, and the important factors of, various unbalanced Wasserstein metrics. In the second part, we propose loss functions built on a novel offline-online computational strategy that couples classical least-squares computational inversion with modern deep learning approaches to full waveform inversion (FWI), achieving advantages that cannot be obtained by either component alone. In a nutshell, we develop an offline learning strategy to construct a robust approximation to the inverse operator and use it to produce a viable initial guess and to design a new loss function for online inversion on a new dataset. We demonstrate through both theoretical analysis and numerical simulations that the neural-network-induced loss functions developed with this coupling strategy improve both the loss landscape and the computational efficiency of FWI, with offline training that remains reliable on moderate computational resources in terms of both training-set size and computational cost.
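The following small experiment (my illustration, not from the thesis) shows one behavior that motivates Wasserstein losses in inversion: for two narrow pulses, the L² misfit saturates once they no longer overlap and carries no information about how far apart they are, while the W₁ distance keeps growing smoothly with the shift. Landscapes of this kind are friendlier to gradient-based inversion.

```python
import numpy as np
from scipy.stats import wasserstein_distance

x = np.linspace(0, 1, 1000)
pulse = lambda c: np.exp(-((x - c) ** 2) / (2 * 0.01 ** 2))  # narrow Gaussian

ref = pulse(0.3)
for shift in [0.0, 0.05, 0.2, 0.4]:
    moved = pulse(0.3 + shift)
    l2 = np.sqrt(np.trapz((moved - ref) ** 2, x))
    # Treat the pulses as (unnormalized) densities on x; scipy normalizes.
    w1 = wasserstein_distance(x, x, u_weights=moved, v_weights=ref)
    print(f"shift={shift:.2f}  L2={l2:.3f}  W1={w1:.3f}")
# L2 is ~constant for all non-overlapping shifts; W1 grows linearly with shift.
```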
144

Learning to Edit Code : Towards Building General Purpose Models for Source Code Editing

Chakraborty, Saikat January 2022 (has links)
The way software developers edit code day-to-day tends to be repetitive, often reusing existing code elements. Many researchers have tried to automate this repetitive editing process by mining specific change templates, but such templates typically must be implemented by hand before they can be applied automatically, which makes template-based code editing tedious to build. In addition, template-based editing is often narrowly scoped and tolerates little noise. Machine learning, especially deep learning-based techniques, can help solve these problems because of their generalization and noise-tolerance capabilities. The advancement of deep neural networks and the availability of vast open-source evolutionary data open up the possibility of automatically learning such templates from the wild and applying them in the appropriate context. However, deep neural network-based modeling of code changes, and of code in general, introduces specific problems that need attention from the research community. For instance, source code exhibits strictly defined syntax and semantics inherited from the properties of the Programming Language (PL), and the source code vocabulary (the possible number of tokens) can be arbitrarily large. This dissertation formulates automated code editing as a multi-modal translation problem: given a piece of code, its context, and some guidance, the objective is to generate the edited code. In particular, we divide the problem into two sub-problems, source code understanding and generation. We empirically show that deep neural networks (and models in general) for these problems should be aware of the PL properties (i.e., syntax and semantics). This dissertation investigates two primary directions for endowing models with knowledge of PL properties: (i) explicit encoding, where we design models catering to a specific property, and (ii) implicit encoding, where we train a very large model to learn these properties from a very large corpus of source code in unsupervised ways. With explicit encoding, we custom-design the model to cater to the property in question. As an example of such models, we developed CODIT, a tree-based neural model for syntactic correctness. We designed CODIT based on the Context Free Grammar (CFG) of the programming language. Instead of generating source code directly, CODIT first generates the tree structure by sampling production rules from the CFG; this mechanism rules out infeasible production-rule selections. In a later stage, CODIT generates the edited code conditioned on the tree generated earlier, and such conditioning makes the edited code syntactically correct. CODIT showed promise in learning code edit patterns in the wild and proved effective in automatic program repair. In another empirical study, we showed that a graph-based model is better suited to source code understanding tasks such as vulnerability detection. With implicit encoding, on the other hand, we use a very large (several hundred million parameters) yet generic model, pre-trained on a super-large (usually hundreds of gigabytes) collection of source code and code metadata. We empirically show that, if sufficiently pre-trained, such models can learn PL properties such as syntax and semantics. In this dissertation, we developed two such pre-trained models with two different learning objectives.
First, we developed PLBART, the first pre-trained encoder-decoder model for source code, and show that such pre-training enables the model to generate syntactically and semantically correct code. We further present an in-depth empirical study of using PLBART in automated code editing. Finally, we developed another pre-trained model, NatGen, to encode into the model the natural coding conventions that developers follow. To design NatGen, we first deliberately modify code away from the developers' written version while preserving the original semantics; we call such transformations 'de-naturalizing' transformations. Following previous studies on induced unnaturalness in code, we defined several such transformations and applied them to developer-written code. We pre-train NatGen to reverse their effect: NatGen thus learns to generate code similar to what developers write by undoing the unnaturalness our 'de-naturalizing' transformations induce. NatGen has performed well in code editing and other source code generation tasks. The models and empirical studies in this dissertation go beyond automated code editing and apply to other software engineering automation problems such as code translation, code summarization, code generation, vulnerability detection, and clone detection. Thus, we believe this dissertation will influence and contribute to the advancement of AI4SE and PLP.
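As a toy illustration of the 'de-naturalizing' idea (my construction, not the NatGen code), the sketch below rewrites every assigned local variable to a meaningless name while preserving semantics; a model pre-trained to undo edits of this kind is forced to learn natural naming conventions.

```python
import ast

class DeNaturalize(ast.NodeTransformer):
    """Rename every assigned variable to a meaningless name, preserving semantics."""
    def __init__(self):
        self.names = {}

    def visit_Name(self, node):
        # Invent new names only for stores (assignment targets); loads reuse
        # the mapping, and names never assigned (e.g. builtins) are untouched.
        if isinstance(node.ctx, ast.Store) and node.id not in self.names:
            self.names[node.id] = f"var_{len(self.names)}"
        node.id = self.names.get(node.id, node.id)
        return node

src = "price = 3\nquantity = 4\ntotal = price * quantity\nprint(total)"
tree = DeNaturalize().visit(ast.parse(src))
print(ast.unparse(tree))
# -> var_0 = 3
#    var_1 = 4
#    var_2 = var_0 * var_1
#    print(var_2)
```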
145

Deep Networks Through the Lens of Low-Dimensional Structure: Towards Mathematical and Computational Principles for Nonlinear Data

Buchanan, Sam January 2022 (has links)
Across scientific and engineering disciplines, the algorithmic pipeline for processing and understanding data increasingly revolves around deep learning, a data-driven approach that learns features for tasks using high-capacity compositionally-structured models, large datasets, and scalable gradient-based optimization. At the same time, modern deep learning models are resource-inefficient, requiring up to trillions of trainable parameters to succeed on tasks, and their predictions are notoriously susceptible to perceptually indistinguishable changes to the input, limiting their use in applications where reliability and safety are critical. Fortunately, data in scientific and engineering applications are not generic but structured: they possess low-dimensional nonlinear structure that enables statistical learning in spite of their inherent high dimensionality. Studying the interactions between deep learning models, training algorithms, and structured data therefore represents a promising approach to understanding practical issues such as resource efficiency, robustness, and invariance in deep learning. To begin to realize this program, one needs mathematical model problems that capture both the nonlinear structure of data in deep learning applications and the features of practical deep learning pipelines, as well as a way to translate mathematical insight into practical progress on these issues. We address both considerations in this thesis. First, we pose and study the multiple manifold problem, a binary classification task modeled on applications in computer vision, in which a deep fully-connected neural network is trained to separate two low-dimensional submanifolds of the unit sphere. We analyze the one-dimensional case, proving for a rather general family of configurations that when the network depth is large relative to certain geometric and statistical properties of the data, the width is a sufficiently large polynomial in the depth, and the number of samples from the manifolds is polynomial in the depth, randomly initialized gradient descent rapidly learns to classify the two manifolds perfectly with high probability. Our analysis demonstrates concrete benefits of depth and width in the context of a practically motivated model problem: depth acts as a fitting resource, with larger depths corresponding to smoother networks that can more readily separate the class manifolds, and width acts as a statistical resource, enabling concentration of the randomly initialized network and its gradients. Next, we turn to the design of network architectures that achieve invariance to nuisance transformations in vision systems. Existing approaches to invariance scale exponentially with the dimension of the family of transformations, making them unable to cope with natural variabilities in visual data such as changes in pose and perspective. We identify a common limitation of these approaches (they rely on sampling to traverse the high-dimensional space of transformations) and propose a new computational primitive for building invariant networks based instead on optimization, which in many scenarios provides a provably more efficient method for high-dimensional exploration than sampling.
We provide empirical and theoretical corroboration of the efficiency gains and soundness of our proposed method, and demonstrate its utility in constructing an efficient invariant network for a simple hierarchical object detection task when combined with unrolled optimization. Together, the results in this thesis establish the first end-to-end theoretical guarantees for training deep neural networks on data with nonlinear low-dimensional structure, and provide a methodology for translating these insights into practical neural network architectures with efficiency and invariance benefits.
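The sketch below sets up a toy instance of the multiple manifold problem (my illustration, not the thesis experiments): two one-dimensional curves on the unit sphere in R³ are sampled and a deep fully-connected ReLU network is trained by gradient descent to separate them. The curve construction, depth, and width are arbitrary assumptions.

```python
import torch
import torch.nn as nn

def curve(t, offset):
    # A circle-like one-dimensional curve, radially projected onto S^2.
    z = torch.stack([torch.cos(t), torch.sin(t), torch.full_like(t, offset)], dim=1)
    return z / z.norm(dim=1, keepdim=True)

t = torch.rand(512) * 2 * torch.pi
X = torch.cat([curve(t, 0.5), curve(t, -0.5)])   # two separated manifolds
y = torch.cat([torch.ones(512), torch.zeros(512)])

depth, width = 8, 128                            # fitting vs statistical resources
layers = [nn.Linear(3, width), nn.ReLU()]
for _ in range(depth - 2):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(500):
    loss = loss_fn(net(X).squeeze(1), y)
    opt.zero_grad(); loss.backward(); opt.step()

acc = ((net(X).squeeze(1) > 0) == y.bool()).float().mean()
print(f"train accuracy: {acc:.3f}")
```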
146

Anomalous Behavior Detection in Aircraft based Automatic Dependent Surveillance–Broadcast (ADS-B) system using Deep Graph Convolution and Generative model (GA-GAN)

Kenaudekar, Jayesh January 2022 (has links)
The Automatic Dependent Surveillance-Broadcast (ADS-B) system is a key component of the Next Generation Air Transportation System (NextGen) that manages increasingly congested airspace and operations. As of January 2020, the U.S. Federal Aviation Administration (FAA) mandates the use of ADS-B as a key component of the NextGen project. ADS-B provides accurate aircraft localization via satellite navigation and efficient air traffic management, and it improves the safety of thousands of passengers travelling worldwide. While the benefits of ADS-B are well known, the fact that ADS-B is an open protocol introduces various exploitable security vulnerabilities. One practical threat is the ADS-B spoofing attack targeting the ground station, in which a ground-based attacker manipulates the International Civil Aviation Organization (ICAO) address (a unique identifier for each aircraft) in forwarded ADS-B messages to fake the appearance of non-existent aircraft or to masquerade as a trusted aircraft. Such attacks can confuse and misguide aircraft pilots or air traffic control personnel and cause dangerous maneuvers. In this project, we build a robust Intrusion Detection System (IDS) to detect anomalous behavior and classify attacks on the ADS-B protocol in real time during air-ground communication. The proposed IDS is a three-stage deep learning framework built using spatial graph convolutional networks and a deep autoregressive generative model. In stage 1, a graph convolutional network classifies the data as attacked or normal across the entire airspace of an operating aircraft. In stage 2, we analyze sequences of airspace states with a generative WaveNet model to identify anomalies, simultaneously outputting the feature under attack. The final stage is an aircraft (ICAO) classification module based on the unique RF transmitter signal characteristics of each aircraft. Together, these stages allow the ground station operator to examine each incoming message based on PHY-layer features as well as the message data fields (such as position, velocity, and altitude) and to flag suspicious messages. The model is trained in a supervised fashion using federated learning, in which the data remain private to their owner (the aircraft-ground station) and are never sent explicitly to a cloud server; the server receives only the learned parameters for inference. Training the entire model on the edge in this way preserves data privacy and mitigates potential adversarial attacks. We aim for a high-precision, real-time IDS with a very low false-alarm rate suitable for real-world deployment.
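A minimal sketch of the stage-1 idea follows (my illustration; the thesis architecture is more elaborate): each aircraft is a node, edges connect aircraft within radio range, and graph-convolution layers mix each node's ADS-B features with its neighbors' before a per-node attacked/normal score is produced. All dimensions and the toy adjacency are assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Mean-aggregation graph convolution: mix each node with its neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, X, A):
        # A: adjacency with self-loops; normalize the sum by node degree.
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin((A @ X) / deg))

n_aircraft, feat = 6, 4           # features: e.g., position, velocity, altitude
X = torch.randn(n_aircraft, feat)  # stand-in ADS-B state per aircraft
A = torch.eye(n_aircraft)          # self-loops
A[0, 1] = A[1, 0] = 1.0            # two aircraft assumed within radio range

g1, g2 = GCNLayer(feat, 16), GCNLayer(16, 16)
head = nn.Linear(16, 1)            # per-node attacked/normal logit
scores = head(g2(g1(X, A), A))
print(scores.shape)                # (n_aircraft, 1)
```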
147

Moving Toward Intelligence: A Hybrid Neural Computing Architecture for Machine Intelligence Applications

Bai, Kang Jun 08 June 2021 (has links)
Rapid advances in machine learning have made information analysis more efficient than ever before. However, to extract valuable information from trillions of bytes of data for learning and decision-making, general-purpose computing systems or cloud infrastructures are often deployed to train large-scale neural networks, consuming colossal amounts of resources and exposing significant security issues. Among potential approaches, the neuromorphic architecture, which is not only amenable to low-cost implementation but can also be deployed with an in-memory computing strategy, has been recognized as an important method for accelerating machine intelligence applications. In this dissertation, theoretical and practical properties of a hybrid neural computing architecture are introduced; it utilizes a dynamic reservoir with short-term memory to enable historical learning, with the potential to classify non-separable functions. The hybrid neural computing architecture integrates both spatial and temporal processing structures, sidestepping the limitations introduced by the vanishing gradient. Specifically, this is made possible through four critical features: (i) a feature extractor built upon the in-memory computing strategy, (ii) a high-dimensional mapping with the Mackey-Glass neural activation, (iii) a delay-dynamic system with historical learning capability, and (iv) a unique learning mechanism that updates only the readout weights. To support the integration of neuromorphic architecture and deep learning strategies, the first generation of the delay-feedback reservoir network was successfully fabricated in 2017, and a spatial-temporal hybrid neural network with an improved delay-feedback reservoir network was successfully fabricated in 2020. To demonstrate effectiveness and performance across diverse machine intelligence applications, the introduced network structures are evaluated through (i) time series prediction, (ii) image classification, (iii) speech recognition, (iv) modulation symbol detection, (v) radio fingerprint identification, and (vi) clinical disease identification. / Doctor of Philosophy / Deep learning strategies are the cutting edge of artificial intelligence, in which artificial neural networks are trained to extract key features or find similarities in raw sensory information. This is made possible through multiple processing layers with a colossal number of neurons, in a way similar to humans. Deep learning strategies running on von Neumann computers are deployed worldwide. However, in today's data-driven society, general-purpose computing systems and cloud infrastructures can no longer offer a timely response while also exposing significant security issues. Arising with the introduction of neuromorphic architectures, application-specific integrated circuit chips have paved the way for machine intelligence applications in recent years. The major contributions of this dissertation are the design and fabrication of a new class of hybrid neural computing architecture and the implementation of various deep learning strategies in diverse machine intelligence applications.
The resulting hybrid neural computing architecture offers an alternative way to accelerate the neural computations required by sophisticated machine intelligence applications with a simple system-level design. It thereby opens the door to low-power system-on-chip design for future intelligent computing, and provides practical design solutions and performance improvements for Internet of Things applications.
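The reservoir principle named in feature (iv), training only the readout weights over a fixed dynamic reservoir, can be sketched in software as a simple echo-state-style network (my illustration; the dissertation implements a hardware delay-feedback reservoir with Mackey-Glass activation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 1000

# Fixed random input and recurrent weights; these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]  # toy input signal
y_target = np.roll(u, -5, axis=0)                   # predict 5 steps ahead

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    # Reservoir update: short-term memory arises from the recurrence.
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train only the linear readout, here by ridge regression.
lam = 1e-4
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ y_target)
pred = states @ W_out
print("train MSE:", float(np.mean((pred - y_target) ** 2)))
```

Because only the readout is solved for, there is no backpropagation through time, which is exactly why this scheme sidesteps the vanishing-gradient problem and maps well onto fixed hardware dynamics.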
148

Human-Robot Interaction with Pose Estimation and Dual-Arm Manipulation Using Artificial Intelligence

Ren, Hailin 16 April 2020 (has links)
This dissertation focuses on applying artificial intelligence techniques to human-robot interaction, involving human pose estimation and dual-arm robotic manipulation. The motivating application behind this work is autonomous victim extraction in disaster scenarios using a conceptual design of a Semi-Autonomous Victim Extraction Robot (SAVER). SAVER is equipped with an advanced sensing system, two powerful robotic manipulators, and a head and neck stabilization system to achieve safe and effective autonomous victim extraction, thereby reducing the potential risk to field medical providers. This dissertation formulates the autonomous victim extraction process using a dual-arm robotic manipulation system for human-robot interaction. Following the general process of Human-Robot Interaction (HRI), which includes perception, control, and decision-making, this research applies machine learning techniques to human pose estimation, robotic manipulator modeling, and dual-arm robotic manipulation, respectively. For human pose estimation, an efficient parallel ensemble-based neural network is developed to provide real-time human pose estimation on 2D RGB images. A 13-limb, 14-joint skeleton model is used in this perception network, and each ensemble is designed to detect a specific limb. The parallel structure offers two main benefits: (1) the parallel ensemble architecture and multiple Graphics Processing Units (GPUs) make distributed computation possible, and (2) each ensemble can be deployed independently, making processing more efficient when only specific limbs need to be detected for a task. Precise robotic manipulator modeling simplifies controller design and improves trajectory-following performance. Traditional system modeling relies on first principles, simplifying assumptions, and prior knowledge; any imperfection in these can lead to an analytical model that differs from the real system. Machine learning techniques have been applied in this field in pursuit of faster computation and more accurate estimation, but they usually require a large dataset, and obtaining data from the real system can be costly in both time and maintenance. In this research, a series of Generative Adversarial Networks (GANs) is proposed to efficiently identify the inverse kinematics and inverse dynamics of robotic manipulators. One four-Degree-of-Freedom (DOF) and one six-DOF robotic manipulator are used with datasets of different sizes to evaluate the performance of the proposed GANs. The methods can also be adapted to other systems whose datasets are too limited for general machine learning techniques. In dual-arm robotic manipulation, basic behaviors such as reaching, pushing objects, and picking objects up are learned using reinforcement learning. A teacher-student advising framework is proposed to learn a single neural network that controls dual-arm robotic manipulators using prior knowledge from controlling a single manipulator; a sketch of this advising loop appears after the abstract. Simulation and experimental results demonstrate the efficiency of the proposed framework compared with learning from scratch. Another concern in robotic manipulation is safety constraints. A variable-reward hierarchical reinforcement learning framework is proposed to handle sparse rewards and constrained tasks.
A task of picking up two objects and placing them at target positions while keeping them at a fixed distance within a threshold is used to evaluate the performance of the proposed method, with comparisons to other state-of-the-art methods. Finally, the three proposed components are integrated into a single system. Experimental evaluation with a full-size manikin was performed to validate the concept of applying artificial intelligence techniques to autonomous victim extraction using a dual-arm robotic manipulation system. / Doctor of Philosophy / Using mobile robots for autonomous victim extraction in disaster scenarios reduces the potential risk to field medical providers. This dissertation focuses on applying artificial intelligence techniques to this human-robot interaction task, involving pose estimation and dual-arm manipulation for victim extraction. The work is based on a design of a Semi-Autonomous Victim Extraction Robot (SAVER), which is equipped with an advanced sensing system, two powerful robotic manipulators, and a head and neck stabilization system attached to an embedded declining stretcher to achieve safe and effective autonomous victim extraction. The overall research in this dissertation therefore addresses human pose estimation, robotic manipulator modeling, and dual-arm robotic manipulation for human pose adjustment. To accurately estimate the human pose in real-time applications, the dissertation proposes a neural network that takes advantage of multiple Graphics Processing Units (GPUs). Considering the cost of data collection, the dissertation proposes novel machine learning techniques that obtain the inverse dynamic and inverse kinematic models of the robotic manipulators from limited collected data. Respecting safety constraints is another requirement when robots interact with humans; this dissertation proposes reinforcement learning techniques that efficiently train a dual-arm manipulation system not only to perform basic behaviors, such as reaching, pushing, and picking up and placing objects, but also to take safety constraints into consideration while performing tasks. Finally, the three components mentioned above are integrated into a complete system, and experimental validation and results are discussed at the end of this dissertation.
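The following toy sketch shows the shape of a teacher-student advising loop of the kind described above (my construction; the policies, uncertainty trigger, and advice budget are hypothetical):

```python
import random

class TeacherStudent:
    """Advise the student from a pretrained teacher under a limited budget."""
    def __init__(self, teacher_policy, student_policy, budget=1000):
        self.teacher = teacher_policy
        self.student = student_policy
        self.budget = budget

    def act(self, state, uncertainty):
        # Ask the teacher only while budget remains and the student is
        # uncertain (the 0.5 threshold is an arbitrary assumption).
        if self.budget > 0 and uncertainty > 0.5:
            self.budget -= 1
            return self.teacher(state)
        return self.student(state)

# Hypothetical stand-in policies for illustration.
teacher = lambda s: "reach"                                   # pretrained single-arm skill
student = lambda s: random.choice(["reach", "push", "pick"])  # dual-arm learner
agent = TeacherStudent(teacher, student, budget=2)
for step in range(4):
    print(agent.act(state=None, uncertainty=random.random()))
```

Once the budget is exhausted, the student acts alone, so the teacher's single-manipulator knowledge shapes early exploration without constraining the final dual-arm policy.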
149

A Deep Learning Approach to Side-Channel Analysis of Cryptographic Hardware

Ramezanpour, Keyvan 08 September 2020 (has links)
With the increased growth of the Internet of Things (IoT) and the physical exposure of devices to adversaries, a class of physical attacks called side-channel analysis (SCA) has emerged that compromises the security of systems. While the security claims of cryptographic algorithms are based on the complexity of classical cryptanalysis attacks, they exclude information leakage by implementations on hardware platforms. Recent standardization processes require assessment of hardware security against SCA. In this dissertation, we study SCA based on deep learning techniques (DL-SCA) as a universal analysis toolbox for assessing the leakage of secret information by hardware implementations. We demonstrate that DL-SCA techniques provide a trade-off between the amount of prior knowledge of a hardware implementation and the number of measurements required to identify the secret key. A DL-SCA attack based on supervised learning requires a training set, including information about the details of the hardware implementation, to succeed. Supervised learning has been widely used in power analysis (PA) to recover the secret key from a limited number of measurements. We demonstrate a similar trend in fault injection analysis (FIA) by introducing fault intensity map analysis with a neural network key distinguisher (FIMA-NN). We use dynamic timing simulations on an ASIC implementation of AES to develop a statistical model for biased fault injection, and employ the model to train a convolutional neural network (CNN) key distinguisher that achieves nearly 10× the efficiency of classical FIA techniques. When a priori knowledge of the details of a hardware implementation is limited, we propose DL-SCA techniques based on unsupervised learning, called SCAUL, that extract the secret information from measurements without requiring a training set. We further demonstrate the application of reinforcement learning by introducing the SCARL attack, which estimates a proper model for the leakage of secret data in a self-supervised manner. We demonstrate the success of the SCAUL and SCARL attacks using power measurements from FPGA implementations of AES and of the Ascon authenticated cipher, respectively, recovering entire 128-bit secret keys without any prior knowledge or training data. / Doctor of Philosophy / With the growth of the Internet of Things (IoT) and mobile devices, cryptographic algorithms have become essential components of end-to-end cybersecurity. A cryptographic algorithm is a highly nonlinear mathematical function that often requires a secret key. Only the user who knows the secret key is able to interpret the output of the algorithm to find the encoded information. Standardized algorithms are usually secure against attacks in which an attacker attempts to find the secret key given a set of input data and the corresponding outputs of the algorithm. The security of algorithms is defined based on the complexity of known cryptanalysis attacks to recover the secret key. However, a device executing a cryptographic algorithm leaks information about the secret key. Several studies have shown that the behavior of a device, such as its power consumption, electromagnetic radiation, and response to external stimulation, provides additional information that an attacker can exploit to find the secret key with much less effort than cryptanalysis attacks. Hence, the exposure of devices to adversaries has enabled the class of physical attacks called side-channel analysis (SCA).
In SCA, an attacker attempts to find the secret key by observing the behavior of the device executing the algorithm. Recent government and industry standardization processes, which select future cryptographic algorithms, require assessing the security of hardware implementations against SCA in addition to the algorithmic-level security of the cryptographic systems. The difficulty of an SCA attack depends on the details of the hardware implementation and the form of information leakage on a particular device. The diversity of possible hardware implementations and platforms, including application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and microprocessors, has hindered the development of a unified measure of complexity for SCA attacks. In this research, we study SCA with deep learning techniques (DL-SCA) as a universal methodology for evaluating the leakage of secret information by hardware platforms. We demonstrate that DL-SCA based on supervised learning can be considered a generalization of classical SCA techniques and is able to find the secret information from a limited number of measurements. However, supervised learning techniques require a training set that includes information about the details of the hardware implementation. We propose unsupervised learning techniques that are able to find the secret key even without knowledge of the details of the hardware. We further demonstrate the ability of reinforcement learning to estimate a proper model for data leakage in a self-supervised approach. We demonstrate that DL-SCA techniques are able to find the secret information even if the timing of data leakage in the measurements is random; hence, traditional countermeasures are unable to protect a hardware implementation against DL-SCA attacks. We propose a unified countermeasure to protect hardware implementations against a wide range of SCA attacks.
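To make the supervised DL-SCA setting concrete, here is a minimal sketch of a CNN key distinguisher (a generic illustration, not the FIMA-NN model from the dissertation): a small 1-D convolutional network maps a power trace to logits over the 256 candidate values of one key byte. The trace length, architecture, and random stand-in data are assumptions.

```python
import torch
import torch.nn as nn

class TraceCNN(nn.Module):
    """Classify one key byte (0..255) from a side-channel trace."""
    def __init__(self, trace_len=700):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(2),
            nn.Conv1d(8, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(2),
        )
        self.head = nn.Linear(16 * (trace_len // 4), 256)

    def forward(self, x):                # x: (batch, 1, trace_len)
        z = self.features(x)
        return self.head(z.flatten(1))   # logits over 256 key-byte guesses

traces = torch.randn(32, 1, 700)         # stand-in power measurements
labels = torch.randint(0, 256, (32,))    # stand-in key-byte labels (profiling set)
model = TraceCNN()
loss = nn.CrossEntropyLoss()(model(traces), labels)
loss.backward()                          # one training step (optimizer omitted)
print(loss.item())
```

In a real profiling attack, the labels come from a device the attacker controls; at attack time, the per-trace logits are accumulated over many traces and the highest-scoring key byte is taken as the guess.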
150

Improving the Accessibility of Arabic Electronic Theses and Dissertations (ETDs) with Metadata and Classification

Abdelrahman, Eman January 2021 (has links)
Much research has been done on extracting data from scientific papers, journals, and articles. However, Electronic Theses and Dissertations (ETDs) remain an unexplored genre of data in the research fields of natural language processing and machine learning. Moreover, much of the related research involves data in the English language. Arabic data such as news and tweets have begun to receive attention in the past decade, but Arabic ETDs remain an untapped source of data despite the many benefits they offer students and future generations of scholars. Ways of improving the browsability and accessibility of such data include annotation, indexing, parsing, translation, and classification. Classification is essential for the searchability and management of data and can be manual or automated; the latter is preferable when handling growing volumes of data. There are two main roadblocks to automatic subject classification of Arabic ETDs. The first is the unavailability of a public corpus of Arabic ETDs. The second is the linguistic complexity of Arabic, especially in academic documents. This research presents the Otrouha project, which aims to build a corpus of key metadata of Arabic ETDs and to provide a methodology for their automatic subject classification. The first goal is pursued by collecting data from the AskZad Digital Library. The second is achieved by exploring different machine learning and deep learning techniques. The experimental results show that deep learning with pretrained language models gives the highest classification performance, indicating that language models contribute significantly to natural language understanding. / M.S. / An Electronic Thesis or Dissertation (ETD) is an openly accessible electronic version of a graduate student's research thesis or dissertation. It documents their main research effort and becomes available in the university library in place of a paper copy. Over time, collections of ETDs have been gathered and made available online through different digital libraries. ETDs are a valuable source of information for scholars, researchers, and librarians. As most Middle Eastern universities move toward digitization, the need to make Arabic ETDs more accessible grows with their numbers. One way to improve their accessibility and searchability is to provide automatic rather than manual subject classification. This thesis project focuses on building a corpus of metadata of Arabic ETDs and a framework for their automatic subject classification, which is expected to pave the way for more exploratory research on this valuable genre of data.
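A minimal sketch of the best-performing approach, fine-tuning a pretrained language model for subject classification, might look as follows. The model name, the number of subject labels, and the toy records are assumptions, not details from the thesis.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Fine-tune a pretrained Arabic language model for subject classification
# (illustrative; the model choice and label count are assumed).
model_name = "aubmindlab/bert-base-arabertv2"   # an Arabic BERT; assumed choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=10)                  # e.g., 10 subject categories

# Hypothetical metadata records: (abstract text, subject label id).
texts = ["ملخص أطروحة عربية ...", "ملخص آخر ..."]
labels = torch.tensor([0, 3])

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
out = model(**enc, labels=labels)   # cross-entropy loss computed internally
out.loss.backward()                 # one fine-tuning step (optimizer omitted)
print(float(out.loss))
```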
