
INVESTIGATING ESCAPE VULNERABILITIES IN CONTAINER RUNTIMES

Michael J Reeves (10797462), 14 May 2021
Container adoption has exploded in recent years, with over 92% of companies using containers as part of their cloud infrastructure. This explosion is partly due to the easy orchestration and lightweight operation of containers compared to traditional virtual machines. As container adoption increases, servers hosting containers become more attractive targets for adversaries looking to gain control of a container to steal trade secrets, exfiltrate customer data, or hijack hardware for cryptocurrency mining. To control a container host, an adversary can exploit a vulnerability that enables them to escape from the container onto the host. This kind of attack is termed a “container escape” because the adversary is able to execute code on the host from within the isolated container. The vulnerabilities that allow container escape exploits originate from three main sources: (1) container profile misconfiguration, (2) the host’s Linux kernel, and (3) the container runtime. While the first two cases have been studied in the literature, to the best of the author’s knowledge, there is at present no work that investigates the impact of container runtime vulnerabilities. To fill this gap, a survey of container runtime vulnerabilities was conducted, investigating 59 CVEs across 11 different container runtimes. Since CVE data alone would limit the depth of the analysis, the investigation focused on the 28 CVEs with publicly available proof-of-concept (PoC) exploits. To facilitate this analysis, each exploit was broken down into a series of high-level commands executed by the adversary, called “steps”. Using the steps of each CVE’s corresponding exploit, a seven-class taxonomy of these 28 vulnerabilities was constructed, revealing that 46% of the CVEs had a PoC exploit that enabled a container escape.
Since container escapes were the most frequently occurring category, the nine corresponding PoC exploits were further analyzed to reveal that the underlying cause of these container escapes was a host component leaking into the container. This survey provides new insight into system vulnerabilities exposed by container runtimes thereby informing the direction of future research.
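The root cause identified above, a host component leaking into the container, can be illustrated with a minimal sketch. The check below scans /proc/mounts-style data for mount points that match a short list of sensitive host paths; the path list and helper names are assumptions for illustration, not part of the thesis.

```python
# Illustrative check for host components leaked into a container: parse
# /proc/mounts-format text (one mount per line, mount point in the second
# whitespace-separated field) and flag mount points matching a sensitive
# host-path prefix. The SENSITIVE_HOST_PATHS list is an assumed example,
# not an exhaustive policy.

SENSITIVE_HOST_PATHS = ("/proc/sys", "/sys/fs/cgroup", "/var/run/docker.sock")

def leaked_host_mounts(mounts_text):
    """Return mount points that match a sensitive host-path prefix."""
    leaks = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 2:
            continue
        mount_point = fields[1]
        if mount_point.startswith(SENSITIVE_HOST_PATHS):
            leaks.append(mount_point)
    return leaks

sample = """overlay / overlay rw 0 0
tmpfs /dev tmpfs rw 0 0
/dev/sda1 /var/run/docker.sock ext4 rw 0 0"""
print(leaked_host_mounts(sample))  # → ['/var/run/docker.sock']
```

A real runtime audit would also inspect device nodes, capabilities, and namespace membership; this sketch shows only the mount-leak pattern.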

Low rank methods for network alignment

Huda Nassar (7047152), 15 August 2019
Network alignment is the problem of finding a common subgraph between two graphs, and more generally k graphs. The results of network alignment are often used for information transfer, which makes it a powerful tool for deducing information or insight about networks. Network alignment is tightly related to the subgraph isomorphism problem, which is known to be NP-hard; this makes the network alignment problem extremely hard in practice. Some algorithms have been devised to approach it by solving a relaxed version of the NP-hard problem or by defining certain heuristic measures. These algorithms normally work well when there is some form of prior known similarity between the nodes of the graphs to be aligned. The absence of such information makes the problem more challenging; in this scenario, these algorithms often require much more time to finish executing, and sometimes fail altogether. The version of network alignment that this thesis tackles is the one where such prior similarity measures are absent. In this thesis, we address three versions of network alignment: (i) multimodal network alignment, (ii) standard pairwise network alignment, and (iii) multiple network alignment. A key common component of the algorithms presented in this thesis is exploiting a low-rank structure in the network alignment problem, thus producing algorithms that run much faster than classic network alignment algorithms.
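The low-rank idea can be sketched as follows: if a node-similarity matrix factors as S = UVᵀ with small rank k, each pairwise score is a k-term dot product, so candidate matches can be ranked without ever materializing the full similarity matrix. The toy factors and helper names below are illustrative assumptions, not the thesis's algorithms.

```python
# Sketch of exploiting a rank-k factorization S = U @ V^T: score node
# pairs lazily from k-dimensional factor rows instead of storing the
# full |V_A| x |V_B| similarity matrix. The tiny factors are made-up
# numbers for illustration.

def score(u_row, v_row):
    """Rank-k similarity score: dot product of two k-dim factor rows."""
    return sum(a * b for a, b in zip(u_row, v_row))

def best_match(U, V, i):
    """For node i of graph A, return the node j of graph B with the
    highest low-rank similarity score."""
    return max(range(len(V)), key=lambda j: score(U[i], V[j]))

U = [[0.9, 0.1],   # graph A nodes, k = 2 factor rows each
     [0.2, 0.8]]
V = [[1.0, 0.0],   # graph B nodes
     [0.1, 0.9]]

print([best_match(U, V, i) for i in range(len(U))])  # → [0, 1]
```

The memory saving is the point: storing U and V costs O((|V_A| + |V_B|)·k) instead of O(|V_A|·|V_B|) for the dense similarity matrix.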

INTERACTIVE AND INTELLIGENT CAMERA VIEW COMPOSING

Hao Kang (7012772), 14 August 2019
Camera view composing, such as photography, has become inseparable from everyday life. Especially with the development of drone technology, the flying mobile camera is accessible and affordable and has been used to take impressive photos. However, the process of acquiring the desired view requires manual searches and adjustments, which are usually time-consuming and tedious. The situation is exacerbated by the difficulty of controlling a mobile camera with many degrees of freedom. Composing a well-framed view is complicated, because experience, timing, and aesthetics are all indispensable. Therefore, professional view composing with a mobile camera is not an easy task for most people. Powered by deep learning, recent breakthroughs in artificial intelligence have enabled machines to perform human-level automation in several tasks. The advances in automatic decision-making and autonomous control have the potential to improve the camera view composing process significantly.

We observe that (a) human-robot interaction can be more intuitive and natural for photography tasks, and (b) drone photography tasks can be further automated by learning professional photo-taking patterns with data-driven methods. In this work, we present two novel frameworks for drone photography based on these two observations. First, we demonstrate FlyCam, a multi-touch, gesture-controlled gimbaled-drone photography framework. FlyCam abstracts the camera and the drone into a single flying-camera object and supports the entire control intuitively on a single mobile device with simple touch gestures. Second, we present Dr³Cam, a region-of-interest-based, reinforced drone photography framework. The fully automated Dr³Cam is built on top of state-of-the-art reinforcement learning research and enables the camera agent to seek good views and compose visually appealing photos intelligently. Results show that FlyCam can significantly reduce the workload and increase the efficiency of human-robot interaction, while Dr³Cam performs human-level view-composing automation for drone photography tasks.

INFERENCE OF RESIDUAL ATTACK SURFACE UNDER MITIGATIONS

Kyriakos K Ispoglou (6632954), 14 May 2019
Despite the broad diversity of attacks and the many different ways an adversary can exploit a system, each attack can be divided into distinct phases. These phases include the discovery of a vulnerability in the system, its exploitation, and achieving persistence on the compromised system for (potential) further compromise and future access. Determining the exploitability of a system, and hence the success of an attack, remains a challenging, manual task, not only because the problem cannot be formally defined but also because advanced protections and mitigations further complicate the analysis and hence raise the bar for any successful attack. Nevertheless, it is still possible for an attacker to circumvent all of the existing defenses under certain circumstances.

In this dissertation, we define and infer the Residual Attack Surface of a system. That is, we expose the limitations of state-of-the-art mitigations by showing practical ways to circumvent them. This work is divided into four parts. It assumes an attack with three phases and proposes new techniques to infer the Residual Attack Surface at each stage.

In the first part, we focus on vulnerability discovery. We propose FuzzGen, a tool for automatically generating fuzzer stubs for libraries. The synthesized fuzzers are target-specific, thus resulting in high code coverage. This enables developers to expose and fix vulnerabilities (which reside deep in the code and require initializing a complex state to trigger them) before they can be exploited. We then move to the vulnerability exploitation part and present a novel technique called Block Oriented Programming (BOP) that automates data-only attacks. Data-only attacks defeat advanced control-flow hijacking defenses such as Control Flow Integrity. Our framework, called BOPC, maps arbitrary exploit payloads into execution traces and encodes them as a set of memory writes. Therefore, an attacker’s intended execution “sticks” to the execution flow of the underlying binary and never departs from it. In the third part of the dissertation, we present an extension of BOPC that provides measurements giving strong indications of which types of exploit payloads are impossible to execute. BOPC thus enables developers to test what data an attacker could compromise and enables evaluation of the Residual Attack Surface to assess an application’s risk. Finally, in the last part, which concerns achieving persistence on the compromised system, we present malWASH, a new technique to construct arbitrary malware that evades current dynamic and behavioral analysis. The desired malware is split into hundreds (or thousands) of little pieces, and each piece is injected into a different process. A special emulator coordinates and synchronizes the execution of all individual pieces, thus achieving a “distributed execution” across multiple address spaces. malWASH highlights weaknesses of current dynamic and behavioral analysis schemes and argues for full-system provenance.

Our vision is to expose all the weaknesses of the deployed mitigations, protections, and defenses through the Residual Attack Surface. That way, we can help the research community reinforce the existing defenses or come up with new, more effective ones.
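As a rough illustration of the shape of a synthesized fuzzer stub (initialize the complex state a library function needs, then drive it with mutated inputs and watch for crashes), here is a hedged Python sketch; the target function, its setup, and the crash condition are hypothetical placeholders, not FuzzGen output.

```python
# Hypothetical fuzzer stub shape: a stub encodes the state setup a deep
# library function requires, then loops feeding it random inputs and
# recording any that crash. The target and its "bug" are invented for
# illustration only.

import random

def target_parse(state, data):
    """Stand-in for a library function with a deep, state-dependent bug."""
    if state["ready"] and data[:2] == b"\xde\xad":
        raise RuntimeError("crash: unhandled input")
    return len(data)

def fuzz_one_round(seed, rounds=1000):
    rng = random.Random(seed)
    state = {"ready": True}        # the complex initialization a stub encodes
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target_parse(state, data)
        except RuntimeError:
            crashes.append(data)   # most inputs return cleanly
    return crashes
```

The value of a generated stub lies in the `state` setup: without it, inputs never reach the deep code path, which is why FuzzGen's target-specific stubs achieve high coverage.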

Collaborative Platform for Computational Thinking Assessment

Arjun Shakdher (6636098), 10 June 2019
Computational Thinking (CT) is an integral process of thinking in humans that allows them to solve complex problems efficiently and effectively by breaking a problem down into smaller parts and using abstraction to create generalizable solutions. While the term CT has gained a lot of popularity in current education and research, there is still considerable ambiguity when it comes to defining exactly what CT encompasses. Since the definitions and characteristics that make up CT vary so much, it is extremely difficult to measure CT in people. This thesis explains how different industry experts and organizations view CT and describes the importance of developing and integrating such a method of thinking in everyone, not just computer science professionals. The literature review also includes a comprehensive analysis of different tests and tools created to measure CT in people. This study proposes a web-based collaborative CT assessment tool that can be an effective instrument for teachers in assessing the CT skills of students who are part of the Teaching Engineering Concepts to Harness Future Innovators and Technologists (TECHFIT) program, funded through NSF DRL-1312215 and NSF DRL-1640178. The vision of this tool is to become a go-to platform for CT assessment, where questions curated collaboratively by experts can be used to reliably assess the CT skills of anyone interested in measuring them.

BUILDING FAST, SCALABLE, LOW-COST, AND SAFE RDMA SYSTEMS IN DATACENTERS

Shin-yeh Tsai (7027667), 16 October 2019
Remote Direct Memory Access, or RDMA, is a technology that allows one server to directly access the memory of another server without involving its CPU. Compared with traditional network technologies, RDMA offers several benefits, including low latency, high throughput, and low CPU utilization. These features are especially attractive to datacenters, and because of this, datacenters have started to adopt RDMA at production scale in recent years.

However, RDMA was designed for confined, single-tenant, High-Performance Computing (HPC) environments. Many of its design choices do not fit datacenters well, and it cannot be readily used by datacenter applications. To use RDMA, current datacenter applications have to build customized software stacks and fine-tune their performance. In addition, RDMA offers limited scalability and does not have good support for resource sharing or protection across different applications.

This dissertation sets out to seek solutions that can solve the issues of RDMA in a systematic way and make it more suitable for a wide range of datacenter applications.

Our first task is to make RDMA more scalable, easier to use, and better at supporting safe resource sharing in datacenters. For this purpose, we propose to add an indirection layer on top of native RDMA to virtualize its low-level abstraction into a high-level one. This indirection layer safely manages RDMA resources for different datacenter applications and also provides a means for better scalability.

After making RDMA more suitable for datacenter environments, our next task is to build applications that can exploit all the benefits of (our improved) RDMA. We designed a set of systems that store data in remote persistent memory and let client machines access these data through pure one-sided RDMA communication. These systems lower monetary and energy costs compared to traditional datacenter data stores (because no processor is needed at the remote persistent memory), while achieving good performance and reliability.

Our final task focuses on a completely different and so far largely overlooked aspect: the security implications of RDMA. We discovered several key vulnerabilities in the one-sided communication pattern and in RDMA hardware. We exploited one of them to create a novel set of remote side-channel attacks, which we are able to launch on a widely used RDMA system with real RDMA hardware.

This dissertation is one of the initial efforts to make RDMA more suitable for datacenter environments along the scalability, usability, cost, and security dimensions. We hope that the systems we built, as well as the lessons we learned, can be helpful to future networking and systems researchers and practitioners.
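The indirection-layer idea can be sketched in miniature: hide raw one-sided reads and writes at registered offsets behind a high-level key-value interface that tracks per-application ownership. This is an illustrative Python model, not a real RDMA API; all class and method names are assumptions.

```python
# Toy model of an indirection layer over one-sided remote memory access:
# RemoteMemory stands in for a registered region reachable by one-sided
# READ/WRITE; IndirectionLayer maps (app, key) -> (offset, length) and
# checks ownership, so applications share one region safely. Purely
# illustrative -- real RDMA verbs and registration are not modeled.

class RemoteMemory:
    """Stand-in for a registered remote memory region."""
    def __init__(self, size):
        self.buf = bytearray(size)

    def read(self, offset, length):           # models a one-sided READ
        return bytes(self.buf[offset:offset + length])

    def write(self, offset, data):            # models a one-sided WRITE
        self.buf[offset:offset + len(data)] = data

class IndirectionLayer:
    """High-level key-value interface over the raw region."""
    def __init__(self, region):
        self.region = region
        self.table = {}                       # (app, key) -> (offset, length)
        self.next_free = 0

    def put(self, app, key, data):
        off = self.next_free
        self.next_free += len(data)
        self.table[(app, key)] = (off, len(data))
        self.region.write(off, data)

    def get(self, app, key):
        if (app, key) not in self.table:
            raise PermissionError("unknown key or wrong application")
        off, length = self.table[(app, key)]
        return self.region.read(off, length)

layer = IndirectionLayer(RemoteMemory(64))
layer.put("app-A", "x", b"hello")
print(layer.get("app-A", "x"))  # → b'hello'
```

The point of the indirection is that clients never name raw offsets, so the layer can enforce isolation and relocate data for scalability without changing the application-facing interface.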

THERMAL IMAGE ANALYSIS FOR FAULT DETECTION AND DIAGNOSIS OF PV SYSTEMS

Hyewon Jeon (7523927), 28 April 2020
This research presents thermal image analysis for Fault Detection and Diagnosis (FDD) of Photovoltaic (PV) systems. The traditional manual approach to PV inspection is generally more time-consuming, more dangerous, and less accurate than the modern approach of PV inspection using Aerial Thermography (AT). The thermal image analysis conducted in this research will contribute to utilizing thermography and UAVs for PV inspection by providing a more accurate and cost-efficient diagnosis of PV faults. In this research, PV module inspection was achieved in two steps: (i) PV monitoring and (ii) PV Fault Detection and Diagnosis (FDD). In the PV monitoring stage, PV cells were monitored by aerial thermography and the thermal data was acquired for the next step. In the PV FDD stage, hot-spot phenomena were detected and the condition of the PV modules was measured. The proposed research will help address the problems of modern PV inspection and, eventually, contribute to the performance of PV power generation.
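A minimal sketch of the hot-spot detection step, assuming a mean-plus-delta criterion on a grid of cell temperatures (the abstract does not specify the exact detection rule, and the sample temperatures are made up):

```python
# Illustrative hot-spot check on thermal data: flag PV cells whose
# temperature exceeds the module mean by more than `delta` degrees C.
# Both the rule and the sample module are assumptions for illustration.

def hot_spots(temps, delta=10.0):
    """Return (row, col) of cells hotter than the module mean by > delta."""
    cells = [t for row in temps for t in row]
    mean = sum(cells) / len(cells)
    return [(i, j)
            for i, row in enumerate(temps)
            for j, t in enumerate(row)
            if t - mean > delta]

module = [
    [35.0, 36.0, 35.5],
    [36.5, 58.0, 35.0],   # one abnormally hot cell
    [35.5, 36.0, 34.5],
]
print(hot_spots(module))  # → [(1, 1)]
```

In a real AT pipeline the grid would come from a radiometric thermal image after per-module segmentation; the thresholding idea stays the same.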

Anomaly Detection Techniques for the Protection of Database Systems against Insider Threats

Asmaa Mohamed Sallam (6387488), 15 May 2019
The mitigation of insider threats against databases is a challenging problem since insiders often have legitimate privileges to access sensitive data. Conventional security mechanisms, such as authentication and access control, are thus insufficient for the protection of databases against insider threats; such mechanisms need to be complemented with real-time anomaly detection techniques. Since the malicious activities aiming at stealing data may consist of multiple steps executed across temporal intervals, database anomaly detection is required to track users' actions across time in order to detect correlated actions that collectively indicate the occurrence of anomalies. The existing real-time anomaly detection techniques for databases can detect anomalies in the patterns of referencing the database entities, i.e., tables and columns, but are unable to detect the increase in the sizes of data retrieved by queries; neither can they detect changes in the users' data access frequencies. According to recent security reports, such changes are indicators of potential data misuse and may be the result of malicious intents for stealing or corrupting the data. In this thesis, we present techniques for monitoring database accesses and detecting anomalies that are considered early signs of data misuse by insiders. Our techniques are able to track the data retrieved by queries and sequences of queries, the frequencies of execution of periodic queries and the frequencies of referencing the database tuples and tables. We provide detailed algorithms and data structures that support the implementation of our techniques and the results of the evaluation of their implementation.
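The size-tracking idea can be sketched minimally: keep a per-user baseline of rows returned by a recurring query and flag executions that retrieve far more data than usual. The 3x factor and minimum history length below are illustrative assumptions; the thesis develops much richer profiles (query sequences, periodic-query frequencies, tuple and table references).

```python
# Illustrative result-size monitor: per (user, query) history of row
# counts; an execution is flagged when it returns more than `factor`
# times the historical mean, once enough history exists. Parameters are
# assumed values for illustration, not from the thesis.

from collections import defaultdict

class ResultSizeMonitor:
    def __init__(self, factor=3.0, min_history=5):
        self.history = defaultdict(list)   # (user, query) -> row counts
        self.factor = factor
        self.min_history = min_history

    def observe(self, user, query, rows):
        """Record an execution; return True if it looks anomalous."""
        past = self.history[(user, query)]
        anomalous = (len(past) >= self.min_history and
                     rows > self.factor * (sum(past) / len(past)))
        past.append(rows)
        return anomalous

mon = ResultSizeMonitor()
for rows in [100, 110, 90, 105, 95]:       # normal baseline executions
    mon.observe("alice", "SELECT ... FROM accounts", rows)
print(mon.observe("alice", "SELECT ... FROM accounts", 5000))  # → True
```

Because the check runs per execution, it works in real time, which is the operating mode the thesis argues such detection must support.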

Neural Representation Learning for Semi-Supervised Node Classification and Explainability

Hogun Park (9179561), 28 July 2020
Many real-world domains are relational, consisting of objects (e.g., users and papers) linked to each other in various ways. Because class labels in graphs are often only available for a subset of the nodes, semi-supervised learning for graphs has been studied extensively to predict the unobserved class labels. For example, we can predict political views in a partially labeled social graph dataset and predict the expected gross incomes of movies in an actor/movie graph with a few labels. Recently, advances in representation learning for graph data have made great strides for semi-supervised node classification. However, most of the methods have mainly focused on learning node representations by considering simple relational properties (e.g., random walks) or aggregating nearby attributes, and it is still challenging to learn complex interaction patterns in partially labeled graphs and provide explanations of the learned representations.

In this dissertation, multiple methods are proposed to alleviate both challenges for semi-supervised node classification. First, we propose a graph neural network architecture, REGNN, that leverages local inferences for unlabeled nodes. REGNN performs graph convolution to enable label propagation via high-order paths and predicts class labels for unlabeled nodes. In particular, our proposed attention layer of REGNN measures the role equivalence among nodes and effectively reduces the noise generated during the aggregation of observed labels from neighbors at various distances. Second, we also propose a neural network architecture that jointly captures both temporal and static interaction patterns, which we call Temporal-Static-Graph-Net (TSGNet). The architecture learns a latent representation of each node in order to encode complex interaction patterns. Our key insight is that leveraging both a static neighbor encoder, which learns aggregate neighbor patterns, and a graph neural network-based recurrent unit, which captures complex interaction patterns, improves the performance of node classification. Lastly, in spite of the better performance of representation learning on node classification tasks, neural network-based representation learning models are still less interpretable than previous relational learning models due to the lack of explanation methods. To address this problem, we show that nodes with high bridgeness scores have larger impacts on node embeddings such as DeepWalk, LINE, Struc2Vec, and PTE under perturbation. However, computing bridgeness scores is computationally heavy, so we propose a novel gradient-based explanation method, GRAPH-wGD, to find nodes with high bridgeness efficiently. In our evaluations, our proposed architectures for semi-supervised node classification (REGNN and TSGNet) consistently improve predictive performance on real-world datasets. GRAPH-wGD also identifies important nodes as global explanations: perturbing the highly ranked nodes and re-learning the low-dimensional node representations for the DeepWalk and LINE embedding methods significantly changes both the predicted probabilities on node classification tasks and the k-nearest neighbors in the embedding space.

Community Recommendation in Social Networks with Sparse Data

Emad Rahmaniazad (9725117), 07 January 2021
Recommender systems are widely used in many domains. In this work, the importance of a recommender system in an online learning platform is discussed. After explaining the concept of adding an intelligent agent to online education systems, some features of the Course Networking (CN) website are demonstrated. Finally, the relation between CN, the intelligent agent (Rumi), and the recommender system is presented, along with a comparison of three different approaches for building a community recommendation system. The results show that Neighboring Collaborative Filtering (NCF) outperforms both the transfer learning method and the continuous bag-of-words approach. The NCF algorithm has a general format with two different implementations that can be used for other recommendations, such as course, skill, major, and book recommendations.
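A neighborhood-style collaborative filter of the kind described can be sketched as follows: score candidate communities for a user by similarity-weighted votes from other users with overlapping memberships. The cosine measure over binary membership sets and the toy data are assumptions for illustration, not the thesis's exact NCF formulation.

```python
# Illustrative neighborhood collaborative filtering for community
# recommendation: users are represented by their community-membership
# sets; a candidate community is scored by summing, over other users who
# belong to it, their cosine similarity with the target user.

from math import sqrt

def cosine(a, b):
    """Cosine similarity between two binary membership sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(user, memberships, top_n=1):
    """Rank communities the user is not in by similarity-weighted votes."""
    scores = {}
    for other, communities in memberships.items():
        if other == user:
            continue
        sim = cosine(memberships[user], communities)
        for c in communities - memberships[user]:
            scores[c] = scores.get(c, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

memberships = {
    "u1": {"python", "ml"},
    "u2": {"python", "ml", "data-viz"},
    "u3": {"cooking"},
}
print(recommend("u1", memberships))  # → ['data-viz']
```

This neighborhood structure is what makes the approach reusable for the other recommendation targets mentioned (courses, skills, majors, books): only the membership sets change.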
