31

INVESTIGATING ESCAPE VULNERABILITIES IN CONTAINER RUNTIMES

Michael J Reeves (10797462) 14 May 2021 (has links)
Container adoption has exploded in recent years, with over 92% of companies using containers as part of their cloud infrastructure. This explosion is partly due to the easy orchestration and lightweight operation of containers compared to traditional virtual machines. As container adoption increases, servers hosting containers become more attractive targets for adversaries looking to gain control of a container to steal trade secrets, exfiltrate customer data, or hijack hardware for cryptocurrency mining. To control a container host, an adversary can exploit a vulnerability that enables them to escape from the container onto the host. This kind of attack is termed a "container escape" because the adversary is able to execute code on the host from within the isolated container. The vulnerabilities that allow container escape exploits originate from three main sources: (1) container profile misconfiguration, (2) the host's Linux kernel, and (3) the container runtime. While the first two cases have been studied in the literature, to the best of the author's knowledge there is at present no work that investigates the impact of container runtime vulnerabilities. To fill this gap, a survey of container runtime vulnerabilities was conducted, investigating 59 CVEs across 11 different container runtimes. Since CVE data alone would limit the analysis, the investigation focused on the 28 CVEs with publicly available proof-of-concept (PoC) exploits. To facilitate this analysis, each exploit was broken down into a series of high-level commands executed by the adversary, called "steps". Using the steps of each CVE's corresponding exploit, a seven-class taxonomy of these 28 vulnerabilities was constructed, revealing that 46% of the CVEs had a PoC exploit that enabled a container escape.
Since container escapes were the most frequently occurring category, the nine corresponding PoC exploits were further analyzed to reveal that the underlying cause of these container escapes was a host component leaking into the container. This survey provides new insight into system vulnerabilities exposed by container runtimes thereby informing the direction of future research.
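The underlying cause named above, a host component leaking into the container, can be illustrated with a minimal sketch. The helper below flags file descriptors whose symlink targets look like host-side resources; the patterns, function names, and the scan itself are hypothetical illustrations of the leak class, not taken from the thesis:

```python
import os
import re

# Symlink targets that, seen from inside a container, suggest a descriptor
# leaked from the host. The patterns are illustrative; real escapes (e.g.,
# via a runtime's /proc/self/exe handle) are subtler.
SUSPICIOUS = [
    re.compile(r"^/proc/\d+/exe$"),    # handle onto a host binary
    re.compile(r"^/var/lib/docker/"),  # host-side container storage
    re.compile(r"^/run/containerd/"),  # host-side runtime state
]

def flag_leaked_handles(fd_targets):
    """Return the fd symlink targets that match a host-leak pattern."""
    return [t for t in fd_targets if any(p.match(t) for p in SUSPICIOUS)]

def scan_self():
    """Best-effort scan of this process's own descriptors (Linux only)."""
    targets = []
    for fd in os.listdir("/proc/self/fd"):
        try:
            targets.append(os.readlink(f"/proc/self/fd/{fd}"))
        except OSError:
            pass  # descriptor closed while scanning
    return flag_leaked_handles(targets)
```

A process running inside a container should normally see no such targets; any hit is a candidate leaked host component of the kind the surveyed exploits abuse.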
32

Blockchain-Based Security Framework for the Internet of Things and Home Networks

Diego Miguel Mendez Mena (10711719) 27 April 2021 (has links)
During recent years, attacks on Internet of Things (IoT) devices have grown significantly. Cyber criminals have been using compromised IoT machines to attack others, including critical internet infrastructure systems. These attacks increase the urgency for the information security research community to develop new strategies and tools to safeguard vulnerable devices at every level. Millions of intelligent things are now part of home-based networks that are usually disregarded by security solution platforms, but not by malicious entities.
Therefore, this document presents a comprehensive framework that aims to secure home-based networks, as well as corporate and service provider ones. The proposed solution utilizes first-hand information from different actors at different levels to create a decentralized, privacy-aware Cyber Threat Information (CTI) sharing network, capable of automating network responses by relying on the secure properties of the Ethereum blockchain.
33

The DVL in the Details: Assessing Differences in Decoy, Victim, and Law Enforcement Chats with Online Sexual Predators

Tatiana Renae Ringenberg (11203656) 29 July 2021 (has links)
Online sexual solicitors are individuals who deceptively earn the trust of minors online with the goal of eventual sexual gratification. Despite the prevalence of online solicitation, conversations in this domain are difficult to acquire due to the sensitive nature of the data. As a result, researchers studying online solicitors often study conversations between solicitors and decoys, which are publicly available online. However, researchers have begun to believe such conversations are not representative of solicitor-victim conversations. Decoys and law enforcement are restricted in that they are unable to initiate contact, suggest meeting, or begin sexual conversations with an offender. Additionally, decoys and law enforcement officers both have the goal of gathering evidence, which means they often respond positively in contexts that would normally be considered awkward or inappropriate. Multiple researchers have suggested differences may exist between offender-victim and offender-decoy conversations, yet little research has sought to identify the differences and similarities among those talking to solicitors. In this study, the author identifies differences between decoys, officers, and victims within the manipulative process used by online solicitors to entrap victims, known as grooming. The author examines differences that occur within grooming stages and among strategies within those stages. The research in this study has implications for the data choices of future researchers in this domain. Additionally, this research may be used to inform the training process of officers who will engage in online sex stings.
34

FEASIBILITY STUDY USING BLOCKCHAIN TO IMPLEMENT PROOF OF LOCATION

Kristina D. Lister-Gruesbeck (5930723) 17 January 2019 (has links)
The purpose of this thesis is to determine the feasibility of using blockchain to implement proof of location. There has been an increasing demand for a way to create a validated proof of location that is economical, easy to deploy, and portable. There are several reasons for the increased demand for this technology, including the ever-increasing number of mobile gamers who have been able to spoof their location successfully, the increasing number of on-demand package shipments from companies such as Amazon, and the desire to reduce the occurrence of medical errors as well as to hold hospitals accountable for their errors. This technology is also gaining popularity because of the continually increasing number of lost baggage claims that airlines receive, as well as insurance companies' desire to reduce the number of fraud cases related to high-value goods and to increase the probability of their recovery. Within the past year, an extensive amount of research and work has been completed to create an irrefutable method of location verification that permits a user to create time-stamped documentation validating that they were at a particular location at a certain day and time. Additionally, the user is then permitted to release that information at a later date and time convenient for them. This research was conducted using a Raspberry Pi 3B, a Raspberry Pi 3B+, two virtual Raspberry Pis, and two virtual servers, with the goal of downloading and setting up the Ethereum and/or Tendermint blockchain on each piece of equipment. After the blockchain was fully synchronized, it would be used to store the verified, time-stamped location data.
A variety of issues were encountered during the setup and installation of the blockchains on the equipment, including an overclocked processor, which negatively affected the computational abilities of the devices and caused overheating and voltage surges, as well as a variety of software and hardware incompatibilities. Looked at individually, these issues appear to have little impact on the results of this research, but in combination they clearly limited the results that could be obtained. In conclusion, the combination of hardware and software issues, together with the temperature and voltage problems caused by the overheating processor, resulted in several insurmountable obstacles. There are several recommendations for continuing this work, including presyncing the blockchain using a computer, using a device with more functionality and computational ability, attaching a cooling device such as a fan or a heat sink, increasing the available power supply, utilizing an externally powered hard drive for data storage, recreating this research with the goal of determining what process or application was causing the high processor usage, or creating a distributed system that utilizes both physical and virtual equipment to reduce the load on any one type of device.
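The core artifact such a system would anchor on a blockchain can be sketched as a hashed, time-stamped location claim: only the digest is published at the time of the claim, and the full record is revealed later at the user's convenience. The record format, field names, and functions below are hypothetical illustrations, not the thesis's actual design:

```python
import hashlib
import json
import time

def make_location_proof(device_id, lat, lon, ts=None):
    """Build a time-stamped location record and its SHA-256 digest.
    Only the digest would be anchored on-chain; the full record stays
    with the user until they choose to reveal it."""
    record = {
        "device": device_id,
        "lat": round(lat, 6),
        "lon": round(lon, 6),
        "ts": int(time.time()) if ts is None else ts,
    }
    # Canonical JSON so the same record always hashes identically.
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return record, hashlib.sha256(payload.encode()).hexdigest()

def verify_location_proof(record, digest):
    """Re-hash a revealed record and compare with the anchored digest."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest() == digest
```

Because the chain stores only the digest, the claim is binding (any later edit to the record fails verification) without disclosing the location before the user chooses to.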
35

BUILDING FAST, SCALABLE, LOW-COST, AND SAFE RDMA SYSTEMS IN DATACENTERS

Shin-yeh Tsai (7027667) 16 October 2019 (has links)
Remote Direct Memory Access, or RDMA, is a technology that allows one computer server to directly access the memory of another server without involving its CPU. Compared with traditional network technologies, RDMA offers several benefits including low latency, high throughput, and low CPU utilization. These features are especially attractive to datacenters, and because of this, datacenters have started to adopt RDMA at production scale in recent years.
However, RDMA was designed for confined, single-tenant, High-Performance-Computing (HPC) environments. Many of its design choices do not fit datacenters well, and it cannot be readily used by datacenter applications. To use RDMA, current datacenter applications have to build customized software stacks and fine-tune their performance. In addition, RDMA offers limited scalability and does not have good support for resource sharing or protection across different applications.
This dissertation sets out to seek solutions that solve these issues of RDMA in a systematic way and make it more suitable for a wide range of datacenter applications.
Our first task is to make RDMA more scalable, easier to use, and better at supporting safe resource sharing in datacenters. For this purpose, we propose to add an indirection layer on top of native RDMA to virtualize its low-level abstraction into a high-level one. This indirection layer safely manages RDMA resources for different datacenter applications and also provides a means for better scalability.
After making RDMA more suitable for datacenter environments, our next task is to build applications that can exploit all the benefits of (our improved) RDMA. We designed a set of systems that store data in remote persistent memory and let client machines access these data through pure one-sided RDMA communication. These systems lower monetary and energy cost compared to traditional datacenter data stores (because no processor is needed at the remote persistent memory), while achieving good performance and reliability.
Our final task focuses on a completely different and so far largely overlooked aspect: the security implications of RDMA. We discovered several key vulnerabilities in the one-sided communication pattern and in RDMA hardware. We exploited one of them to create a novel set of remote side-channel attacks, which we are able to launch on a widely used RDMA system with real RDMA hardware.
This dissertation is one of the initial efforts to make RDMA more suitable for datacenter environments from the scalability, usability, cost, and security perspectives. We hope that the systems we built, as well as the lessons we learned, can be helpful to future networking and systems researchers and practitioners.
36

Um modelo discricionário de delegação e revogação / A discretionary model of delegation and revocation

Negrello, Fabio 14 May 2007 (has links)
Advisor: Jacques Wainer / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: This thesis presents a discretionary model of delegation that makes it possible to control the creation of delegation chains, both by limiting the length of such chains and by defining conditions for the use and acceptance of new delegations. Together with the proposed delegation mechanism, a revocation mechanism is presented that considers the maximum length of each delegation chain and the strength relation between delegations, allowing existing subjects to retain the largest set of rights after a revocation. One of the main advantages of defining conditions associated with each delegation is the possibility of enforcing content- and context-based restrictions. While content-based access control allows access to a given object or resource to be controlled based on the attributes and characteristics of the object itself, context-based access control considers context information related to the system as a whole, or to the context in which the user requested access. A mechanism is presented that allows this type of information to be used in the definition of conditions on delegations. A prohibition mechanism is also presented, which makes it possible to prevent users from exercising certain rights, even when those rights were received through delegations from other users of the system.
Through the use of conditions it is also possible to define temporal delegations, which are delegations considered valid only during certain periods of time, or while dependency conditions on other delegations are met, as will be discussed. Finally, a prototype of an authorization server is presented, which was used to evaluate the proposed model. In this prototype the main algorithms were implemented, and a unified architecture was formulated for the creation and revocation of delegations, as well as for the verification of authorizations. / Master's degree in Computer Science
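The two central controls described above, a cap on chain length and per-delegation conditions (including temporal ones), can be sketched as follows. The class and function names are hypothetical and the logic is a simplified illustration, not the thesis's exact formalism:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Delegation:
    """One link in a delegation chain (simplified illustrative model)."""
    grantor: str
    grantee: str
    right: str
    depth: int  # how many further re-delegations this link still allows
    condition: Optional[Callable[[dict], bool]] = None
    parent: Optional["Delegation"] = None

def redelegate(d, new_grantee, condition=None):
    """Extend the chain by one link; refuse when the length budget is spent.
    The child never allows more further hops than the parent permits."""
    if d.depth <= 0:
        raise ValueError("delegation chain length limit reached")
    return Delegation(d.grantee, new_grantee, d.right, d.depth - 1,
                      condition, parent=d)

def usable(d, context):
    """A delegation is usable only while every condition along its chain
    holds in the current context. A temporal delegation is simply one
    whose condition checks the current time in the context."""
    node = d
    while node is not None:
        if node.condition is not None and not node.condition(context):
            return False
        node = node.parent
    return True
```

Revoking a link would invalidate its whole subtree; the thesis's revocation mechanism additionally weighs chain length and delegation strength when deciding which rights survive.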
37

Anomaly Detection Techniques for the Protection of Database Systems against Insider Threats

Asmaa Mohamed Sallam (6387488) 15 May 2019 (has links)
The mitigation of insider threats against databases is a challenging problem since insiders often have legitimate privileges to access sensitive data. Conventional security mechanisms, such as authentication and access control, are thus insufficient for the protection of databases against insider threats; such mechanisms need to be complemented with real-time anomaly detection techniques. Since the malicious activities aiming at stealing data may consist of multiple steps executed across temporal intervals, database anomaly detection is required to track users' actions across time in order to detect correlated actions that collectively indicate the occurrence of anomalies. The existing real-time anomaly detection techniques for databases can detect anomalies in the patterns of referencing the database entities, i.e., tables and columns, but are unable to detect the increase in the sizes of data retrieved by queries; neither can they detect changes in the users' data access frequencies. According to recent security reports, such changes are indicators of potential data misuse and may be the result of malicious intents for stealing or corrupting the data. In this thesis, we present techniques for monitoring database accesses and detecting anomalies that are considered early signs of data misuse by insiders. Our techniques are able to track the data retrieved by queries and sequences of queries, the frequencies of execution of periodic queries and the frequencies of referencing the database tuples and tables. We provide detailed algorithms and data structures that support the implementation of our techniques and the results of the evaluation of their implementation.
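One of the signals described above, a jump in the size of data retrieved by a user's queries, can be sketched with a simple per-user running-statistics detector. This is a hypothetical illustration using Welford's online algorithm and a standard-deviation cutoff, not the thesis's actual algorithm:

```python
import math
from collections import defaultdict

class ResultSizeMonitor:
    """Flag queries whose result size deviates sharply from a user's
    history, using Welford's online mean/variance and a z-score cutoff."""

    def __init__(self, threshold=3.0, warmup=10):
        self.threshold = threshold  # std-devs above the mean that count as anomalous
        self.warmup = warmup        # observations needed before flagging starts
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # per-user n, mean, M2

    def observe(self, user, rows):
        """Record the number of rows a query returned; True if anomalous."""
        n, mean, m2 = self.stats[user]
        anomalous = False
        if n >= self.warmup and n > 1:
            std = math.sqrt(m2 / (n - 1))
            anomalous = std > 0 and (rows - mean) > self.threshold * std
        # Welford update of the running mean and sum of squared deviations.
        n += 1
        delta = rows - mean
        mean += delta / n
        m2 += delta * (rows - mean)
        self.stats[user] = [n, mean, m2]
        return anomalous
```

A production detector would also track reference frequencies per table and correlate steps across queries, as the thesis describes; this sketch isolates only the result-size signal.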
38

Ranking of Android Apps based on Security Evidences

Ayush Maharjan (9728690) 07 January 2021 (has links)
With the large number of Android apps available in app stores such as Google Play, it has become increasingly challenging to choose among them. Users generally select apps based on the ratings and reviews of other users, or on recommendations from the app store. But with the growing security and privacy concerns around mobile apps, it is important to take security into consideration when choosing an app. This thesis proposes different ranking schemes for Android apps based on security evidences gathered from available static code analysis tools. It proposes ranking schemes based on the categories of evidences reported by the tools, on the frequency of each category, and on the severity of each evidence. The evidences are gathered, and rankings are generated, based on the theory of Subjective Logic. In addition to these ranking schemes, the tools themselves are evaluated against the Ghera benchmark. Finally, this work proposes two additional schemes that combine the evidences from different tools to provide a combined ranking.
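Subjective Logic, mentioned above, maps evidence counts to an opinion (belief, disbelief, uncertainty) whose projected probability can serve as a ranking score. The sketch below uses the standard evidence-to-opinion mapping; treating tool findings as positive/negative evidence counts, and the function name itself, are assumptions for illustration:

```python
def opinion_from_evidence(positive, negative, base_rate=0.5, prior_weight=2.0):
    """Map evidence counts to a subjective-logic opinion and ranking score.

    `positive`/`negative` are counts of benign vs. adverse security
    evidences for an app (an illustrative reading of tool output).
    Returns (belief, disbelief, uncertainty, projected probability)."""
    total = positive + negative + prior_weight
    belief = positive / total
    disbelief = negative / total
    uncertainty = prior_weight / total  # shrinks as evidence accumulates
    score = belief + base_rate * uncertainty  # projected probability
    return belief, disbelief, uncertainty, score
```

A useful property for ranking: an app with few reported evidences keeps high uncertainty, so its score stays near the base rate instead of looking artificially safe, while well-analyzed apps earn scores driven by their actual evidence.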
39

A Systematic Framework For Analyzing the Security and Privacy of Cellular Networks

Syed Rafiul Hussain (5929793) 16 January 2020 (has links)
Cellular networks are an indispensable part of a nation's critical infrastructure. They not only support functionality that is critical for our society as a whole (e.g., business, public-safety message dissemination) but also positively impact us at a more personal level by enabling applications that often improve our quality of life (e.g., navigation). Due to deployment constraints and backward compatibility issues, the various cellular protocol versions were not designed and deployed with a strong security and privacy focus. Because of their ubiquitous presence for connecting billions of users and their use in critical applications, cellular networks are, however, lucrative attack targets for motivated and resourceful adversaries.
In this dissertation, we investigate the security and privacy of 4G LTE and 5G protocol designs and deployments. More precisely, we systematically identify design weaknesses and implementation oversights affecting the critical operations of the networks, and also design countermeasures to mitigate the identified vulnerabilities and attacks. Towards this goal, we developed a systematic model-based testing framework called LTEInspector. LTEInspector can be used to identify not only protocol design weaknesses but also deployment oversights. It leverages the combined reasoning capabilities of a symbolic model checker and a cryptographic protocol verifier by combining them in a lazy fashion. We instantiated LTEInspector with three critical procedures (i.e., attach, detach, and paging) of 4G LTE. Our analysis uncovered 10 new exploitable vulnerabilities along with 9 prior attacks on 4G LTE, all of which have been verified in a real testbed.
Since identifying all classes of attacks with a single framework like LTEInspector is nearly impossible, we show that it is possible to identify sophisticated security and privacy attacks by devising techniques specifically tailored to a particular protocol and by leveraging the findings of LTEInspector. As a case study, we analyzed the paging protocol of 4G LTE and the current version of 5G, and observed that by leveraging the findings from LTEInspector together with other side-channel information and a probabilistic reasoning technique, it is possible to mount sophisticated privacy attacks that can expose a victim device's coarse-grained location information and sensitive identifiers when the adversary is equipped only with the victim's phone number or another soft identity (e.g., a social networking profile). An analysis of LTEInspector's findings shows that the absence of broadcast authentication enables an adversary to mount a wide range of security and privacy attacks. We thus develop an attack-agnostic generic countermeasure that provides broadcast authentication without violating any common-sense deployment constraints. Finally, we design a practical countermeasure for mitigating the side-channel attacks in the paging procedure without breaking backward compatibility.
40

FUZZING HARD-TO-COVER CODE

Hui Peng (10746420) 06 May 2021 (has links)
Fuzzing is a simple yet effective approach to discovering bugs by repeatedly testing the target system using randomly generated inputs. In this thesis, we identify several limitations in state-of-the-art fuzzing techniques: (1) the coverage wall issue: fuzzer-generated inputs cannot bypass complex sanity checks in the target programs and are unable to cover code paths protected by such checks; (2) the inability to adapt to the interfaces through which fuzzer-generated inputs are injected; one important example of such an interface is the software/hardware interface between drivers and their devices; (3) the dependency on code coverage feedback, which makes it hard to apply fuzzing to targets where code coverage collection is challenging (due to proprietary components or special software design).
To address the coverage wall issue, we propose T-Fuzz, a novel approach that overcomes the issue from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the coverage wall is reached, a lightweight, dynamic-tracing-based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs to be discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic-execution-based approach to filter out false positives and reproduce true bugs in the original program.
By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge (CGC) dataset, the LAVA-M dataset, and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, we found 4 new bugs in previously-fuzzed programs and libraries.
To address the inability to adapt to interfaces, we propose USBFuzz. We target the USB interface, fuzzing across the software/hardware barrier. USBFuzz uses device emulation to inject fuzzer-generated input into drivers under test, and applies coverage-guided fuzzing to device drivers when code coverage collection is supported by the kernel. At its core, USBFuzz emulates a special USB device that provides data to the device driver (when it performs IO operations). This allows us to fuzz the input space of drivers from the device's perspective, an angle that is difficult to achieve with real hardware. USBFuzz discovered 53 bugs in Linux (of which 37 are new, and 36 are memory bugs of high security impact, potentially allowing arbitrary read or write in the kernel address space), one bug in FreeBSD, four bugs (resulting in Blue Screens of Death) in Windows, and three bugs (two causing an unplanned restart, one freezing the system) in macOS.
To break the dependency on code coverage feedback, we propose WebGLFuzzer. To fuzz the WebGL interface (a set of JavaScript APIs in browsers allowing high-performance graphics rendering that takes advantage of GPU acceleration on the device), where code coverage collection is challenging, WebGLFuzzer internally uses a log-guided fuzzing technique. WebGLFuzzer does not depend on code coverage feedback; instead, it makes use of the log messages emitted by browsers to guide its input mutation. Compared with coverage-guided fuzzing, our log-guided fuzzing technique performs more meaningful mutation under the guidance of the log messages. To this end, WebGLFuzzer uses static analysis to identify which argument to mutate or which API call to insert into the current program to fix the internal WebGL program state, given a log message emitted by the browser. WebGLFuzzer is under evaluation and so far it has found 6 bugs, one of which is able to freeze the X Server.
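The log-guided mutation idea can be sketched as a loop that consults the target's log output to pick a targeted repair instead of a blind mutation. The log messages, rules, and function names below are hypothetical; the real WebGLFuzzer derives its repair rules through static analysis of the browser:

```python
import random

# Hypothetical mapping from browser log messages to repair actions on a
# WebGL "program" modeled as a list of API-call strings.
LOG_RULES = {
    "INVALID_OPERATION: no buffer bound": lambda prog: prog + ["bindBuffer"],
    "INVALID_VALUE: size out of range": lambda prog: prog[:-1] + ["bufferData(size=16)"],
}

def log_guided_step(program, run_target):
    """One fuzzing iteration: run the program, and if the target emits a
    recognized log message, apply the matching repair; otherwise fall
    back to a blind random mutation."""
    log = run_target(program)
    for message, repair in LOG_RULES.items():
        if message in log:
            return repair(program)
    # No guidance from the log: insert a random call at a random position.
    i = random.randrange(len(program) + 1)
    return program[:i] + ["randomCall()"] + program[i:]
```

The payoff over coverage guidance is that each recognized log message deterministically fixes the broken program state, so subsequent iterations reach deeper API behavior instead of repeatedly failing the same validation.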
