261 |
High-security image encryption based on a novel simple fractional-order memristive chaotic system with a single unstable equilibrium point
Rahman, Z.S.A., Jasim, B.H., Al-Yasir, Yasir I.A., Abd-Alhameed, Raed 14 January 2022 (has links)
Yes / Fractional-order chaotic systems have more complex dynamics than integer-order chaotic systems, so investigating fractional chaotic systems for building image cryptosystems has recently become popular. In this article, a fractional-order memristor has been developed, tested, numerically analyzed, electronically realized, and digitally implemented. Based on this memristor, a novel simple three-dimensional (3D) fractional-order memristive chaotic system with a single unstable equilibrium point is proposed: the fractional-order memristor is connected in parallel with a capacitor and an inductor to construct the chaotic system. The system's nonlinear dynamic characteristics have been studied both analytically and numerically. To demonstrate chaotic behavior in this new system, equilibrium points, phase portraits of the chaotic attractor, bifurcation diagrams, and Lyapunov exponents are investigated. Furthermore, the proposed fractional-order memristive chaotic system was implemented on a microcontroller (Arduino Due) to demonstrate its applicability in real-world digital systems. Then, building on the chaotic behavior of the memristive model, an encryption approach is applied to grayscale images. To increase the encryption algorithm's robustness against pirate attacks, every pixel value is included in the secret key. The initial conditions of the state variables, the parameters, and the fractional-order derivative values of the memristive chaotic system are used to construct the keyspace of the applied cryptosystem. To establish the security strength of the employed encryption approach, cryptanalysis metrics are reported in detail: histogram analysis, keyspace analysis, key sensitivity, correlation coefficients, entropy analysis, time-efficiency analysis, and comparisons with related work in the field. Finally, images of different sizes have been encrypted and decrypted to verify that the employed approach can handle varying image dimensions. The common cryptanalysis metric values obtained are keyspace = 2^648, NPCR = 0.99866, UACI = 0.49963, H(s) = 7.9993, and time efficiency = 0.3 s. The numerical simulation results and the security-metric investigations demonstrate the accuracy, high security level, and time efficiency of the cryptosystem, which exhibits high robustness against different types of pirate attacks.
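The differential metrics quoted above (NPCR and UACI) have conventional definitions; the following is a minimal Python sketch, not the authors' code, of how they are typically computed between two cipher images produced from plaintexts differing in a single pixel. All names are illustrative.

```python
# A minimal sketch (not the authors' code) of the two differential metrics
# reported above, computed between two cipher images c1 and c2 that were
# produced from plaintexts differing in a single pixel.
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray) -> tuple[float, float]:
    """c1, c2: equal-shape uint8 grayscale cipher images."""
    npcr = (c1 != c2).mean()                                # changed-pixel rate
    uaci = (np.abs(c1.astype(int) - c2.astype(int)) / 255.0).mean()
    return npcr, uaci

# For independent uniform 8-bit ciphertexts the expected values are roughly
# NPCR ~ 0.9961 and UACI ~ 0.3346; values far below these suggest weak diffusion.
rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(npcr_uaci(c1, c2))
```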
|
262 |
Evaluating performance of homomorphic encryption applied on delta encoding / Prestandautvärdering av homomorfisk kryptering applicerat på delta enkodning
Dani, János Richard January 2022 (has links)
Homomorphic encryption is an encryption scheme that allows for simple operations on encrypted data. These operations are mainly boolean circuits combined into more complex arithmetic operations, rotations, and others. Homomorphic encryption was first implemented in 2009, and in the following decade many different versions emerged. The early schemes were mainly proofs of concept, whereas later schemes have been used in practical applications such as databases where queries are answered without any decryption on the server. Another practical example is genome sequencing, which benefits from supercomputers but involves very sensitive data; with the help of homomorphic encryption it was shown that this could be done without exposing any unencrypted data on the server. While these applications have met with varying success, one area that has not been investigated is the use of homomorphic encryption with delta encoding. Delta encoding is a method of encoding a set (e.g., a set of characters) such that the set is expressed as an original (a starting point) plus deltas (changes). A typical use case for delta encoding: a user wants to edit a file located in the cloud and, to save bandwidth, encodes a delta locally. This delta is then sent to the cloud service and decoded together with the original version to create the updated version in the cloud. However, there is a privacy-infringement risk here. When standard encryption is used, the delta and the original must be decrypted to perform the decoding, so a malicious actor who gains access to the data on the cloud machine would have access to unencrypted data. For example, the cloud provider could snoop on its customers or have a policy that lets it use the users' data. Homomorphic encryption would make this much harder, since the data would remain encrypted while the decoding is performed. However, homomorphic encryption comes with a large overhead and is complex to tune, even with today's libraries. To investigate the combination of homomorphic encryption and delta encoding, a testbed was created in which a client and a server act as user and cloud provider. The testbed consists of different configurations of delta encodings and homomorphic encryption schemes running different test cases. The configurations range from non-encrypted to homomorphically encrypted, with different kinds of delta encodings, to measure the performance overhead of homomorphic encryption. The test cases are designed to show what overhead can be expected in different scenarios and which operations take the most time. With this testbed and these test cases, the results showed a substantial overhead from homomorphic encryption. However, many optimizations could be made to increase efficiency and make homomorphic encryption a viable solution; for example, the decoding algorithm could be optimized to use homomorphic operations more efficiently. The tests showed that most of the runtime, when using homomorphic encryption, is spent on the server. Most of the client's runtime consists of one-time operations, namely creating keys that can be reused.
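As a concrete illustration of the delta-encoding workflow described above, here is a minimal Python sketch (mine, not the thesis's testbed code): the updated file is expressed as the original plus a list of byte-level changes. Under homomorphic encryption the decode step would operate on ciphertexts instead of plain bytes.

```python
# A minimal sketch of delta encoding: the updated file is expressed as the
# original plus a list of (offset, new_byte) changes.
def encode_delta(original: bytes, updated: bytes) -> list[tuple[int, int]]:
    """Assumes equal lengths; real encoders also handle inserts and deletes."""
    return [(i, b) for i, (a, b) in enumerate(zip(original, updated)) if a != b]

def decode_delta(original: bytes, delta: list[tuple[int, int]]) -> bytes:
    buf = bytearray(original)
    for offset, new_byte in delta:
        buf[offset] = new_byte
    return bytes(buf)

original = b"hello cloud file"
updated  = b"hello CLOUD file"
delta = encode_delta(original, updated)
assert decode_delta(original, delta) == updated
print(delta)   # only the changed bytes need to travel to the server
```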
|
263 |
Square: A New Family of Multivariate Encryption Schemes
Clough, Crystal L. 21 July 2009 (has links)
No description available.
|
264 |
RSA, Public-Key Cryptography, and Authentication Protocols
Wright, Moriah E. 11 June 2012 (has links)
No description available.
|
265 |
A Key Management Architecture for Securing Off-Chip Data Transfers on an FPGA
Graf, Jonathan 04 August 2004 (has links)
Data security is becoming ever more important in embedded and portable electronic devices, and the analysis techniques used by attackers are remarkably sophisticated. A device's external interfaces to memory, and its communications interfaces to other digital devices, are vulnerable to malicious probing and examination; a hostile observer might glean important details of a device's design from such an interface analysis. Defensive measures for protecting a device must therefore be even more sophisticated and robust.
This thesis presents an architecture that acts as a secure wrapper around an embedded application on a Field Programmable Gate Array (FPGA). The architecture includes functional units that serve to authenticate a user over a secure serial interface, create a key with multiple layers of security, and encrypt an external memory interface using that key. In this way, the wrapper protects all of the digital interfaces of the embedded application from external analysis. Cryptographic methods built into the system include an RSA-related secure key exchange, the Secure Hash Algorithm, a certificate storage system, and the Data Encryption Standard algorithm in counter mode. The principles behind the encrypted external memory interface and the secure authentication interface can be adjusted as needed to form a secure wrapper for a wide variety of embedded FPGA applications. / Master of Science
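As a rough illustration of the counter-mode construction named in the abstract, the following Python sketch uses the PyCryptodome library (an assumption; the thesis realizes DES-CTR inside the FPGA wrapper, not in software). Deriving the counter from a word's address gives every off-chip location its own keystream, so words can be encrypted and decrypted independently and out of order.

```python
# A hedged sketch of DES in counter mode for an encrypted memory interface.
from Crypto.Cipher import DES

KEY = b"8bytekey"              # illustrative 64-bit DES key
NONCE = b"\x00\x01\x02\x03"    # illustrative per-session nonce

def word_cipher(address: int):
    # Counter block = nonce (4 bytes) || counter (4 bytes, seeded with the
    # word's address), so each memory location gets a distinct keystream.
    return DES.new(KEY, DES.MODE_CTR, nonce=NONCE, initial_value=address)

word = b"secretOK"                              # one 8-byte memory word
ct = word_cipher(0x40).encrypt(word)            # on the way out to memory
assert word_cipher(0x40).decrypt(ct) == word    # on the way back in
```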
|
266 |
Incorporating Obfuscation Techniques in Privacy Preserving Database-Driven Dynamic Spectrum Access Systems
Zabransky, Douglas Milton 11 September 2018 (has links)
Modern innovation is a driving force behind increased spectrum crowding. Several studies performed by the National Telecommunications and Information Administration (NTIA), the Federal Communications Commission (FCC), and other groups have proposed Dynamic Spectrum Access (DSA) as a promising solution to alleviate spectrum crowding. The spectrum assignment decisions in DSA are made by a centralized entity referred to as a spectrum access system (SAS); however, maintaining spectrum utilization information in the SAS presents privacy risks, as sensitive Incumbent User (IU) operation parameters must be stored by the SAS in order to perform spectrum assignments properly. These sensitive operation parameters may be compromised if the SAS is the target of a cyber attack or of an inference attack executed by a secondary user (SU).
In this thesis, we explore the operational security of IUs in SAS-based DSA systems and propose a novel privacy-preserving SAS-based DSA framework, Suspicion Zone SAS (SZ-SAS), the first such framework to protect against both inference attacks in areas with sparsely distributed IUs and an untrusted or compromised SAS. We then define modifications to the SU inference-attack algorithm that demonstrate the necessity of applying obfuscation to SU query responses. Finally, we evaluate obfuscation schemes that are compatible with SZ-SAS, verifying their effectiveness in preventing an SU inference attack. Our results show SZ-SAS is capable of utilizing compatible obfuscation schemes to prevent the SU inference attack while operating using only homomorphically encrypted IU operation parameters. / Master of Science / Dynamic Spectrum Access (DSA) allows users to opportunistically access spectrum resources that were previously reserved for specified parties. This spectrum-sharing protocol has been identified as a potential solution to the issue of spectrum crowding. The sharing will be accomplished through the use of a centralized server, known as a spectrum access system (SAS). However, current SAS-based DSA proposals require users to submit information such as location and transmission properties to the SAS. The privacy of these users is of the utmost importance, as many existing users in these spectrum bands are military radars and other users for which operational security is pivotal. Storing the information for these users in a central database is a major privacy issue, as this information could be leaked if the SAS is compromised by a malicious party. Additionally, malicious secondary users (SUs) may perform an inference attack, which could also reveal the location of these military radars. In this thesis, we demonstrate an SAS framework, SZ-SAS, which allows the SAS to function without direct knowledge of user information. We also propose techniques, compatible with SZ-SAS, for mitigating the inference attack.
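To make the last point concrete, here is a minimal sketch of computing on encrypted IU parameters using the python-paillier (`phe`) package; the library choice and the concrete parameter are assumptions for illustration, not the thesis's prescribed scheme.

```python
# A minimal sketch of operating on homomorphically encrypted IU parameters.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# An IU encrypts a sensitive operating parameter, e.g. a protection-zone radius.
enc_radius = public_key.encrypt(1200)    # metres, illustrative value

# The SAS can adjust the value under encryption (Paillier is additively
# homomorphic) without ever seeing the plaintext.
enc_obfuscated = enc_radius + 300        # add an obfuscation margin

# Only the key holder on the IU side can recover the result.
assert private_key.decrypt(enc_obfuscated) == 1500
```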
|
267 |
Inclusion of Priority Access in a Privacy-preserving ESC-based DSA System
Lu, Chang 21 August 2018 (has links)
According to the Federal Communications Commission's rules and recommendations for the 3.5 GHz Citizens Broadband Radio Service, a three-tiered structure shall govern the newly established shared wireless band. The three tiers correspond to three levels of spectrum access: Incumbent Access, Priority Access, and General Authorized Access. In accordance with this dynamic spectrum access framework, we present the inclusion of the Priority Access tier in a two-tiered privacy-preserving ESC-based dynamic spectrum access system. / Master of Science / With the development of wireless communication technologies, the number of applications reliant on wireless communication has been increasing. Most of these applications require dedicated spectrum frequencies as communication channels, and the radio frequency spectrum allocated to these wireless applications is being depleted. This problem can be alleviated by adopting dynamic spectrum access schemes. The current static spectrum allocation scheme assigns designated frequencies to specific users; this static approach leads to inefficient utilization, as the occupation of frequency channels varies over time. Dynamic spectrum access schemes allow unlicensed users opportunistic access to vacant spectrum, so their adoption will increase the efficiency of spectrum utilization and slow spectrum depletion. However, the design and implementation of these schemes face several challenges: spectrum-sharing systems need to guarantee the privacy of the involved parties while maintaining the functionality required and recommended by the Federal Communications Commission. In this thesis, we present the inclusion of the three-tiered framework, approved by the Federal Communications Commission, into a privacy-preserving dynamic spectrum system.
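The tier precedence described above can be summarized in a few lines; this is an illustrative sketch, not the thesis's implementation.

```python
# Three-tier precedence for the 3.5 GHz CBRS band: Incumbent Access pre-empts
# Priority Access, which pre-empts General Authorized Access.
from enum import IntEnum

class Tier(IntEnum):       # lower value = higher precedence
    INCUMBENT = 0
    PRIORITY = 1
    GAA = 2

def grant_channel(requests: list[tuple[str, Tier]]) -> str:
    """Grant the channel to the highest-precedence requester."""
    user, _ = min(requests, key=lambda r: r[1])
    return user

requests = [("gaa-device", Tier.GAA), ("pal-licensee", Tier.PRIORITY)]
print(grant_channel(requests))   # -> pal-licensee
```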
|
268 |
Hiding Decryption Latency in Intel SGX using Metadata Prediction
Talapkaliyev, Daulet 20 January 2020 (has links)
Hardware-assisted Trusted Execution Environment technologies have become a crucial component in providing security for cloud-based computing. One such hardware-assisted countermeasure is Intel Software Guard Extensions (SGX). Using additional dedicated hardware and a new set of CPU instructions, SGX provides isolated execution of code within trusted hardware containers called enclaves. By utilizing private encrypted memory and various integrity-authentication mechanisms, it can provide confidentiality and integrity guarantees for protected data. Despite the dedicated hardware, these extra layers of security add a significant performance overhead. In particular, decryption of data using secret one-time pads (OTPs), which are generated by modified counter-mode AES encryption blocks, contributes significant latency to the overall SGX performance loss. This thesis introduces a metadata-prediction extension to SGX based on local metadata re-leveling and prediction mechanisms. Correct prediction of metadata allows the processor to speculatively precompute OTPs, which can be used immediately to decrypt incoming ciphertext data. This hides a significant part of the decryption latency and results in faster SGX performance without any changes to the original SGX security guarantees. / Master of Science / With the exponential growth of cloud computing, where critical data processing happens on third-party computer systems, it is important to ensure data confidentiality and integrity against third-party access. That may include not only external attackers but also insiders, such as the cloud computing providers themselves. While software isolation using virtual machines is the most common method of achieving runtime security in cloud computing, numerous shortcomings of software-only countermeasures force companies to demand extra layers of security. Recently adopted general-purpose hardware-assisted technologies like Intel Software Guard Extensions (SGX) add that extra layer of security at a significant performance overhead. One of the major contributors to the SGX performance overhead is data decryption latency. This work proposes a novel algorithm to speculatively predict the metadata that is used during decryption, allowing the processor to hide a significant portion of decryption latency and thus improving the overall performance of Intel SGX without compromising security.
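The latency-hiding idea can be sketched in a few lines of Python, using PyCryptodome AES as a stand-in for SGX's hardware Memory Encryption Engine (an illustration, not the real microarchitecture): predict the counter-block metadata, pay the AES latency speculatively, and reduce the on-demand decryption to a XOR.

```python
# If the counter-block metadata for the next cache line is predicted correctly,
# the AES pad (OTP) is ready before the ciphertext arrives, leaving only a
# cheap XOR on the critical path.
from Crypto.Cipher import AES

key = b"\x00" * 16  # illustrative 128-bit key

def precompute_pad(predicted_counter_block: bytes) -> bytes:
    # The expensive AES latency is paid here, off the critical path.
    return AES.new(key, AES.MODE_ECB).encrypt(predicted_counter_block)

def fast_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # On a correct prediction, decryption collapses to a single XOR.
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

counter_block = (0x40).to_bytes(16, "big")   # stands in for address+version metadata
pad = precompute_pad(counter_block)
ct = bytes(m ^ p for m, p in zip(b"sixteen byte msg", pad))
assert fast_decrypt(ct, pad) == b"sixteen byte msg"
```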
|
269 |
Essays on Coercion and Signaling in Cyberspace
Jun, Dahsol January 2024 (has links)
This dissertation explores how coercive diplomacy works in cyberspace through three interrelated papers, titled "Coercion in Cyberspace: A Model of Encryption via Extortion," "Variation in Coercion: Costly Signals That Also Undermine Attack Effectiveness," and "Seeking Clarity in a Domain of Deception: Signaling and Indices in Cyberspace." As more strategic actors employ cyber weapons as an important part of their military arsenals, refining the theory of cyber coercion becomes more important for understanding coercive diplomacy and crisis dynamics in cyberspace. Although the existing cyber conflict literature argues that cyber weapons make poor tools of coercion, current theory does not match important empirical instances of successful coercion using cyber means, such as ransomware and data extortion. This dissertation seeks to close the gap between theory and practice by specifying the conditions under which cyber coercion works. Relatedly, the dissertation explores the conditions under which costly signaling works in conveying such coercive threats.
The first paper presents a formal model of cyber coercion that relies on data encryption, explaining why cyber weapons often rely on a different coercive logic. Coercion in international relations is often conceptualized as a threat to hurt held in reserve, as in the use of nuclear weapons or strategic bombing. However, history is rife with instances of a different logic of coercion that relies on the application of costs up front, followed by a promise to stop; this logic appears in sanctions, hostage-taking, and sieges. The existing literature that argues cyber weapons make poor tools of coercion examines them only under the first logic. Under the second logic, cyber weapons are often quite successful, as the prevalence of the ransomware threat demonstrates. This paper specifies the conditions under which coercion using data encryption works under the second logic, and which commitment problems can undermine coercion in this situation. By applying costs up front, some cyber weapons resolve a key strategic dilemma: conveying specific information about how an attack will unfold can allow the defender to take mitigations that render the planned attack useless.
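A toy numeric illustration (mine, not the paper's formal model) of the second logic: the victim of a ransomware attack pays whenever the demanded ransom is below the cost of continuing to absorb the up-front harm.

```python
# Costs applied up front, with a promise to stop: the victim compares the
# ransom against the cost of holding out until recovery.
def holdout_cost(loss_per_day: float, days_to_rebuild: int) -> float:
    return loss_per_day * days_to_rebuild

ransom = 40.0
pays = ransom < holdout_cost(loss_per_day=10.0, days_to_rebuild=7)
print(pays)   # True: paying 40 beats losing 70, so the coercion succeeds
```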
The second paper complements the first by presenting a formal model that explores the first logic and specifies the conditions under which cyber coercion relying on a threat held in reserve works. A key theory in the existing cyber conflict literature argues that cyber weapons make poor tools of coercion due to the "cyber commitment problem," in which a coercer faces a tradeoff between the need to credibly demonstrate a specific capability to follow through with a threat and the propensity of the defender to use that information to adopt countermeasures. This tradeoff is not unique to cyberspace; it applies to any technology that relies on degrees of deception for attack effectiveness, such as submarine warfare. I present a formal model motivated by cyber weapons but applicable to a broad range of technologies in international relations, showing that the severity of this tradeoff is not constant but varies with exogenous factors, such as the probability that a defensive countermeasure can successfully neutralize a threatened attack. When that probability is high, the range of costly signals a coercer can send while maintaining a separating equilibrium shrinks, but costly signaling is not necessarily impossible. This paper formalizes and expands the logic behind the "cyber commitment problem" and shows that coercion can sometimes work even under the first logic.
The third paper examines the role of indices, observations that are believed to be hard to fake, as opposed to overt signals of intent, in coercive diplomacy and crisis communications in cyberspace. Because actors acting in and through cyberspace have yet to converge on a shared understanding of what particular actions convey about intent or resolve, the tendency to rely instead on independent observation and assessment of indices is more pronounced in cyber conflict. This paper uses the cybersecurity advisories routinely published by the Cybersecurity and Infrastructure Security Agency (CISA) to examine what kinds of indices the U.S. government used to assess an attacker's intent regarding restraint or escalation. Interestingly, the same kind of cyber attack, for example the malicious compromise of a water utilities facility, is interpreted as escalatory or accommodative depending on "situational indices" such as the larger geopolitical context and attribution to a particular state actor, beyond the technical facts. The paper finds that indices are used too broadly, even when they can be manipulated easily or are linked to perceptions and biases rather than facts. Such practices can lead to situations where the same costly signal sent in the context of coercive diplomacy or crisis communications is interpreted differently by the receiver depending on the suite of indices relied upon, raising the risk of misperception and crisis escalation in cyberspace.
|
270 |
Fair Comparison of ASIC Performance for SHA-3 Finalists
Zuo, Yongbo 22 June 2012 (has links)
In the last few decades, cryptographic algorithms have played an irreplaceable role in the protection of private information, from AES in modems to online banking transactions. The increasing deployment of these algorithms in hardware has made ASIC benchmarking extremely important. Although many cryptographic algorithms have been implemented on a variety of devices, the effects of different constraints on ASIC implementation performance have not been explored before.
To analyze the effects of different constraints on such algorithms, the SHA-3 finalists (BLAKE, Grøstl, Keccak, JH, and Skein) were chosen for implementation in the experiments in this thesis.
This thesis first explores the effects of different synthesis constraints on ASIC performance, such as performance when the design is constrained for frequency or for maximum area. It then tests the effects of choosing different standard-cell libraries, for instance comparing the performance of the UMC 130 nm and IBM 130 nm libraries. Additionally, the effects of different process technologies are analyzed, using the 65 nm, 90 nm, 130 nm, and 180 nm UMC libraries. Finally, to further understand these effects, post-layout analysis is explored. While some algorithms are unaffected by floorplan shape, others prefer a specific shape; JH, for example, shows a 12% increase in throughput/area with a 1:2 rectangle compared to a square.
Throughout this thesis, the effects of different ASIC implementation factors are comprehensively explored, along with the details of the methodology, the metrics, and the framework of the experiments. Detailed experimental results and analysis are discussed in the following chapters. / Master of Science
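For reference, the headline metric in such comparisons is typically computed as below; this is a hedged sketch with illustrative numbers (e.g., a Keccak-like core with a 1088-bit rate and 24 cycles per block), not measured results from the thesis.

```python
# Conventional throughput and throughput/area metrics for a hash core.
def throughput_gbps(block_bits: int, cycles_per_block: int, f_mhz: float) -> float:
    # bits per cycle * cycles per second, converted to Gbit/s
    return block_bits / cycles_per_block * f_mhz * 1e6 / 1e9

tp = throughput_gbps(block_bits=1088, cycles_per_block=24, f_mhz=500.0)
area_kge = 48.0   # illustrative area in kilo gate equivalents
print(f"throughput = {tp:.2f} Gbit/s, "
      f"throughput/area = {tp / area_kge:.3f} Gbit/s per kGE")
```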
|