1

Browser Fingerprinting: Exploring Device Diversity to Augment Authentication and Build Client-Side Countermeasures

Laperdrix, Pierre 03 October 2017
Users are presented with an ever-increasing number of choices to connect to the Internet. From desktops, laptops, tablets and smartphones, anyone can find the perfect device that suits his or her needs while factoring in mobility, size or processing power. Browser fingerprinting became a reality thanks to the software and hardware diversity that composes every single one of our modern devices. By collecting device-specific information with a simple script running in the browser, a server can fully or partially identify a device on the web and follow it wherever it goes. This technique presents strong privacy implications as it does not require the use of stateful identifiers like cookies that can be removed or managed by the user. In this thesis, we provide the following contributions: an analysis of 118,934 genuine fingerprints to understand the current state of browser fingerprinting, two countermeasures called Blink and FPRandom, and a complete protocol based on canvas fingerprinting to augment authentication on the web. Browser fingerprinting is still in its early days. As the web is in constant evolution and as browser vendors keep pushing the limits of what we can do online, the contours of this technique are continually changing. With this dissertation, we shine a light on its inner workings and its challenges, along with a new perspective on how it can reinforce account security.
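To make the collection step concrete, here is a minimal browser-side sketch in TypeScript of the kind of script the abstract describes: it gathers a handful of device-specific attributes, including a canvas rendering, and hashes them into an identifier. The attribute set is illustrative only; it is not the thesis's actual instrumentation.

```typescript
// Minimal browser-fingerprint sketch; attributes chosen for illustration.
async function collectFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d")!;
  // The same drawing instructions yield subtly different pixels
  // depending on GPU, driver, and font stack: the canvas fingerprint.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(10, 5, 100, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint \u{1F600}", 2, 15);

  const attributes = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
    canvas.toDataURL(), // pixel-level rendering differences, base64-encoded
  ];

  // Hash the concatenated attributes into one compact identifier.
  const bytes = new TextEncoder().encode(attributes.join("||"));
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join("");
}
```

The canvas attribute is the one the authentication protocol mentioned above builds on: it is stable for a given device yet hard for another device to reproduce exactly.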
2

BioSENSE: Biologically-inspired Secure Elastic Networked Sensor Environment

Hassan Eltarras, Rami M. 22 September 2011
The essence of smart pervasive Cyber-Physical Environments (CPEs) is to enhance the dependability, security and efficiency of their encompassing systems and infrastructures and their services. In CPEs, interactive information resources are integrated and coordinated with physical resources to better serve human users. To bridge the interaction gap between users and the physical environment, a CPE is instrumented with a large number of small devices, called sensors, that are capable of sensing, computing and communicating. Sensors with heterogeneous capabilities should autonomously organize on demand and interact to furnish real-time, high-fidelity information serving a wide variety of user applications with dynamic and evolving requirements. CPEs with their associated networked sensors promise aware services for smart systems and infrastructures with the potential to improve the quality of numerous application domains, in particular mission-critical infrastructure domains. Examples include healthcare, environmental protection, transportation, energy, homeland security, and national defense. To build smart CPEs, Networked Sensor Environments (NSEs) are needed to manage demand-driven sharing of large-scale federated heterogeneous resources among multiple applications and users. We informally define an NSE as a tailorable, application-agnostic, distributed platform with the purpose of managing a massive number of federated resources with heterogeneous computing, communication, and monitoring capabilities. We perceive the need to develop scalable, trustworthy, cost-effective NSEs. An NSE should be endowed with dynamic and adaptable computing and communication services capable of efficiently running diverse applications with evolving QoS requirements on top of federated distributed resources. NSEs should also enable the development of applications independent of the underlying system and device concerns. To our knowledge, an NSE with the aforementioned capabilities does not currently exist. The large scale of NSEs, the heterogeneous node capabilities, the highly dynamic topology, and the likelihood of being deployed in inhospitable environments pose formidable challenges for the construction of resilient shared NSE platforms. Additionally, nodes in an NSE are often resource-challenged, and therefore trustworthy node cooperation is required to provide useful services. Furthermore, the failure of NSE nodes due to malicious or non-malicious conditions represents a major threat to the trustworthiness of NSEs. Applications should be able to survive the failure of nodes and change their runtime structure while preserving their operational integrity. It is also worth noting that the decoupling of application programming concerns from system and device concerns has not received the appropriate attention in most existing wireless sensor network platforms. In this dissertation, we present a Biologically-inspired Secure Elastic Networked Sensor Environment (BioSENSE) that synergistically integrates: (1) a novel bio-inspired construction of adaptable system building components, (2) an associative routing framework with extensible, adaptable criteria-based addressing of resources, and (3) management of multi-dimensional software diversity and trust-based variant hot shuffling. The outcome is that an application using BioSENSE is able to allocate, at runtime, a dynamic taskforce running over a federated resource pool that satisfies its evolving mission requirements.
BioSENSE perceives both applications and the NSE itself to be elastic, and allows them to grow or shrink based upon needs and conditions. BioSENSE adopts the Cell-Oriented Architecture (COA), a novel architecture that supports the development, deployment, execution, maintenance, and evolution of NSE software. COA employs mission-oriented application design and inline code distribution to enable adaptability, dynamic re-tasking, and re-programmability. The cell, the basic building block in COA, is the abstraction of a mission-oriented autonomously active resource. Generic cells are spontaneously created by the middleware, then participate in emerging tasks through a process called specialization. Once specialized, cells exhibit application-specific behavior. Specialized cells have mission objectives that are continuously sought, and sensors that are used to monitor performance parameters, mission objectives, and other phenomena of interest. Due to the inherently anonymous nature of sensor nodes, associative routing enables dynamic, semantically rich, descriptive identification of NSE resources. As such, associative routing presents a clear departure from most current network addressing schemes. Associative routing combines resource discovery and path discovery into a single coherent role, leading to a significant reduction in traffic load and communication latency without any loss of generality. We also propose the Adaptive Multi-Criteria Routing (AMCR) protocol as a realization of associative routing for NSEs. AMCR exploits application-specific message semantics, represented as generic criteria, and adapts its operation according to observed traffic patterns. BioSENSE intrinsically exploits software diversity, runtime implementation shuffling, and fault recovery to achieve the security and resilience required for mission-critical NSEs. BioSENSE makes NSE software a resilient moving target that: 1) confuses the attacker by non-determinism through shuffling of software component implementations; 2) improves the availability of the NSE by providing means to gracefully recover from implementation flaws at runtime; and 3) enhances the software system by survival of the fittest through trust-based component selection in an online software component marketplace. In summary, BioSENSE touts the following advantages: (1) on-demand, online distribution and adaptive allocation of services and physical resources shared among multiple long-lived applications with dynamic missions and quality-of-service requirements, (2) structural, functional, and performance adaptation to dynamic network scales, contexts and topologies, (3) moving-target defense of system software, and (4) autonomic failure recovery. / Ph. D.
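As a rough illustration of the criteria-based addressing idea behind associative routing, the sketch below matches messages to resources via predicates over advertised attributes rather than fixed node addresses. All names, fields, and values are hypothetical, and AMCR itself operates inside the routing layer rather than as a centralized filter like this.

```typescript
// Hypothetical sketch of criteria-based (associative) addressing:
// a message is delivered to any resource whose attributes satisfy
// every criterion, not to a fixed network address.
type Attributes = Record<string, string | number>;

interface SensorResource {
  id: string;
  attrs: Attributes;
}

// A criterion is a predicate over a resource's advertised attributes.
type Criterion = (attrs: Attributes) => boolean;

function selectTargets(pool: SensorResource[], criteria: Criterion[]): SensorResource[] {
  // A resource qualifies only if it satisfies every criterion.
  return pool.filter(r => criteria.every(c => c(r.attrs)));
}

// Usage: address "all temperature sensors in zone 3 with enough battery".
const pool: SensorResource[] = [
  { id: "n1", attrs: { kind: "temperature", zone: 3, battery: 0.8 } },
  { id: "n2", attrs: { kind: "humidity", zone: 3, battery: 0.9 } },
  { id: "n3", attrs: { kind: "temperature", zone: 1, battery: 0.2 } },
];
const targets = selectTargets(pool, [
  a => a.kind === "temperature",
  a => a.zone === 3,
  a => typeof a.battery === "number" && a.battery > 0.5,
]);
console.log(targets.map(t => t.id)); // ["n1"]
```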
3

Resilient Cloud Computing and Services

Fargo, Farah Emad January 2015
Cloud Computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to malicious faults and attacks. Securing the cloud is a challenging research problem because it suffers from the current cybersecurity problems of computer networks and data centers, plus the additional complexity introduced by virtualization, multi-tenant occupancy, remote storage, and cloud management. It is widely accepted that we cannot build software and computing systems that are free from vulnerabilities and that cannot be penetrated or attacked. Furthermore, it is widely accepted that cyber-resilient techniques are the most promising solutions to mitigate cyberattacks and change the game to advantage the defender over the attacker. Moving Target Defense (MTD) has been proposed as a mechanism to make it extremely challenging for an attacker to exploit existing vulnerabilities by varying different aspects of the execution environment. By continuously changing the environment (e.g., programming language, operating system), we can reduce the attack surface and, consequently, attackers will have very limited time to figure out the current execution environment and the vulnerabilities to be exploited. In this dissertation, we present a methodology to develop an Autonomic Resilient Cloud Management (ARCM) system based on MTD and autonomic computing. The proposed research utilizes the following capabilities: Software Behavior Obfuscation (SBO), replication, diversity, and Autonomic Management (AM). SBO employs spatiotemporal behavior hiding or encryption and MTD to make software components change their implementation versions and resources randomly to avoid exploitation and penetration. Diversity and random execution are achieved by using AM to randomly "hot"-shuffle multiple functionally-equivalent, behaviorally-different software versions at runtime (e.g., a software task can have multiple versions implemented in different languages and/or running on different platforms). The execution environment encryption makes it extremely difficult for an attacker to disrupt the normal operations of the cloud. In this work, we evaluated the performance overhead and effectiveness of the proposed ARCM approach in securing and protecting a wide range of cloud applications such as MapReduce and scientific and engineering applications.
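A minimal sketch of the "hot" shuffling idea: the live implementation of a task is re-picked at random among functionally-equivalent variants, so the observed execution behavior keeps changing. This is a single-process illustration under simplified assumptions, not ARCM's actual mechanism, which also varies languages and platforms.

```typescript
// Moving-target-defense sketch: periodically "hot"-shuffle among
// functionally-equivalent, behaviorally-different implementations.
// The variant set and shuffle interval are illustrative only.
type SortVariant = (xs: number[]) => number[];

const variants: SortVariant[] = [
  xs => [...xs].sort((a, b) => a - b), // built-in comparison sort
  xs => {                              // insertion-sort variant
    const out = [...xs];
    for (let i = 1; i < out.length; i++) {
      const v = out[i];
      let j = i - 1;
      while (j >= 0 && out[j] > v) { out[j + 1] = out[j]; j--; }
      out[j + 1] = v;
    }
    return out;
  },
];

let active = variants[0];

// Re-pick the live implementation at random on a fixed interval, so an
// attacker cannot rely on observing one stable execution behavior.
setInterval(() => {
  active = variants[Math.floor(Math.random() * variants.length)];
}, 5000);

export const sortService = (xs: number[]) => active(xs);
console.log(sortService([3, 1, 2])); // [1, 2, 3] regardless of active variant
```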
4

Contributions on approximate computing techniques and how to measure them

Rodriguez Cancio, Marcelino 19 December 2017
Approximate Computing is based on the idea that significant improvements in CPU, energy and memory usage can be achieved when small levels of inaccuracy can be tolerated. This is an attractive concept, since the lack of resources is a constant problem in almost all computer science domains. From large supercomputers processing today's social media big data, to small, energy-constrained embedded systems, there is always the need to optimize the consumption of some scarce resource. Approximate Computing proposes an alternative to this scarcity, introducing accuracy as yet another resource that can in turn be traded for performance, energy consumption or storage space.
The first part of this thesis proposes the following two contributions to the field of Approximate Computing. Approximate Loop Unrolling: a compiler optimization that exploits the approximative nature of signal and time series data to decrease the execution times and energy consumption of the loops processing it. Our experiments showed that the optimization considerably increases the performance and energy efficiency of the optimized loops (150% - 200%) while preserving accuracy at acceptable levels. Primer: the first ever lossy compression algorithm for assembler instructions, which profits from programs' forgiving zones to obtain a compression ratio that outperforms the current state of the art by up to 10%. The main goal of Approximate Computing is to improve the usage of resources such as performance or energy. Therefore, a fair deal of effort is dedicated to observing the actual benefit obtained by exploiting a given technique under study. One of the resources that has historically been challenging to measure accurately is execution time. Hence, the second part of this thesis proposes the following tool. AutoJMH: a tool to automatically create performance microbenchmarks in Java. Microbenchmarks provide the finest-grained performance assessment. Yet, requiring a great deal of expertise, they remain a craft of a few performance engineers. The tool enables (thanks to automation) the adoption of microbenchmarks by non-experts. Our results show that the generated microbenchmarks match the quality of payloads handwritten by performance experts and outperform those written by professional Java developers without experience in microbenchmarking.
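To illustrate the intuition behind Approximate Loop Unrolling (this models the transformation's effect by hand; the real contribution is a compiler pass), the sketch below evaluates an expensive function only at every k-th element of a smooth signal and linearly interpolates the skipped iterations, trading accuracy for fewer evaluations.

```typescript
// Approximate-loop sketch: on smooth signal data, call the expensive
// function f only at stride boundaries and interpolate in between.
// Valid when f applied over the signal varies smoothly; larger k means
// fewer exact evaluations and lower accuracy.
function approxMap(signal: number[], f: (x: number) => number, k = 4): number[] {
  const n = signal.length;
  const out = new Array<number>(n);
  if (n === 0) return out;
  out[0] = f(signal[0]);
  for (let i = 0; i < n - 1; i += k) {
    const j = Math.min(i + k, n - 1);
    out[j] = f(signal[j]);            // exact evaluation at stride ends
    for (let m = i + 1; m < j; m++) { // interpolate skipped iterations
      const t = (m - i) / (j - i);
      out[m] = out[i] + t * (out[j] - out[i]);
    }
  }
  return out;
}

// Usage: approximate a sampled sine, evaluating only every 4th point exactly.
const t = Array.from({ length: 16 }, (_, i) => i / 16);
console.log(approxMap(t, x => Math.sin(2 * Math.PI * x), 4));
```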
5

Artificial Software Diversification for WebAssembly

Cabrera Arteaga, Javier January 2022
Since 2019, WebAssembly has been the fourth official web language, alongside HTML, CSS and JavaScript. WebAssembly allows web browsers to execute existing programs or libraries written in other languages, such as C/C++ and Rust. In addition, WebAssembly is evolving to become part of edge-cloud computing platforms. Despite being designed with security as a premise, WebAssembly is not exempt from vulnerabilities. Therefore, potential vulnerabilities and flaws are included in its distribution and execution, highlighting a software monoculture problem. On the other hand, while software diversity has been shown to mitigate monoculture, no diversification approach has been proposed for WebAssembly. This work proposes software diversity as a preemptive solution to mitigate software monoculture for WebAssembly. In addition, we provide implementations for our approaches, including a generic LLVM superdiversifier that potentially extends our ideas to other programming languages. We empirically demonstrate the impact of our approach by providing Randomization and Multivariant Execution (MVE) for WebAssembly. Our results show that our approaches can provide an automated end-to-end solution for the diversification of WebAssembly programs. The main contributions of this work are: we highlight the lack of diversification techniques for WebAssembly through an exhaustive literature review; we provide randomization and multivariant execution for WebAssembly with the implementation of two tools, CROW and MEWE respectively; we include constant inferring as a new code transformation to generate software diversification for WebAssembly; and we empirically demonstrate the impact of our technique by evaluating the static and dynamic behavior of the generated diversification. Our approaches harden observable properties commonly used to conduct attacks, such as static code analysis, execution traces, and execution time.
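As a toy illustration of a constant-diversification transformation in the spirit described above (not CROW's actual implementation, which works on the IR and proves equivalence), the sketch below rewrites each i32.const in a WebAssembly text-format snippet into an equivalent randomized two-constant addition, so each build differs statically while computing the same value.

```typescript
// Toy diversifier over WebAssembly text format: split every `i32.const N`
// into `i32.const a; i32.const b; i32.add` with a random split a + b = N
// (mod 2^32). A pure string rewrite for illustration only.
function diversifyConstants(wat: string): string {
  return wat.replace(/i32\.const\s+(-?\d+)/g, (_m, lit: string) => {
    const n = Number(lit) | 0;
    const r = (Math.random() * 0x7fffffff) | 0;
    const rest = (n - r) | 0; // wraps like i32 arithmetic, so rest + r === n (mod 2^32)
    return `i32.const ${rest}\n    i32.const ${r}\n    i32.add`;
  });
}

const original = `
(module
  (func $answer (result i32)
    i32.const 42))
`;
// Each call emits a different but functionally identical module.
console.log(diversifyConstants(original));
```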
6

The State of Software Diversity in the Software Supply Chain of Ethereum Clients

Jönsson, Noak January 2022
The software supply chain constitutes all the resources needed to produce a software product. A large part of this is the use of open-source software packages. Although the use of open-source software makes it easier for vast numbers of developers to create new products, they all become susceptible to the same bugs or malicious code introduced in components outside of their control. Ethereum is a vast open-source blockchain network that aims to replace several functionalities provided by centralized institutions. Several software clients are independently developed in different programming languages to maintain the stability and security of this decentralized model. In this report, the software supply chains of the most popular Ethereum clients are cataloged and analyzed. The dependency graphs of Ethereum clients developed in Go, Rust, and Java are studied. These clients are Geth, Prysm, OpenEthereum, Lighthouse, Besu, and Teku. To do so, their dependency graphs are transformed into a unified format. Quantitative metrics are used to depict the software supply chain of the blockchain. The results show a clear difference in the size of the software supply chain required for the execution layer and the consensus layer of Ethereum. Varying degrees of software diversity are present in the studied ecosystem. For the Go clients, 97% of Geth's dependencies are also in the supply chain of Prysm. The Java clients Besu and Teku share 69% and 60% of their dependencies, respectively. The Rust clients show notably more diversity, with only 43% and 35% of OpenEthereum's and Lighthouse's dependencies, respectively, being shared. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
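Shared-dependency percentages like those above reduce to simple set operations once each client's dependency graph is flattened into a set of package identifiers; the sketch below shows one way to compute them, with placeholder package names rather than the study's actual data.

```typescript
// Fraction of a's dependencies that also appear in b's supply chain.
// Package names below are placeholders, not the report's data.
function sharedDependencyRatio(a: Set<string>, b: Set<string>): number {
  if (a.size === 0) return 0;
  let shared = 0;
  for (const dep of a) {
    if (b.has(dep)) shared++; // dependency of `a` also present in `b`
  }
  return (shared / a.size) * 100;
}

const geth = new Set(["golang.org/x/crypto", "github.com/gorilla/websocket", "github.com/holiman/uint256"]);
const prysm = new Set(["golang.org/x/crypto", "github.com/holiman/uint256", "google.golang.org/grpc"]);
console.log(`${sharedDependencyRatio(geth, prysm).toFixed(1)}% shared`); // 66.7% shared
```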
7

Autonomous Cyber Defense for Resilient Cyber-Physical Systems

Zhang, Qisheng 09 January 2024
In this dissertation research, we design and analyze resilient cyber-physical systems (CPSs) under high network dynamics, adversarial attacks, and various uncertainties. We focus on three key system attributes and build resilient CPSs by developing a suite of autonomous cyber defense mechanisms. First, we consider network adaptability to achieve the resilience of a CPS. Network adaptability represents the network's ability to maintain its security and connectivity level when faced with incoming attacks. We address this by network topology adaptation, which can quickly identify and update the network topology to confuse attacks by changing attack paths. We leverage deep reinforcement learning (DRL) to develop CPSs using network topology adaptation. Second, we consider the fault tolerance of a CPS as another attribute to ensure system resilience. We aim to build a resilient CPS under severe resource constraints, adversarial attacks, and various uncertainties. We choose a solar sensor-based smart farm as an example CPS application and develop a resource-aware monitoring system for smart farms. We leverage DRL and uncertainty quantification using a belief theory, called Subjective Logic, to optimize critical tradeoffs between system performance and security in contested CPS environments. Lastly, we study system resilience in terms of system recoverability, the system's ability to recover from performance degradation or failure. In this task, we mainly focus on developing an automated intrusion response system (IRS) for CPSs. We aim to design the IRS with effective and efficient responses by reducing the false alarm rate and the defense cost, respectively. Specifically, we build a lightweight IRS for an in-vehicle controller area network (CAN) bus system operating with DRL-based autonomous driving. / Doctor of Philosophy
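As a hypothetical sketch of the effectiveness/efficiency tradeoff the IRS targets (not the dissertation's actual model), the snippet below selects the response minimizing expected cost, where acting on a false alarm wastes defense cost and under-responding to a true intrusion incurs damage. All costs and probabilities are illustrative.

```typescript
// Pick the intrusion response with minimum expected cost given the
// probability that the alert reflects a true intrusion.
interface Response {
  name: string;
  defenseCost: number; // cost of executing the response (e.g., isolating a node)
  mitigation: number;  // fraction of attack damage prevented, in [0, 1]
}

function expectedCost(r: Response, pIntrusion: number, damage: number): number {
  // Defense cost is always paid; residual damage only if the alert was real.
  return r.defenseCost + pIntrusion * (1 - r.mitigation) * damage;
}

function selectResponse(responses: Response[], pIntrusion: number, damage: number): Response {
  return responses.reduce((best, r) =>
    expectedCost(r, pIntrusion, damage) < expectedCost(best, pIntrusion, damage) ? r : best);
}

const responses: Response[] = [
  { name: "log-only", defenseCost: 0, mitigation: 0.0 },
  { name: "rate-limit CAN frames", defenseCost: 2, mitigation: 0.6 },
  { name: "isolate node", defenseCost: 10, mitigation: 0.95 },
];
// Low-confidence alert: the cheap response wins; high-confidence: isolation wins.
console.log(selectResponse(responses, 0.05, 50).name); // "log-only"
console.log(selectResponse(responses, 0.9, 50).name);  // "isolate node"
```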
