351 |
Security challenges within Software Defined Networks. Ahmed, Haroon; Sund, Gabriel. January 2014
A large amount of today's communication occurs within data centers, where a large number of virtual servers (running one or more virtual machines) provide service providers with the infrastructure needed for their applications and services. In this thesis, we look at the next step in the virtualization revolution: the virtualized network. Software-defined networking (SDN) is a relatively new concept that is moving the field towards a more software-based solution to networking. Today, when a packet is forwarded through a network of routers, decisions are made at each router as to which router is the next hop for the packet. With SDN, these decisions are made by a centralized SDN controller that decides upon the best path and instructs the devices along this path as to what action each should perform. Taking SDN to its extreme minimizes the physical network components and increases the number of virtualized components. The reasons behind this trend are several, although the most prominent are simplified processing and network administration, a greater degree of automation, increased flexibility, and shorter provisioning times. This in turn leads to a reduction in operating and capital expenditures for data center owners, which both drive the further development of this technology. Virtualization has been gaining ground in the last decade; however, its initial introduction dates back to the 1970s, when server virtualization offered the ability to create several virtual server instances on one physical server. Today we have already taken small steps towards a virtualized network through the virtualization of network equipment such as switches, routers, and firewalls. Common to all of these technologies is that, in their early stages, they have encountered trust issues and general concerns about whether software-based solutions are as rugged and reliable as hardware-based solutions. SDN has also encountered these issues, and discussion of them continues among both believers and skeptics. Concerns about trust remain a problem for the growing number of cloud-based services, where multi-tenant deployments may lead to loss of personal privacy and other security risks. As a relatively new technology, SDN is still immature and has a number of vulnerabilities, and as with most software-based solutions, the potential for security risks increases. This thesis investigates how denial-of-service (DoS) attacks affect an SDN environment and a single-threaded controller, described in text and via simulations. The results of our investigation of trust in a multi-tenancy SDN environment suggest that standardization and clear service level agreements are necessary to consolidate customers' confidence. Attracting small groups of customers to participate in use cases in the initial stages of implementation can generate valuable support for a broader implementation of SDN in the underlying infrastructure. With regard to denial-of-service attacks, our conclusion is that hackers can target the centralized SDN controller and thereby negatively affect most of the network infrastructure, because the entire infrastructure directly depends upon a functioning SDN controller. SDN introduces new vulnerabilities, which is natural for a relatively new technology; therefore, SDN needs to be thoroughly tested and examined before widespread deployment.
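To make the controller-bottleneck argument concrete, below is a minimal simulation sketch of a single-threaded controller under a flow-request flood. It is purely illustrative: the request rates, service capacity, and queue model are assumptions chosen for demonstration, not the simulation setup used in the thesis.

```python
from collections import deque

# Minimal, illustrative model of a single-threaded SDN controller as one work
# queue. All rates are made-up parameters: legitimate switches send some
# flow-setup requests per second, while an attacker floods unknown flows so
# that every packet triggers an extra controller request.

SERVICE_RATE = 1000        # requests the controller can process per second
LEGIT_RATE = 200           # legitimate flow-setup requests per second
ATTACK_RATE = 5000         # attacker-induced requests per second
SIMULATION_SECONDS = 10

def simulate(attack: bool) -> int:
    queue = deque()
    for second in range(SIMULATION_SECONDS):
        arrivals = LEGIT_RATE + (ATTACK_RATE if attack else 0)
        queue.extend([second] * arrivals)       # enqueue this second's requests
        served = min(len(queue), SERVICE_RATE)  # single thread: fixed capacity
        for _ in range(served):
            queue.popleft()
        label = "attack" if attack else "normal"
        print(f"{label} t={second}s backlog={len(queue)}")
    return len(queue)

simulate(attack=False)  # backlog stays at zero
simulate(attack=True)   # backlog (and thus flow-setup latency) grows each second
```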
|
352 |
Relay Racing with X.509 Mayflies: An Analysis of Certificate Replacements and Validity Periods in HTTPS Certificate Logs / Stafettlöpning med X.509-dagsländor: En Analys av Certifikatutbyten och Giltighetsperioder i HTTPS-certifikatloggar. Bruhner, Carl Magnus; Linnarsson, Oscar. January 2020
Certificates are the foundation of secure communication over the internet today. While certificates can be issued with long validity periods, there is always a risk of their being compromised during their lifetime. A good practice is therefore to use shorter validity periods; however, this limits the certificate lifetime and gives less flexibility in the timing of certificate replacements. In this thesis, we use publicly available network logs from Rapid7's Project Sonar to provide an overview of the current state of certificate usage behavior. Specifically, we look at the Let's Encrypt mass revocation event in March 2020, in which millions of certificates were revoked with just five days' notice. In general, we show how this kind of dataset can be used, and as a deeper exploration we analyze certificate validity, lifetime, and the use of certificates with overlapping validity periods, and we discuss how our findings relate to industry standards and current security trends. In particular, we isolate automated certificate services such as Let's Encrypt and cPanel to see how their certificates differ in characteristics from other certificates in general. Based on our findings, we propose a set of rules to help improve trust in certificate usage and strengthen security online, introducing an 'Always secure' policy that aligns certificate validity with revocation time limits in order to replace revocation requirements and to compensate for the fact that mobile devices today ignore this very important security feature. Finally, we provide some ideas for further research based on our findings and on what we see as possible with datasets such as the one studied in this thesis.
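As a small illustration of the per-certificate metrics such a log analysis computes, the sketch below derives validity periods and replacement overlap from simplified, hypothetical certificate records; a real analysis would parse X.509 fields from the Rapid7 Project Sonar dumps instead.

```python
from datetime import datetime

# Illustrative sketch: validity period and replacement overlap computed from
# simplified (domain, not_before, not_after) records. Values are fabricated.

certs = [  # two successive certificates for the same host
    ("example.org", datetime(2020, 1, 1), datetime(2020, 4, 1)),
    ("example.org", datetime(2020, 3, 20), datetime(2020, 6, 20)),
]

def validity_days(not_before: datetime, not_after: datetime) -> int:
    return (not_after - not_before).days

def overlap_days(old: tuple, new: tuple) -> int:
    """Days during which both the old certificate and its replacement are valid."""
    start = max(old[1], new[1])
    end = min(old[2], new[2])
    return max((end - start).days, 0)

for host, nb, na in certs:
    print(host, "validity:", validity_days(nb, na), "days")

print("replacement overlap:", overlap_days(certs[0], certs[1]), "days")
```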
|
353 |
Un système de surveillance et détection de menaces utilisant le traitement de flux comme une fonction virtuelle pour le Big Data / A monitoring and threat detection system using stream processing as a virtual function for Big Data. Andreoni Lopez, Martin Esteban. 06 June 2018
The late detection of security threats causes a significant increase in the risk of irreparable damage, disabling any defense attempt. As a consequence, fast, real-time threat detection is mandatory for security administration. In addition, Network Function Virtualization (NFV) provides new opportunities for efficient and low-cost security solutions. We propose a fast and efficient threat detection system based on stream processing and machine learning algorithms. The main contributions of this work are: i) a novel monitoring and threat detection system based on stream processing; ii) two datasets, the first a dataset of synthetic security data containing both legitimate and malicious traffic, and the second a week of real traffic from a telecommunications operator in Rio de Janeiro, Brazil; iii) a data pre-processing algorithm, a normalization algorithm, and an algorithm for fast feature selection based on the correlation between variables; iv) a virtualized network function in an open-source platform that provides a real-time threat detection service; v) near-optimal placement of sensors through a proposed heuristic for strategically positioning sensors in the network infrastructure with a minimum number of sensors; and finally vi) a greedy algorithm that allocates a sequence of virtual network functions on demand.
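The sketch below illustrates the general idea of correlation-based feature selection on synthetic traffic features; it is not the authors' exact algorithm, and the column names and the 0.9 redundancy threshold are assumptions chosen for illustration.

```python
import numpy as np
import pandas as pd

# Illustrative correlation-based feature selection: normalize features, rank
# them by absolute correlation with the label, and keep a feature only if it
# is not highly correlated with an already-selected one. Synthetic data.

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pkt_rate": rng.normal(size=1000),
    "byte_rate": rng.normal(size=1000),
    "syn_ratio": rng.normal(size=1000),
})
df["byte_rate"] = df["pkt_rate"] * 0.95 + rng.normal(scale=0.1, size=1000)  # redundant feature
label = (df["syn_ratio"] + 0.5 * df["pkt_rate"] > 0).astype(int)

normalized = (df - df.mean()) / df.std()
relevance = normalized.corrwith(label).abs()

selected = []
for feat in relevance.sort_values(ascending=False).index:
    if all(abs(normalized[feat].corr(normalized[s])) < 0.9 for s in selected):
        selected.append(feat)

print("selected features:", selected)  # byte_rate is dropped as redundant
```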
|
354 |
Reputace zdrojů škodlivého provozu / Reputation of Malicious Traffic Sources. Bartoš, Václav. January 2019
An important part of maintaining network security is collecting and processing information about cyber threats, both from the network operator's own detection tools and from third parties. A commonly used type of such information is lists of network entities (IP addresses, domains, URLs, etc.) that have been identified as malicious. However, in many cases the simple binary distinction between malicious and non-malicious entities is not sufficient. It is beneficial to keep other supplementary information for each entity, which describes its malicious activities, and also a summarizing score, which evaluates its reputation numerically. Such a score allows the level of threat an entity poses to be grasped quickly and allows entities to be compared and sorted. The goal of this work is to design a method for such summarization. The resulting score, called the Future Maliciousness Probability (FMP score), is a value between 0 and 1, assigned to each suspicious network entity, expressing the probability that the entity will perform some kind of malicious activity in the near future. The scoring is therefore based on predicting future attacks. Advanced machine learning methods are used to perform the prediction; their input is formed by previously received alerts about security events and other relevant data related to the entity. The method of computing the score is first described in a general way, usable for any kind of entity and input data. Then a more concrete version is presented for scoring IPv4 addresses by utilizing alerts from an alert sharing system and supplementary data from a reputation database. This variant is then evaluated on a real-world dataset. In order to obtain data of sufficient quantity and quality for this dataset, part of the work is also dedicated to the security analysis of network data. A framework for the analysis of flow data, NEMEA, and several new detection methods are designed and implemented. An open reputation database, NERD, is also implemented and described in this work. Data from these systems are then used to evaluate the precision of the predictor as well as to evaluate selected use cases of the scoring method.
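A minimal sketch of the FMP idea, under the assumption that per-entity alert counts are the input features: train a classifier on whether an entity generated alerts in the following day, and use the predicted probability as the score. The synthetic features below are placeholders, not the alert-sharing or NERD data used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative FMP-style scoring with synthetic data. Features per IP address:
# [alerts in last 24h, alerts in last 7d, distinct alert types]. The target is
# whether the IP appeared in any alert during the following 24 hours.

rng = np.random.default_rng(1)
X = rng.poisson(lam=[2, 8, 1], size=(500, 3)).astype(float)
y = (X[:, 0] + rng.normal(size=500) > 3).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a new suspicious IP: the predicted probability of future malicious
# activity is the FMP score, a value in [0, 1].
new_ip_features = np.array([[5.0, 20.0, 3.0]])
fmp_score = model.predict_proba(new_ip_features)[0, 1]
print(f"FMP score: {fmp_score:.2f}")
```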
|
355 |
Evaluation of Network-Layer Security Technologies for Cloud Platforms / Utvärdering av säkerhetsteknologier för nätverksskiktet i molnplattformar. Duarte Coscia, Bruno Marcel. January 2020
With the emergence of cloud-native applications, the need to secure networks and services creates new requirements concerning automation, manageability, and scalability across data centers. Several solutions have been developed to overcome the limitations of the conventional and well-established IPsec suite as a secure tunneling solution. One strategy to meet these new requirements has been the design of software-based overlay networks. In this thesis, we assess the deployment of a traditional IPsec VPN solution against a new secure overlay mesh network called Nebula. We conduct a case study by provisioning an experimental system to evaluate Nebula in four key areas: reliability, security, manageability, and performance. We discuss the strengths of Nebula and its limitations for securing inter-service communication in distributed cloud applications. In terms of reliability, the thesis shows that Nebula falls short of its own goal of host-to-host connectivity when attempting to traverse specific firewalls and NATs. With respect to security, Nebula provides certificate-based authentication and uses current, fast cryptographic algorithms and protocols from the Noise framework. Regarding manageability, Nebula is a modern solution with a loosely coupled design that allows scalability with cloud-ready features and easier deployment than IPsec. Finally, Nebula's performance clearly shows the overhead of running as a user-space software application. However, the overhead can be considered acceptable in certain server-to-server microservice interactions and is a fair trade-off for its ease of management compared to IPsec.
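One way such a throughput comparison can be measured is sketched below: the same iperf3 server is reached once over its physical address and once over its overlay address, and TCP throughput is compared. The addresses are placeholders, the harness assumes iperf3 is installed with a server listening at both targets, and it is not the thesis's actual benchmark tooling.

```python
import json
import subprocess

# Illustrative throughput harness: run iperf3 against the same server reached
# directly and through the overlay, then compare received TCP throughput.

TARGETS = {
    "direct": "192.0.2.10",          # server's physical address (placeholder)
    "nebula_overlay": "10.99.0.10",  # server's overlay address (placeholder)
}

def throughput_mbps(server: str) -> float:
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", "10", "-J"],  # -J: JSON report
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

for name, addr in TARGETS.items():
    print(name, f"{throughput_mbps(addr):.1f} Mbit/s")
```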
|
356 |
Implementation of Data Path Credentials for High-Performance Capabilities-Based Networks. Vasudevan, Kamlesh T. 01 January 2009
Capabilities-based networks present a fundamental shift in the security design of network architectures. Instead of permitting the transmission of packets from any source to any destination, routers deny forwarding by default. For a successful transmission, packets need to positively identify themselves and their permissions to the router. A major challenge for a high-performance implementation of such a network is the efficient design of the credentials carried in the packet and of the verification procedure on the router. This thesis presents a network protocol that implements data path credentials based on Bloom filters. Our prototype implementation shows that there is some connection-setup cost associated with this type of secure communication. However, once a connection is established, the throughput performance of a capabilities-based connection is similar to that of conventional TCP.
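A minimal sketch of how a Bloom-filter-based data path credential can work, assuming the general scheme described above: routers on the approved path are inserted into the filter at connection setup, and each forwarding router later checks its own identifier against the credential carried in the packet. The filter size, hash count, and router identifiers are illustrative, not the parameters of the thesis prototype.

```python
import hashlib

# Illustrative Bloom-filter credential. A false positive lets an off-path
# router accept the packet, so real parameters are sized for a target rate.

M_BITS = 256   # filter size in bits (assumed)
K_HASHES = 4   # number of hash functions (assumed)

def _positions(item: bytes):
    for i in range(K_HASHES):
        digest = hashlib.sha256(bytes([i]) + item).digest()
        yield int.from_bytes(digest[:4], "big") % M_BITS

def insert(filter_bits: int, item: bytes) -> int:
    for pos in _positions(item):
        filter_bits |= 1 << pos
    return filter_bits

def contains(filter_bits: int, item: bytes) -> bool:
    return all(filter_bits & (1 << pos) for pos in _positions(item))

# Connection setup: routers R1..R3 along the approved path mark the credential.
credential = 0
for router_id in (b"R1", b"R2", b"R3"):
    credential = insert(credential, router_id)

# Data path: each router checks itself against the credential in the packet.
print(contains(credential, b"R2"))  # True  -> forward
print(contains(credential, b"R9"))  # False (up to the false-positive rate) -> drop
```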
|
357 |
Agile Network Security for Software Defined Edge Clouds. Osman, Amr. 07 March 2023
Today's Internet is seeing a massive shift from traditional client-server applications towards real-time, context-sensitive, and highly immersive applications. The fusion of cyber-physical systems, the Internet of Things (IoT), Augmented/Virtual Reality (AR/VR), and the Tactile Internet with Human-in-the-Loop (TaHIL) means that Ultra-Reliable Low-Latency Communication (URLLC) is a key functional requirement.
Mobile Edge Computing (MEC) has emerged as a network architectural paradigm to address such ever-increasing resource demands. MEC leverages networking and computational resource pools that are closer to the end users at the far edge of the network, eliminating the need to send and process large volumes of data over multiple distant hops at central cloud computing data centers. Multiple 'cloudlets' are formed at the edge, and access to resources is shared and federated across them over multiple network domains distributed over various geographical locations.
However, this federated access comes at the cost of a fuzzy and dynamically changing network security perimeter, because there are multiple sources of mobility: not only are the end users mobile, but the applications themselves virtually migrate over multiple network domains and cloudlets to serve the end users, bypassing statically placed network security middleboxes and firewalls. This work aims to address this problem by proposing adaptive network security measures that can be changed dynamically at runtime and are decoupled from the ever-changing network topology. In particular, we: 1) use the state of the art in programmable networking to protect MEC networks from internal adversaries that can adapt and move laterally; 2) automatically infer application security contexts and device vulnerabilities, then evolve the network access control policies to segment the network in a way that minimizes the attack surface with minimal impact on its utility (a minimal sketch of this context-driven segmentation idea follows the table of contents below); and 3) propose new metrics to assess the susceptibility of edge nodes to a new class of stealthy attacks that bypass traditional, statically placed Intrusion Detection Systems (IDS), together with a probabilistic approach to proactively protect them.
Acknowledgments
Acronyms & Abbreviations
1 Introduction
1.1 Prelude
1.2 Motivation and Challenges
1.3 Aim and objectives
1.4 Contributions
1.5 Thesis structure
2 Background
2.1 A primer on computer networks
2.2 Network security
2.3 Network softwarization
2.4 Cloudification of networks
2.5 Securing cloud networks
2.6 Towards Securing Edge Cloud Networks
2.7 Summary
I Adaptive security in consumer edge cloud networks
3 Automatic microsegmentation of smarthome IoT networks
3.1 Introduction
3.2 Related work
3.3 Smart home microsegmentation
3.4 Software-Defined Secure Isolation
3.5 Evaluation
3.6 Summary
4 Smart home microsegmentation with user privacy in mind
4.1 Introduction
4.2 Related Work
4.3 Goals and Assumptions
4.4 Quantifying the security and privacy of SHIoT devices
4.5 Automatic microsegmentation
4.6 Manual microsegmentation
4.7 Experimental setup
4.8 Evaluation
4.9 Summary
II Adaptive security in enterprise edge cloud networks
5 Adaptive real-time network deception and isolation
5.1 Introduction
5.2 Related work
5.3 Sandnet’s concept
5.4 Live Cloning and Network Deception
5.5 Evaluation
5.6 Summary
6 Localization of internal stealthy DDoS attacks on Microservices
6.1 Introduction
6.2 Related work
6.3 Assumptions & Threat model
6.4 Mitigating SILVDDoS
6.5 Evaluation
6.6 Summary
III Summary of Results
7 Conclusion
7.1 Main outcomes
7.2 Future outlook
Listings
Bibliography
List of Algorithms
List of Figures
List of Tables
Appendix
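The sketch referenced in the abstract above illustrates the context-driven segmentation idea in a minimal form: hosts are grouped by an inferred security context, and default-deny access rules are generated so that only the flows each context needs are allowed. The contexts, addresses, and service graph are hypothetical, and the code is not the policy engine developed in the thesis.

```python
# Illustrative context-driven microsegmentation: emit default-deny ACLs that
# only allow the flows each inferred context needs. All data is made up.

inferred_context = {
    "10.0.1.5": "camera",
    "10.0.1.6": "camera",
    "10.0.2.9": "media_server",
    "10.0.3.2": "admin_laptop",
}

# Minimal service graph: which context may talk to which (protocol, port).
allowed_flows = {
    ("camera", "media_server"): [("tcp", 554)],   # RTSP streaming
    ("admin_laptop", "camera"): [("tcp", 443)],   # management UI
}

def generate_acl():
    rules = []
    for (src_ctx, dst_ctx), services in allowed_flows.items():
        for src, s_ctx in inferred_context.items():
            for dst, d_ctx in inferred_context.items():
                if s_ctx == src_ctx and d_ctx == dst_ctx:
                    for proto, port in services:
                        rules.append(("allow", src, dst, proto, port))
    rules.append(("deny", "any", "any", "any", "any"))  # default deny
    return rules

for rule in generate_acl():
    print(rule)
```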
|
358 |
ENHANCING SECURITY IN DOCKER WEB SERVERS USING APPARMOR AND BPFTRACE. Avigyan Mukherjee (15306883). 19 April 2023
Dockerizing web servers has gained significant popularity due to its lightweight containerization approach, enabling rapid and efficient deployment of web services. However, the security of web server containers remains a critical concern. This study proposes a novel approach to enhancing the security of Docker-based web servers: bpftrace is used to trace Nginx and Apache containers under attack, distinguishing abnormal syscalls, connections, shared library calls, and file accesses from normal ones. The gathered metrics are used to generate tailored AppArmor profiles for improved mandatory access control policies and enhanced container security. bpftrace is a high-level tracing language that allows real-time analysis of system events. This research introduces an innovative method for generating AppArmor profiles by utilizing bpftrace to monitor system events, creating customized security policies tailored to the specific needs of Docker-based web servers. Once the profiles are generated, the web server container is redeployed with enhanced security measures in place. This approach increases security by providing granular control and adaptability to address potential threats. The evaluation of the proposed method is conducted using CVEs found in the open-source literature that affect Nginx and Apache web servers and correspond to the classification system that was created. The Apache and Nginx containers were attacked with Metasploit, and benchmark tests, including an ltrace evaluation in accordance with existing literature, were conducted. The results demonstrate the effectiveness of the proposed approach in mitigating security risks and strengthening the overall security posture of Docker-based web servers. This is achieved by limiting memcpy and memset shared library calls identified using bpftrace, applying rlimits in AppArmor to limit their rate to normal levels (as gauged during testing), and denying other harmful file accesses and syscalls. The study's findings contribute to the growing body of knowledge on container security and offer valuable insights for practitioners aiming to develop more secure web server deployments using Docker.
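A minimal sketch of the profile-generation step is shown below: file paths and network usage observed for a container (for example, summarized from bpftrace traces) are turned into an AppArmor profile that permits only what was seen. The observed data is fabricated, and the emitted text is a simplified approximation of AppArmor profile syntax rather than a profile produced by the study's tooling.

```python
# Illustrative generation of an AppArmor-style profile from observed container
# behavior. The "observed" dict stands in for metrics gathered with bpftrace;
# the emitted rules are simplified and would need review before enforcement.

observed = {
    "read_files": ["/etc/nginx/nginx.conf", "/var/www/html/**"],
    "write_files": ["/var/log/nginx/*"],
    "network": True,
}

def build_profile(name: str, obs: dict) -> str:
    lines = [f"profile {name} flags=(enforce) {{",
             "  #include <abstractions/base>"]
    if obs.get("network"):
        lines.append("  network inet tcp,")
    for path in obs.get("read_files", []):
        lines.append(f"  {path} r,")          # allow only observed reads
    for path in obs.get("write_files", []):
        lines.append(f"  {path} w,")          # allow only observed writes
    lines.append("  deny /** w,")             # deny writes anywhere else
    lines.append("}")
    return "\n".join(lines)

print(build_profile("docker-nginx-hardened", observed))
```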
|
359 |
Implementing a Zero Trust Environment for an Existing On-premises Cloud Solution / Implementering av en Zero Trust miljö för en existerande påplats molnlösning. Pero, Victor; Ekman, Linus. January 2023
This thesis project aimed to design and implement a secure system for handling and safeguarding personal data. The purpose of the work is to prevent unauthorized actors from gaining access to systems and data. The proposed solution is a Zero Trust architecture, which emphasizes strong security measures by design and strict access controls. The system must provide minimal access for users and be integrated with the existing cloud-based infrastructure. The result is a system that leverages Keycloak for identity management and authentication services, GitLab to provide a code hosting solution, GPG for commit signing, and OpenVPN for network access. Through the use of GitLab, Keycloak, and OpenVPN, the system achieves a comprehensive design for data protection, user authentication, and network security. This report also highlights alternative methods, future enhancements, and potential improvements to the completed system.
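As one small, illustrative piece of such an integration, the sketch below obtains an OAuth2 access token from Keycloak's OpenID Connect token endpoint using the client-credentials grant. The base URL, realm, client ID, and secret are placeholders, the endpoint path shown is the one used by recent Keycloak releases (older releases prefix it with /auth), and this is not the thesis's actual configuration.

```python
import requests

# Illustrative client-credentials token request against Keycloak. All names
# and secrets below are placeholders for the deployment's real values.

KEYCLOAK_BASE = "https://keycloak.example.internal"
REALM = "infrastructure"        # hypothetical realm name
CLIENT_ID = "gitlab-runner"     # hypothetical client
CLIENT_SECRET = "change-me"

token_url = f"{KEYCLOAK_BASE}/realms/{REALM}/protocol/openid-connect/token"
resp = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
print("token acquired, expires in", resp.json().get("expires_in"), "seconds")
```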
|
360 |
PROACTIVE VULNERABILITY IDENTIFICATION AND DEFENSE CONSTRUCTION -- THE CASE FOR CAN. Khaled Serag Alsharif (8384187). 25 July 2023
The progressive integration of microcontrollers into various domains has transformed traditional mechanical systems into modern cyber-physical systems. However, the beginning of this transformation predated the era of hyper-interconnectedness that characterizes our contemporary world. As such, the principles and visions guiding the design choices of this transformation had not accounted for many of today's security challenges. Many designers had envisioned their systems to operate in an air-gapped-like fashion where few security threats loom. However, with the hyper-connectivity of today's world, many CPS find themselves in uncharted territory for which they are unprepared.
An example of this evolution is the Controller Area Network (CAN). CAN emerged during the transformation of many mechanical systems into cyber-physical systems as a pivotal communication standard, reducing vehicle wiring and enabling efficient data exchange. CAN's features, including noise resistance, decentralization, error handling, and fault confinement mechanisms, made it a widely adopted communication medium not only in transportation but also in diverse applications such as factories, elevators, medical equipment, avionic systems, and naval applications.
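To make one of these features concrete, the sketch below models CAN's priority-based arbitration (the frame with the numerically lowest identifier wins the bus) and hints at why the lack of sender authentication matters for the security discussion that follows. The node names, identifiers, and payloads are made up.

```python
# Illustrative sketch of CAN arbitration: among simultaneously transmitting
# nodes, the frame with the numerically lowest 11-bit identifier wins (a
# dominant 0 bit beats a recessive 1). Frames carry no sender authentication.

def arbitrate(frames):
    """Return the frame that wins bus arbitration (lowest identifier)."""
    return min(frames, key=lambda f: f["can_id"])

pending = [
    {"node": "engine_ecu", "can_id": 0x0C0, "data": b"\x10\x27"},
    {"node": "body_ecu",   "can_id": 0x3E8, "data": b"\x01"},
    {"node": "attacker",   "can_id": 0x001, "data": b"\xff\xff"},  # spoofed high-priority ID
]

winner = arbitrate(pending)
print(f"bus granted to {winner['node']} (ID 0x{winner['can_id']:03X})")
# Nothing in the frame format lets receivers verify that "attacker" is not the
# legitimate sender of ID 0x001 -- one reason proactive defenses are needed.
```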
The increasing connectivity of modern vehicles through CD players, USB sticks, Bluetooth, and WiFi access has exposed CAN systems to unprecedented security challenges and highlighted the need to bolster their security posture. This dissertation addresses the urgent need to enhance the security of modern cyber-physical systems in the face of emerging threats by proposing a proactive vulnerability identification and defense construction approach and applying it to CAN as a lucid case study. By adopting this proactive approach, vulnerabilities can be systematically identified, and robust defense mechanisms can be constructed to safeguard the resilience of CAN systems.
We focus on developing vulnerability scanning techniques and innovative defense system designs tailored for CAN systems. By systematically identifying vulnerabilities before they are discovered and exploited by external actors, we minimize the risks associated with cyber-attacks, ensuring the longevity and reliability of CAN systems. Furthermore, the defense mechanisms proposed in this research overcome the limitations of existing solutions, providing holistic protection against CAN threats while considering its performance requirements and operational conditions.
It is important to emphasize that while this dissertation focuses on CAN, the techniques and rationale used here could be replicated to secure other cyber-physical systems. Specifically, because CAN is present in many cyber-physical systems, it shares many performance and security challenges with those systems, which makes most of the techniques and approaches used here easily transferable to them. By accentuating the importance of proactive security, this research endeavors to establish a foundational approach to cyber-physical systems security and resiliency. It recognizes the evolving nature of cyber-physical systems and the specific security challenges facing each system in today's hyper-connected world, and hence focuses on a single case study.
|