381

Programming Model and Protocols for Reconfigurable Distributed Systems

Arad, Cosmin Ionel January 2013 (has links)
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework, and for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems both within and outside our research group, confirms the practicality of Kompics.
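
Kompics itself is a Java framework, and its actual API is not shown in the abstract above. The following Python sketch is only a rough illustration of the core idea it describes: components that communicate exclusively by passing typed events and that each process their events sequentially on a private queue. All class and event names here are invented for the illustration.

```python
import queue
import threading
import time
from dataclasses import dataclass

# Illustrative sketch of a message-passing component model (not the Kompics Java API).
# Each component owns an event queue and a worker thread; components never share
# state directly, they only exchange typed event objects.

@dataclass
class Ping:
    seq: int

@dataclass
class Pong:
    seq: int

class Component:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.connections = []          # components subscribed to our output events
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def connect(self, other):
        self.connections.append(other)

    def trigger(self, event):
        for target in self.connections:
            target.inbox.put(event)

    def _run(self):
        while True:
            self.handle(self.inbox.get())   # events are handled one at a time

    def handle(self, event):
        raise NotImplementedError

class Pinger(Component):
    def handle(self, event):
        if isinstance(event, Pong):
            print(f"{self.name}: got Pong {event.seq}")

class Ponger(Component):
    def handle(self, event):
        if isinstance(event, Ping):
            self.trigger(Pong(event.seq))

if __name__ == "__main__":
    pinger, ponger = Pinger("pinger"), Ponger("ponger")
    pinger.connect(ponger)
    ponger.connect(pinger)
    pinger.trigger(Ping(1))          # pinger sends, ponger replies
    time.sleep(0.1)                  # give the worker threads time to run
```

In the real framework, components are additionally composed hierarchically and wired through typed ports, which is what enables the dynamic reconfiguration and the shared simulation/deployment code path described in the abstract.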
382

Monitoring of large-scale Cluster Computers

Worm, Stefan 13 April 2007 (has links) (PDF)
The constant monitoring of a computer is essential for staying up to date about its state. This may seem trivial if one is sitting right in front of it, but monitoring a computer from a distance is not as simple anymore. It gets even more difficult if a large number of computers needs to be monitored. Because the process of monitoring always causes some load on the network and on the monitored computer itself, it is important to keep these influences as low as possible. Especially for a high-performance cluster built from many computers, the monitoring approach must work as efficiently as possible and must not interfere with the actual operation of the supercomputer. Thus, the main goals of this work were analyses to ensure the scalability of the monitoring solution for a large computer cluster, as well as a proof of its functionality in practice. To achieve this, monitoring activities were first classified in terms of the overall operation of a large computer system. Thereafter, methods and solutions were presented that are suitable, in a general scenario, for executing the monitoring process as efficiently and scalably as possible. In the course of this work, conclusions were drawn from the operation of an existing cluster for the operation of a new, more powerful system, in order to ensure its functionality as well as possible. Consequently, a selection was made from an existing pool of monitoring applications to find the one most suitable for monitoring the new cluster. The selection took into account the special situation of the system, such as the use of InfiniBand as the network interconnect. Furthermore, additional software was developed that can read and process the various status information of the InfiniBand ports, independent of the hardware vendor. This functionality, which had so far not been available in free monitoring applications, was implemented exemplarily for the chosen monitoring software. Finally, the influence of monitoring activities on the actual tasks of the cluster was of interest. To examine the influence on the CPU and the network, the self-developed plugin as well as a selection of typical monitoring values were used as examples. It could be shown that no impact on the productive applications is to be expected for typical monitoring intervals, and that only for atypically short intervals a minor influence could be determined.
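
The abstract does not name the chosen monitoring software or the exact data source for the InfiniBand counters. As a purely illustrative companion, the sketch below shows what a small vendor-independent check of this kind could look like in Python, assuming the perfquery utility from the infiniband-diags package and a Nagios-style plugin convention (exit codes 0/1/2/3 and a single status line); the counter names, thresholds, and output parsing are assumptions made for the example.

```python
#!/usr/bin/env python3
"""Illustrative Nagios-style check for InfiniBand port error counters.

Assumes output in the style of `perfquery` (infiniband-diags), i.e. lines like
"SymbolErrorCounter:..............0"; adjust the parsing for your environment.
"""
import subprocess
import sys

ERROR_COUNTERS = ["SymbolErrorCounter", "LinkErrorRecoveryCounter", "LinkDownedCounter"]
WARN, CRIT = 1, 10   # illustrative thresholds

def read_counters():
    out = subprocess.run(["perfquery"], capture_output=True, text=True, check=True).stdout
    counters = {}
    for line in out.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            value = value.strip(". \t")          # drop the dotted padding
            if value.isdigit():
                counters[name.strip()] = int(value)
    return counters

def main():
    try:
        counters = read_counters()
    except (OSError, subprocess.CalledProcessError) as exc:
        print(f"UNKNOWN - could not run perfquery: {exc}")
        return 3
    worst = max((counters.get(name, 0) for name in ERROR_COUNTERS), default=0)
    if worst >= CRIT:
        print(f"CRITICAL - IB error counters: {counters}")
        return 2
    if worst >= WARN:
        print(f"WARNING - IB error counters: {counters}")
        return 1
    print("OK - InfiniBand port error counters are clean")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```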
383

Simulation de la dynamique des dislocations à très grande échelle / Hybrid parallelism on large scale dislocation dynamic simulation

Etcheverry, Arnaud 23 November 2015 (has links)
The work carried out during this thesis aims to provide a dislocation dynamics simulation code with the essential components for scaling on modern computers. We address several aspects of the numerical simulation, starting with algorithmic considerations. To make large simulations efficient in terms of algorithmic complexity, we examine the constraints of the different simulation stages, providing an analysis of and improvements to the algorithms. Particular attention is then given to data structures. Taking the new algorithms into account, we propose a data structure that provides efficient access across the memory hierarchy. This structure is modular in order to cope with two kinds of algorithms: on one side the mesh management, which requires dynamic memory management, and on the other the compute-intensive phases, which require fast access. To this end, the modular structure is complemented by an octree that handles the domain decomposition as well as hierarchical algorithms such as the stress-field computation and collision detection. Finally, we present the parallel aspects of the code. We introduce a hybrid approach, with fine-grained thread-based parallelism and coarse-grained MPI parallelism, which requires domain decomposition and load balancing. These contributions are then evaluated to validate their benefits for the numerical simulation. Two case studies are presented to observe and analyze the behavior of the different building blocks of the simulation: first a highly dynamic simulation consisting of Frank-Read sources in a zirconium crystal, followed by results on a target simulation containing a high density of irradiation defects.
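
As a purely illustrative companion to the hybrid parallelization described above (coarse-grained MPI domain decomposition combined with fine-grained thread parallelism), here is a minimal Python sketch using mpi4py and a thread pool. It is not the thesis code: the force kernel, the segment partitioning, and the gathering step are placeholders, and a real implementation would use a native, GIL-free kernel for the thread-level work.

```python
from concurrent.futures import ThreadPoolExecutor

from mpi4py import MPI  # coarse-grained parallelism: one rank per subdomain

def segment_force(segment):
    """Placeholder for the per-segment force computation (fine-grained work item)."""
    return sum(x * x for x in segment)

def local_forces(segments, n_threads=4):
    # Fine-grained parallelism: threads work on the rank-local segments.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(segment_force, segments))

if __name__ == "__main__":
    # Run with e.g.: mpirun -n 4 python hybrid_sketch.py
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Trivial stand-in for domain decomposition: each rank owns a slice of segments.
    all_segments = [[float(i), float(i + 1)] for i in range(64)]
    mine = all_segments[rank::size]

    forces = local_forces(mine)

    # A real code would rebalance load and exchange boundary data here;
    # we only gather a per-rank summary.
    totals = comm.gather(sum(forces), root=0)
    if rank == 0:
        print("per-rank force totals:", totals)
```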
384

Towards better understanding and improving optimization in recurrent neural networks

Kanuparthi, Bhargav 07 1900 (has links)
Recurrent neural networks (RNNs) are known for their notorious exploding and vanishing gradient problem (EVGP). This problem becomes more evident in tasks where the information needed to solve them correctly exists over long time scales, because it prevents important gradient components from being back-propagated adequately over a large number of steps. The papers comprising this work formalize gradient propagation in parametric and semi-parametric RNNs to gain a better understanding of the source of this problem. The first paper introduces a simple stochastic algorithm (h-detach) that is specific to LSTM optimization and targeted at the EVGP problem. Using it, we show significant improvements over vanilla LSTM in terms of convergence speed, robustness to seed and learning rate, and generalization on various benchmark datasets. The next paper focuses on semi-parametric RNNs and self-attentive networks. Self-attention provides a way for a system to dynamically access past states (stored in memory), which helps mitigate vanishing gradients. Although useful, self-attention is difficult to scale, as the size of the computational graph grows quadratically with the number of time steps involved. In the paper we describe a relevancy screening mechanism, inspired by the cognitive process of memory consolidation, that allows for a scalable use of sparse self-attention with recurrence while ensuring good gradient propagation.
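
The abstract only names h-detach; the idea appears to be to stochastically block the gradient path through the LSTM hidden state while leaving the cell-state path intact. The PyTorch sketch below is an interpretation of that idea for a manually unrolled LSTM, not the authors' reference implementation; the detach probability and the tensor shapes are placeholder values.

```python
import torch
import torch.nn as nn

class HDetachLSTM(nn.Module):
    """Manually unrolled LSTM where, with probability p, gradients through the
    hidden state h are blocked at a time step (the cell state c is never detached)."""

    def __init__(self, input_size, hidden_size, p_detach=0.25):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.hidden_size = hidden_size
        self.p_detach = p_detach

    def forward(self, x):                      # x: (seq_len, batch, input_size)
        batch = x.size(1)
        h = x.new_zeros(batch, self.hidden_size)
        c = x.new_zeros(batch, self.hidden_size)
        outputs = []
        for t in range(x.size(0)):
            h_in = h
            if self.training and float(torch.rand(())) < self.p_detach:
                h_in = h.detach()              # block gradient flow through h only
            h, c = self.cell(x[t], (h_in, c))
            outputs.append(h)
        return torch.stack(outputs), (h, c)

# Tiny usage example with random data (shapes are arbitrary).
if __name__ == "__main__":
    model = HDetachLSTM(input_size=8, hidden_size=16)
    seq = torch.randn(20, 4, 8)
    out, _ = model(seq)
    out.mean().backward()
```

Because only the hidden-state input is detached, the final backward pass still propagates gradients across all time steps through the cell-state path, which is the property the abstract credits for the improved optimization behavior.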
385

Pokročilá statická analýza atomičnosti v paralelních programech v prostředí Facebook Infer / Advanced Static Analysis of Atomicity in Concurrent Programs through Facebook Infer

Harmim, Dominik January 2021 (has links)
Atomer is a static analyzer based on the idea that if certain sequences of functions in a multi-threaded program are executed under locks in some runs, they are probably intended to always execute atomically. The Atomer analyzer therefore looks for such sequences and then determines for which of them atomicity may be violated in some other runs of the program. In his bachelor's thesis, the author of this master's thesis designed and implemented the first version of Atomer as a plugin of the Facebook Infer framework. In this master's thesis, a new and significantly improved version of the Atomer analyzer is proposed. The improvements aim to increase both scalability and precision. In addition, support was added for several previously unsupported programming features (including, e.g., the ability to analyze programs written in C++ and Java, and support for re-entrant locks and lock guards). A series of experiments (including experiments with real-world programs and real bugs) showed that the new version of Atomer is indeed much more general and precise and scales better.
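
To make the analyzed property concrete, here is a small hypothetical example (in Python, for brevity) of the pattern an atomicity analyzer like Atomer targets: a call sequence that is executed under a lock on one path and without the lock on another, breaking the intended atomicity. The example is not taken from the thesis, and Atomer itself works on C/C++/Java programs through Facebook Infer rather than on Python.

```python
import threading

lock = threading.Lock()
balances = {"alice": 100, "bob": 0}

def read_balance(account):
    return balances[account]

def write_balance(account, value):
    balances[account] = value

def transfer_safe(src, dst, amount):
    # The (read_balance, write_balance) sequence runs under the lock,
    # so the check-then-update pair is atomic with respect to other threads.
    with lock:
        if read_balance(src) >= amount:
            write_balance(src, read_balance(src) - amount)
            write_balance(dst, read_balance(dst) + amount)

def transfer_buggy(src, dst, amount):
    # Same call sequence without the lock: another thread can interleave
    # between the read and the write. This mismatch between locked and
    # unlocked occurrences of the sequence is what such an analyzer reports.
    if read_balance(src) >= amount:
        write_balance(src, read_balance(src) - amount)
        write_balance(dst, read_balance(dst) + amount)
```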
386

Mobilní systém pro sběr zpětné vazby zákazníků / Mobile System for Customer Feedback Collection

Kadlubiec, Jakub January 2013 (has links)
This thesis describes the development of Huerate, a mobile system for monitoring customer satisfaction and collecting feedback from restaurant visitors. All phases of the system's development are described comprehensively. The first part of the thesis analyzes existing solutions and the state of the market. Subsequently, system requirements are compiled based on communication with restaurant owners. Finally, the thesis covers the design of the system itself, its implementation, and its deployment in restaurants. The Huerate system runs as a web application and is available at http://huerate.cz.
387

Monitoring of large-scale Cluster Computers

Worm, Stefan 12 February 2007 (has links)
The constant monitoring of a computer is essential for staying up to date about its state. This may seem trivial if one is sitting right in front of it, but monitoring a computer from a distance is not as simple anymore. It gets even more difficult if a large number of computers needs to be monitored. Because the process of monitoring always causes some load on the network and on the monitored computer itself, it is important to keep these influences as low as possible. Especially for a high-performance cluster built from many computers, the monitoring approach must work as efficiently as possible and must not interfere with the actual operation of the supercomputer. Thus, the main goals of this work were analyses to ensure the scalability of the monitoring solution for a large computer cluster, as well as a proof of its functionality in practice. To achieve this, monitoring activities were first classified in terms of the overall operation of a large computer system. Thereafter, methods and solutions were presented that are suitable, in a general scenario, for executing the monitoring process as efficiently and scalably as possible. In the course of this work, conclusions were drawn from the operation of an existing cluster for the operation of a new, more powerful system, in order to ensure its functionality as well as possible. Consequently, a selection was made from an existing pool of monitoring applications to find the one most suitable for monitoring the new cluster. The selection took into account the special situation of the system, such as the use of InfiniBand as the network interconnect. Furthermore, additional software was developed that can read and process the various status information of the InfiniBand ports, independent of the hardware vendor. This functionality, which had so far not been available in free monitoring applications, was implemented exemplarily for the chosen monitoring software. Finally, the influence of monitoring activities on the actual tasks of the cluster was of interest. To examine the influence on the CPU and the network, the self-developed plugin as well as a selection of typical monitoring values were used as examples. It could be shown that no impact on the productive applications is to be expected for typical monitoring intervals, and that only for atypically short intervals a minor influence could be determined.
388

IMPLEMENTING NETCONF AND YANG ON CUSTOM EMBEDDED SYSTEMS

Georges, Krister, Jahnstedt, Per January 2023 (has links)
Simple Network Management Protocol (SNMP) has been the traditional approach for configuring and monitoring network devices, but its limitations in security and automation have driven the exploration of alternative solutions. The Network Configuration Protocol (NETCONF) and the Yet Another Next Generation (YANG) data modeling language significantly improve security and automation capabilities. This thesis investigates the feasibility of implementing a NETCONF server on the Anybus CompactCom (ABCC) Industrial Internet of Things (IIoT) Security module, an embedded device with limited processing power and memory, running a custom operating system and relying on open-source projects with MbedTLS as the cryptographic primitive library. The project assesses implementing a YANG model to describe the ABCC's configurable interface, connecting with a NETCONF client to exchange capabilities, monitoring specific attributes or interfaces on the device, and invoking remote procedure call (RPC) commands to configure the ABCC settings. The goal is to provide a proof of concept and contribute to the growing trend of adopting NETCONF and YANG in the industry, particularly for the IIoT platform of Hardware Meets Software (HMS).
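
The thesis concerns the server side running on the embedded module. For context only, the sketch below shows what the client side of the described workflow could look like with the Python ncclient library: the capability exchange at session setup, reading the running configuration, and pushing a change with an edit-config RPC. The host address, credentials, and XML payload (including its namespace) are placeholders and do not reflect the ABCC's actual YANG model.

```python
from ncclient import manager  # third-party NETCONF client library

# Placeholder connection details for an embedded NETCONF server.
HOST, PORT = "192.0.2.10", 830
EDIT_PAYLOAD = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:example:device-config">
    <interface>
      <name>eth0</name>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host=HOST, port=PORT, username="admin",
                     password="admin", hostkey_verify=False) as m:
    # The capability exchange happens during session setup; inspect what the server offers.
    for cap in m.server_capabilities:
        print(cap)

    # Read the running configuration (monitoring use case).
    running = m.get_config(source="running")
    print(running.xml)

    # Push a configuration change (<edit-config> against the running datastore).
    reply = m.edit_config(target="running", config=EDIT_PAYLOAD)
    print(reply.ok)
```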
389

Cloud Computing and the GLAM sector: A case study of the new Digital Archive Project of Åland Maritime Museum.

Faruqi, Ubaid Ali January 2023 (has links)
This thesis examines the benefits and drawbacks of cloud computing technology within the GLAM (Galleries, Libraries, Archives, and Museums) sector in Sweden and Finland. It employs the case study of the recently developed and launched Digital Archive Project at Åland Maritime Museum, which leveraged the Amazon Web Services (AWS) technology stack to provide a cloud-based digital platform for the museum's archival materials. The primary objective of this study is to understand the interaction, usage, and suitability of cloud computing technologies, and the impact of user experience (UX) for the primary users, GLAM professionals, on digitalization efforts. The study analyzes eight GLAM institutions in Sweden and Finland using semi-structured interviews and compares their trust in private cloud service providers and their readiness to adopt them. The findings reveal that Finland has a more 'aggressive' and experimental approach to newer technologies such as cloud computing tools than Sweden. In Sweden, there is an appreciation for pleasant UX and for methods that make heritage material more accessible, but also considerable hesitation due to data privacy regulations in the aftermath of the Schrems II judgment and the invalidation of the EU-U.S. Privacy Shield agreement. The study concludes that AWS as a cloud provider is more difficult to incorporate in public-sector GLAM institutions than in the private sector. The study also provides practical recommendations for GLAM institutions and professionals and calls for further interdisciplinary research with digital humanists at the center of it.
390

SurvSec Security Architecture for Reliable Surveillance WSN Recovery from Base Station Failure

Megahed, Mohamed Helmy Mostafa 30 May 2014 (has links)
Surveillance wireless sensor networks (WSNs) are highly vulnerable to failure of the base station (BS), because attackers can easily render the network useless for relatively long periods of time by destroying only the BS. The time and effort needed to destroy the BS is much less than that needed to destroy the numerous sensing nodes. Previous works have tackled BS failure by deploying a mobile BS or by using multiple BSs, which incurs extra cost. Moreover, despite using the best electronic countermeasures, intrusion-tolerance systems, and anti-traffic-analysis strategies to protect the BSs, an adversary can still destroy them, and the new BS cannot trust the deployed sensor nodes. Previous works also lack both procedures to ensure network reliability and security during BS failure, such as storing and then sending reports about security threats against nodes to the new BS, and procedures to verify the trustworthiness of the deployed sensing nodes. Otherwise, a new WSN must be re-deployed, which involves a high cost and requires time for deployment and setup.

In this thesis, we address the problem of reliable recovery from a BS failure by proposing a new security architecture called Surveillance Security (SurvSec). SurvSec continuously monitors the network for security threats and stores data related to node security, detects and authenticates the new BS, and recovers the stored data at the new BS. SurvSec encrypts security-related information using an efficient dynamic secret sharing algorithm, whereas previous work requires heavy computation for dynamic secret sharing. SurvSec includes a compromised-node detection protocol that is effective against attackers working collaboratively at the same time, a setting in which previous works have been inefficient. SurvSec includes a key management scheme for homogeneous WSNs, whereas previous works assume heterogeneous WSNs using high-end sensor nodes (HSNs), which are the best target for attackers. SurvSec includes an efficient encryption architecture that resists quantum computers with low encryption and decryption delay, whereas previous works have had high delay for large data sizes (AES-256, for example, has 14 rounds and high delay).

SurvSec consists of five components: 1. a hierarchical data storage and data recovery system; 2. security for the stored data using a new dynamic secret sharing algorithm; 3. a first-stage compromised-node detection algorithm; 4. a hybrid and dynamic key management scheme for homogeneous networks; 5. a powerful encryption architecture for the post-quantum era with low time delay.

This thesis introduces six new contributions: 1. the development of the new security architecture, Surveillance Security (SurvSec), based on distributed Security Managers (SMs) to enable distributed network security and distributed secure storage; 2. the design of a new dynamic secret sharing algorithm that secures the stored data using distributed user tables; 3. a new algorithm to detect compromised nodes at the first stage, when a group of attackers captures many legitimate nodes after the base station's destruction, designed to be resistant against a group of attackers working at the same time to compromise many legitimate nodes during the base station failure; 4. a hybrid and dynamic key management scheme for homogeneous networks, called certificates shared verification key management; 5. a new encryption architecture, called the spread spectrum encryption architecture (SSEA), to resist quantum-computer attacks; 6. a hardware implementation of reliable network recovery from BS failure.

The components of the new SurvSec security architecture are described, followed by a simulation and analytical study of the proposed solutions to show their performance.
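
The thesis proposes its own dynamic secret sharing algorithm, which the abstract does not spell out. As background for that component, here is a minimal sketch of classic Shamir (t, n) threshold secret sharing over a prime field, the scheme that dynamic variants build on; the prime, the threshold, and the toy secret are illustrative choices and none of this reflects the SurvSec design itself.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy example

def make_shares(secret, threshold, n_shares, prime=PRIME):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the secret from >= threshold shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

if __name__ == "__main__":
    shares = make_shares(secret=123456789, threshold=3, n_shares=5)
    print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```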
