371 |
MAC and Physical Layer Design for Ultra-Wideband Communications
Kumar, Nishant 25 May 2004 (has links)
Ultra-Wideband has recently gained great interest for high-speed short-range communications (e.g. home networking applications) as well as low-speed long-range communications (e.g. sensor network applications). Two flavors of UWB have recently emerged as strong contenders for the technology. One is based on Impulse Radio techniques extended to direct sequence spread spectrum. The other technique is based on Orthogonal Frequency Division Multiplexing. Both schemes are analyzed in this thesis and modifications are proposed to increase the performance of each system. For both schemes, the issue of simultaneously operating users has been investigated.
Current MAC design for UWB has relied heavily on existing MAC architectures in order to maintain backward compatibility. It remains to be seen whether the existing MACs adequately support the UWB PHY (physical) layer for the applications envisioned for UWB. Thus, in this work we propose a new MAC scheme for an Impulse Radio-based UWB PHY: a CDMA approach that uses a code broker in a piconet architecture. The performance of the proposed scheme is compared with the traditional CSMA scheme as well as with the receiver-based code assignment scheme.
A new scheme is proposed to increase the overall performance of the Multiband-OFDM system. Two schemes proposed to increase the performance of the system in the presence of simultaneously operating piconets (namely Half Pulse Repetition Frequency and Time spreading) are studied. The advantages/disadvantages of both of the schemes are discussed. / Master of Science
|
372 |
Mitigation of network tampering using dynamic dispatch of mobile agents
Rocke, Adam Jay 01 January 2004 (has links)
No description available.
|
373 |
Analysis of the MAC protocol in low rate wireless personal area networks with bursty ON-OFF traffic
Gao, J.L., Hu, J., Min, Geyong, Xu, L. January 2013 (has links)
No / Supported by the IEEE 802.15.4 standard, embedded sensor networks have become popular and been widely deployed in recent years. The IEEE 802.15.4 medium access control (MAC) protocol is uniquely designed to meet the requirements of low end-to-end delay, low packet loss, and low power consumption in low rate wireless personal area networks (LR-WPANs). This paper develops an analytical model to quantify the key performance metrics of the MAC protocol in LR-WPANs with bursty ON-OFF traffic. This study fills a gap in the literature by removing the assumptions of saturated traffic or non-bursty unsaturated traffic conditions, which are unable to capture the characteristics of bursty multimedia traffic in sensor networks. The analytical model can be used to derive the QoS performance metrics in terms of throughput and total delay. The accuracy of the model is verified through NS-2 (http://www.isi.edu/nsnam/ns/) simulation experiments. The model is then adopted to investigate the performance of the MAC protocol in LR-WPANs under various traffic patterns, different loads, and various numbers of stations. Numerical results show that the traffic patterns and traffic burstiness have a significant impact on the delay performance of LR-WPANs.
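The bursty ON-OFF traffic assumption can be made concrete with a small sketch (not the authors' analytical model, just an illustration): a two-state Markov source that emits packets only in the ON state, so that geometrically distributed sojourn times in each state produce bursts rather than Poisson-like arrivals.

```python
import random

def on_off_traffic(p_on_to_off, p_off_to_on, n_slots, rate_on=1, seed=42):
    """Generate per-slot packet counts from a two-state (ON/OFF) Markov source.

    In the ON state the source emits `rate_on` packets per slot; in the OFF
    state it emits none. The geometric sojourn times in each state make the
    resulting traffic bursty rather than smooth.
    """
    rng = random.Random(seed)
    state_on = True
    slots = []
    for _ in range(n_slots):
        slots.append(rate_on if state_on else 0)
        # Transition at the end of the slot with the state's exit probability.
        if state_on and rng.random() < p_on_to_off:
            state_on = False
        elif not state_on and rng.random() < p_off_to_on:
            state_on = True
    return slots

# The long-run load follows the stationary probability of the ON state:
# pi_on = p_off_to_on / (p_on_to_off + p_off_to_on), here 0.5.
trace = on_off_traffic(0.1, 0.1, 10_000)
print(sum(trace) / len(trace))  # close to pi_on = 0.5
```

Feeding such a trace into a MAC simulation exposes the queueing behavior that saturated or non-bursty traffic assumptions miss.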
|
374 |
Segmentering av lokala nätverk - För mikro- och småorganisationer
Hermansson, Christopher, Johansson, Sebastian January 2010 (has links)
Syftet med den här rapporten är att beskriva ett antal olika tillvägagångssätt man kan använda sig av då man har behov av att dela in ett lokalt nätverk i olika segment och med det även kunna reglera trafikflödet mellan segmenten. De lösningar som presenteras i arbetet är inriktade mot mikro- och småföretag. Anledningen till att vi har valt att arbeta med det här området är att vi anser att det är viktigt för organisationer att ha en strukturerad och segmenterad design på sitt interna datornätverk. Vi har arbetat genom att i förväg samla in information om olika tekniker som kan tänkas lösa vårt problem, och därefter testat olika scenarion med dessa tekniker. Data har samlats in efter varje genomfört scenario och sammanställts i statistisk form för att kunna avgöra vilken metod som var att föredra. Vi har testat lösningar där man segmenterar nätverket i en lager 2-switch medan man möjliggör och förhindrar trafikflöde mellan segmenten i en router. Även lösningar där man använder en lager 3-switch har testats. På så sätt kan routningen ske direkt i switchen och det blir betydligt mindre belastning i routern. Resultatet visar att då man vill segmentera ett nätverk så är det rekommenderat att man använder sig av VLAN och ACL:er och eventuellt i kombination med en brandvägg. Slutresultatet av rapporten är att en lösning med ”router on a stick” är den billigaste lösningen och troligen den som de flesta mindre företag skulle klara sig med. Vilken lösning man väljer beror dock helt på hur mycket pengar man vill lägga på sitt nätverk samt vad kraven är. / The purpose of this report is to describe a number of approaches that can be used when you are in need of dividing a local area network into a number of segments, and with that also be able to control how data traffic is allowed to traverse between the different segments. 
The solutions that are presented are focused towards micro and small companies. The reason that we have chosen to work with this matter is that we believe it is important for organizations to have a structured and segmented design of their internal computer network. We have worked by collecting information in advance about various techniques that might solve our problem, and then testing different scenarios using these techniques. Data have been collected after each tested scenario and compiled in statistical form in order to determine which method was preferable. We have tested solutions where you segment the network in a layer 2 switch while you allow or deny communication between the segments in a router, and also solutions where you use a layer 3 switch. In that way the routing can be performed directly in the switch, which leads to significantly lower load on the router. The result is that if you are about to segment a local area network it is recommended that you use VLANs and ACLs, possibly in combination with a firewall. The final result of this report is that a solution using the “router on a stick” technique is the cheapest one, and probably the one that most small companies would get along with. However, the solution that you choose depends completely on how much money you want to spend on your network, and also what the needs are.
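The VLAN/ACL filtering the report evaluates can be illustrated with a small, hypothetical model of first-match ACL semantics (the networks and rules below are invented for illustration; real routers apply the same first-match-then-implicit-deny logic in their own configuration syntax):

```python
import ipaddress

# A hypothetical first-match ACL: each rule is
# (action, source network, destination network).
ACL = [
    ("deny",   "192.168.10.0/24", "192.168.20.0/24"),  # block VLAN 10 -> VLAN 20
    ("permit", "0.0.0.0/0",       "0.0.0.0/0"),        # allow everything else
]

def acl_allows(src, dst, acl=ACL):
    """Return True if the first matching rule permits src -> dst traffic."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for action, src_net, dst_net in acl:
        if (src_ip in ipaddress.ip_network(src_net)
                and dst_ip in ipaddress.ip_network(dst_net)):
            return action == "permit"
    return False  # implicit deny when no rule matches

print(acl_allows("192.168.10.5", "192.168.20.7"))  # False: inter-VLAN rule denies
print(acl_allows("192.168.10.5", "192.168.30.7"))  # True: falls through to permit
```

In a "router on a stick" setup this evaluation would happen on the router's subinterfaces, one per VLAN.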
|
375 |
MiniCA: A web-based certificate authority
Macdonell, James Patrick 01 January 2007 (has links)
The MiniCA project is proposed and developed to address growing demand for inexpensive access to security features such as privacy, strong authentication, and digital signatures. These features are integral to public-key encryption technologies. The audience for whom the software project is intended includes technical staff requiring certificates for use in SSL applications (i.e., a secure web site) at California State University, San Bernardino.
|
376 |
Privacy enforcement with data owner-defined policies
Scheffler, Thomas January 2013 (has links)
This thesis proposes a privacy protection framework for the controlled distribution and use of personal private data. The framework is based on the idea that privacy policies can be set directly by the data owner and can be automatically enforced against the data user.
Data privacy continues to be a very important topic, as our dependence on electronic communication keeps growing and private data is shared between multiple devices, users and locations. The growing amount and the ubiquitous availability of personal private data increase the likelihood of data misuse.
Early privacy protection techniques, such as anonymous email and payment systems, focused on data avoidance and anonymous use of services. They did not take into account that data sharing cannot be avoided when people participate in electronic communication scenarios that involve social interactions. This leads to a situation where data is shared widely and uncontrollably, and in most cases the data owner has no control over the further distribution and use of personal private data.
Previous efforts to integrate privacy awareness into data processing workflows have focused on the extension of existing access control frameworks with privacy aware functions or have analysed specific individual problems such as the expressiveness of policy languages. So far, very few implementations of integrated privacy protection mechanisms exist and can be studied to prove their effectiveness for privacy protection. Second level issues that stem from practical application of the implemented mechanisms, such as usability, life-time data management and changes in trustworthiness have received very little attention so far, mainly because they require actual implementations to be studied.
Most existing privacy protection schemes silently assume that it is the privilege of the data user to define the contract under which personal private data is released. Such an approach simplifies policy management and policy enforcement for the data user, but leaves the data owner with a binary decision to submit or withhold his or her personal data based on the provided policy.
We wanted to empower the data owner to express his or her privacy preferences through privacy policies that follow the so-called Owner-Retained Access Control (ORAC) model. ORAC was proposed by McCollum et al. as an alternative access control mechanism that leaves the authority over access decisions with the originator of the data.
The data owner is given control over the release policy for his or her personal data, and he or she can set permissions or restrictions according to individually perceived trust values. Such a policy needs to be expressed in a coherent way and must allow the deterministic policy evaluation by different entities.
The privacy policy also needs to be communicated from the data owner to the data user, so that it can be enforced. Data and policy are stored together as a Protected Data Object that follows the Sticky Policy paradigm as defined by Mont, et al. and others.
We developed a unique policy combination approach that takes usability aspects for the creation and maintenance of policies into consideration. Our privacy policy consists of three parts: A Default Policy provides basic privacy protection if no specific rules have been entered by the data owner. An Owner Policy part allows the customisation of the default policy by the data owner. And a so-called Safety Policy guarantees that the data owner cannot specify disadvantageous policies, which, for example, exclude him or her from further access to the private data. The combined evaluation of these three policy-parts yields the necessary access decision.
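The three-part policy combination described above might be sketched as follows (a simplified illustration under the assumption that each policy part either answers a request or abstains; the thesis' actual policy language is richer):

```python
def evaluate(request, default_policy, owner_policy, safety_policy):
    """Combine the three policy parts: the Safety Policy has the highest
    priority, the Owner Policy may override the Default Policy, and the
    Default Policy answers when nothing more specific matches.

    Each policy maps a request (here, just an action name) to
    "permit"/"deny", or abstains by not covering the request.
    """
    for policy in (safety_policy, owner_policy, default_policy):
        decision = policy.get(request)
        if decision is not None:
            return decision
    return "deny"  # fall back to denial if no part covers the request

default_policy = {"read": "deny", "share": "deny"}   # basic protection
owner_policy   = {"read": "permit"}                  # owner relaxes the default
safety_policy  = {"owner_read": "permit"}            # owner cannot lock himself out

print(evaluate("read", default_policy, owner_policy, safety_policy))   # permit
print(evaluate("share", default_policy, owner_policy, safety_policy))  # deny
```

The fixed priority order is what guarantees that a disadvantageous owner rule can never override the Safety Policy.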
The automatic enforcement of privacy policies in our protection framework is supported by a reference monitor implementation. We started our work with the development of a client-side protection mechanism that allows the enforcement of data-use restrictions after private data has been released to the data user. The client-side enforcement component for data-use policies is based on a modified Java Security Framework. Privacy policies are translated into corresponding Java permissions that can be automatically enforced by the Java Security Manager.
When we later extended our work to implement server-side protection mechanisms, we found several drawbacks for the privacy enforcement through the Java Security Framework. We solved this problem by extending our reference monitor design to use Aspect-Oriented Programming (AOP) and the Java Reflection API to intercept data accesses in existing applications and provide a way to enforce data owner-defined privacy policies for business applications. / Im Rahmen der Dissertation wurde ein Framework für die Durchsetzung von Richtlinien zum Schutz privater Daten geschaffen, welches darauf setzt, dass diese Richtlinien oder Policies direkt von den Eigentümern der Daten erstellt werden und automatisiert durchsetzbar sind.
Der Schutz privater Daten ist ein sehr wichtiges Thema im Bereich der elektronischen Kommunikation, welches durch die fortschreitende Gerätevernetzung und die Verfügbarkeit und Nutzung privater Daten in Onlinediensten noch an Bedeutung gewinnt.
In der Vergangenheit wurden verschiedene Techniken für den Schutz privater Daten entwickelt: so genannte Privacy Enhancing Technologies. Viele dieser Technologien arbeiten nach dem Prinzip der Datensparsamkeit und der Anonymisierung und stehen damit der modernen Netznutzung in Sozialen Medien entgegen. Das führt zu der Situation, dass private Daten umfassend verteilt und genutzt werden, ohne dass der Datenbesitzer gezielte Kontrolle über die Verteilung und Nutzung seiner privaten Daten ausüben kann.
Existierende richtlinienbasierte Datenschutztechniken gehen in der Regel davon aus, dass der Nutzer und nicht der Eigentümer der Daten die Richtlinien für den Umgang mit privaten Daten vorgibt. Dieser Ansatz vereinfacht das Management und die Durchsetzung der Zugriffsbeschränkungen für den Datennutzer, lässt dem Datenbesitzer aber nur die Alternative, den Richtlinien des Datennutzers zuzustimmen oder keine Daten weiterzugeben.
Es war daher unser Ansatz die Interessen des Datenbesitzers durch die Möglichkeit der Formulierung eigener Richtlinien zu stärken. Das dabei verwendete Modell zur Zugriffskontrolle wird auch als Owner-Retained Access Control (ORAC) bezeichnet und wurde 1990 von McCollum u.a. formuliert. Das Grundprinzip dieses Modells besteht darin, dass die Autorität über Zugriffsentscheidungen stets beim Urheber der Daten verbleibt.
Aus diesem Ansatz ergeben sich zwei Herausforderungen. Zum einen muss der Besitzer der Daten, der Data Owner, in die Lage versetzt werden, aussagekräftige und korrekte Richtlinien für den Umgang mit seinen Daten formulieren zu können. Da es sich dabei um normale Computernutzer handelt, muss davon ausgegangen werden, dass diese Personen auch Fehler bei der Richtlinienerstellung machen.
Wir haben dieses Problem dadurch gelöst, dass wir die Datenschutzrichtlinien in drei separate Bereiche mit unterschiedlicher Priorität aufteilen. Der Bereich mit der niedrigsten Priorität definiert grundlegende Schutzeigenschaften. Der Dateneigentümer kann diese Eigenschaften durch eigene Regeln mittlerer Priorität überschreiben. Darüber hinaus sorgt ein Bereich mit Sicherheitsrichtlinien hoher Priorität dafür, dass bestimmte Zugriffsrechte immer gewahrt bleiben.
Die zweite Herausforderung besteht in der gezielten Kommunikation der Richtlinien und deren Durchsetzung gegenüber dem Datennutzer (auch als Data User bezeichnet).
Um die Richtlinien dem Datennutzer bekannt zu machen, verwenden wir so genannte Sticky Policies. Das bedeutet, dass wir die Richtlinien über eine geeignete Kodierung an die zu schützenden Daten anhängen, so dass jederzeit darauf Bezug genommen werden kann und auch bei der Verteilung der Daten die Datenschutzanforderungen der Besitzer erhalten bleiben.
Für die Durchsetzung der Richtlinien auf dem System des Datennutzers haben wir zwei verschiedene Ansätze entwickelt. Wir haben einen so genannten Reference Monitor entwickelt, welcher jeglichen Zugriff auf die privaten Daten kontrolliert und anhand der in der Sticky Policy gespeicherten Regeln entscheidet, ob der Datennutzer den Zugriff auf diese Daten erhält oder nicht.
Dieser Reference Monitor wurde zum einen als Client-seitige Lösung implementiert, die auf dem Sicherheitskonzept der Programmiersprache Java aufsetzt. Zum anderen wurde auch eine Lösung für Server entwickelt, welche mit Hilfe der Aspekt-orientierten Programmierung den Zugriff auf bestimmte Methoden eines Programms kontrollieren kann.
In dem Client-seitigen Referenzmonitor werden Privacy Policies in Java Permissions übersetzt und automatisiert durch den Java Security Manager gegenüber beliebigen Applikationen durchgesetzt. Da dieser Ansatz beim Zugriff auf Daten mit anderer Privacy Policy den Neustart der Applikation erfordert, wurde für den Server-seitigen Referenzmonitor ein anderer Ansatz gewählt. Mit Hilfe der Java Reflection API und Methoden der Aspektorientierten Programmierung gelang es Datenzugriffe in existierenden Applikationen abzufangen und erst nach Prüfung der Datenschutzrichtlinie den Zugriff zuzulassen oder zu verbieten. Beide Lösungen wurden auf ihre Leistungsfähigkeit getestet und stellen eine Erweiterung der bisher bekannten Techniken zum Schutz privater Daten dar.
|
377 |
Protocoles de routage sans connaissance de voisinage pour réseaux radio multi-sauts / Beacon-less geographic routing for multihop wireless sensor networks
Amadou, Ibrahim 06 September 2012 (has links)
L'efficacité énergétique constitue l'objectif clef pour la conception des protocoles de communication pour des réseaux de capteurs radio multi-sauts. Beaucoup d'efforts ont été réalisés à différents niveaux de la pile protocolaire à travers des algorithmes d'agrégation spatiale et temporelle des données, des protocoles de routage efficaces en énergie, et des couches d'accès au médium avec des mécanismes d'ordonnancement permettant de mettre la radio en état d'endormissement afin d'économiser l'énergie. Pour autant, ces protocoles utilisent de façon importante des paquets de contrôle et de découverte du voisinage qui sont coûteux en énergie. En outre, cela se fait très souvent sans aucune interaction entre les différentes couches de la pile. Ces travaux de thèse s'intéressent donc particulièrement à la problématique de l'énergie des réseaux de capteurs à travers des protocoles de routage et d'accès au médium. Les contributions de cette thèse se résument de la manière suivante : Nous nous sommes tout d'abord intéressés à la problématique de l'énergie au niveau routage. Dans cette partie, les contributions se subdivisent en deux parties. Dans un premier temps, nous avons proposé une analyse théorique de la consommation d'énergie des protocoles de routage des réseaux radio multi-sauts d'appréhender au mieux les avantages et les inconvénients des uns et des autres en présence des modèles de trafic variables, un diamètre du réseau variable également et un modèle radio qui permet de modéliser les erreurs de réception des paquets. À l'issue de cette première étude, nous sommes parvenus à la conclusion que pour être économe en énergie, un protocole de routage doit avoir des approches similaires à celle des protocoles de routage géographique sans message hello. 
Puis, dans un second temps, nous introduisons une étude de l'influence des stratégies de relayage dans un voisinage à 1 saut sur les métriques de performance comme le taux de livraison, le nombre de messages dupliqués et la consommation d'énergie. Cette étude est suivie par une première proposition de protocole de routage géographique sans message hello (Pizza-Forwarding (PF)) exploitant des zones de relayage optimisées et sans aucune hypothèse sur les propriétés du canal radio. Dans le but de réduire considérablement la consommation de PF, nous proposons de le combiner avec une adaptation d'un protocole MAC asynchrone efficace en énergie à travers une approche transversale. La combinaison de ces deux approches montre un gain significatif en termes d'économie d'énergie avec de très bons taux de livraison, et cela quels que soient les scénarios et la nature de la topologie. / Energy efficiency is a primary design goal for Wireless Sensor Networks (WSNs). Many efforts have been made to save energy throughout the protocol stack through temporal and spatial data aggregation schemes, energy-aware routing protocols, activity scheduling and energy-efficient MAC protocols with duty cycles. However, both control packets and beacons remain, which induces a huge waste of energy. Moreover, their design follows the classical layered approach with the principle of modularity in system development, which can lead to poor performance in WSNs. This thesis focuses on the issues of energy in WSNs through energy-efficient routing and medium access control protocols. The contributions of this thesis can be summarized as follows: First, we are interested in the energy issues at the routing layer for multihop wireless sensor networks (WSNs). 
We propose a mathematical framework to model and analyze the energy consumption of routing protocols in multihop WSNs by taking into account the protocol parameters, the traffic pattern and the network characteristics defined by the medium channel properties, the dynamic topology behavior, the network diameter and the node density. In this study, we show that a beacon-less routing protocol should be the best candidate to save energy in WSNs. We investigate the performance of some existing relay selection schemes which are used by beacon-less routing protocols. Extensive simulations are proposed to evaluate their performance locally in terms of packet delivery ratio, duplicated packets and delay. Then, we extend the work to multihop wireless networks and develop an optimal solution, Enhanced Nearest Forwarding within Radius, which tries to minimize the per-hop expected number of retransmissions in order to save energy. We present a new beacon-less routing protocol called Pizza-Forwarding (PF) without any assumption on the radio environment: neither the radio range nor symmetric radio links nor radio properties (shadowing, etc.) are assumed or restricted. A classical greedy mode is proposed. To overcome the hole problem, packets are forwarded to an optimal node in the two-hop neighborhood following a reactive and optimized neighborhood discovery. In order to save energy lost to idle listening and overhearing, we propose to combine PF's main concepts with an energy-efficient MAC protocol to provide a joint MAC/routing protocol suitable for a real radio environment. Performance results lead us to conclude that PFMAC performs well.
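The greedy mode common to beacon-less geographic protocols such as PF can be sketched as follows (an illustrative model, not the thesis' implementation; coordinates and topology are invented):

```python
import math

def advance(node, neighbor, sink):
    """Progress toward the sink gained by handing the packet to `neighbor`."""
    return math.dist(node, sink) - math.dist(neighbor, sink)

def greedy_next_hop(node, neighbors, sink):
    """Pick the neighbor offering the largest positive advance toward the
    sink; return None when no neighbor is closer (a 'hole', which schemes
    such as PF escape via a two-hop recovery mode)."""
    best = max(neighbors, key=lambda n: advance(node, n, sink), default=None)
    if best is None or advance(node, best, sink) <= 0:
        return None
    return best

node, sink = (0.0, 0.0), (10.0, 0.0)
neighbors = [(1.0, 1.0), (2.0, -0.5), (-1.0, 0.0)]
print(greedy_next_hop(node, neighbors, sink))  # (2.0, -0.5): greatest advance
```

In a beacon-less protocol this choice is made by the candidate relays themselves (e.g., via contention timers), so no hello messages are needed to learn the neighbor set in advance.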
|
378 |
Proposition et vérification formelle de protocoles de communications temps-réel pour les réseaux de capteurs sans fil / Proposition and formal verification of real-time wireless sensor networks protocols
Mouradian, Alexandre 18 November 2013 (has links)
Les RCsF sont des réseaux ad hoc, sans fil, large échelle déployés pour mesurer des paramètres de l'environnement et remonter les informations à un ou plusieurs emplacements (nommés puits). Les éléments qui composent le réseau sont de petits équipements électroniques qui ont de faibles capacités en termes de mémoire et de calcul ; et fonctionnent sur batterie. Ces caractéristiques font que les protocoles développés, dans la littérature scientifique de ces dernières années, visent principalement à auto-organiser le réseau et à réduire la consommation d'énergie. Avec l'apparition d'applications critiques pour les réseaux de capteurs sans fil, de nouveau besoins émergent, comme le respect de bornes temporelles et de fiabilité. En effet, les applications critiques sont des applications dont dépendent des vies humaines ou l'environnement, un mauvais fonctionnement peut donc avoir des conséquences catastrophiques. Nous nous intéressons spécifiquement aux applications de détection d'événements et à la remontée d'alarmes (détection de feu de forêt, d'intrusion, etc), ces applications ont des contraintes temporelles strictes. D'une part, dans la littérature, on trouve peu de protocoles qui permettent d'assurer des délais de bout en bout bornés. Parmi les propositions, on trouve des protocoles qui permettent effectivement de respecter des contraintes temporelles mais qui ne prennent pas en compte les spécificités des RCsF (énergie, large échelle, etc). D'autres propositions prennent en compte ces aspects, mais ne permettent pas de garantir des bornes temporelles. D'autre part, les applications critiques nécessitent un niveau de confiance très élevé, dans ce contexte les tests et simulations ne suffisent pas, il faut être capable de fournir des preuves formelles du respect des spécifications. A notre connaissance cet aspect est très peu étudié pour les RcsF. 
Nos contributions sont donc de deux types : * Nous proposons un protocole de remontée d'alarmes, en temps borné, X-layer (MAC/routage, nommé RTXP) basé sur un système de coordonnées virtuelles originales permettant de discriminer le 2-voisinage. L'exploitation de ces coordonnées permet d'introduire du déterminisme et de construire un gradient visant à contraindre le nombre maximum de sauts depuis toute source vers le puits. Nous proposons par ailleurs un mécanisme d'agrégation temps-réel des alarmes remontées pour lutter contre les tempêtes de détection qui entraînent congestion et collision, et donc limitent la fiabilité du système. * Nous proposons une méthodologie de vérification formelle basée sur les techniques de Model Checking. Cette méthodologie se déroule en trois points, qui visent à modéliser de manière efficace la nature diffusante des réseaux sans fil, vérifier les RCsF en prenant en compte la non-fiabilité du lien radio et permettre le passage à l'échelle de la vérification en mixant Network Calculus et Model Checking. Nous appliquons ensuite cette méthodologie pour vérifier RTXP. / Wireless Sensor Networks (WSNs) are ad hoc wireless large scale networks deployed in order to monitor physical parameters of the environment and report the measurements to one or more nodes of the network (called sinks). The small electronic devices which compose the network have low computing and memory capacities and run on batteries; research in this field has thus focused mostly on self-organization and energy-consumption reduction. Nevertheless, critical applications for WSNs are emerging and require more than those aspects: they have real-time and reliability requirements. Critical applications are applications on which human lives and the environment depend; a failure of a critical application can thus have dramatic consequences. 
We are especially interested in anomaly detection applications (forest fire detection, landslide detection, intrusion detection, etc.), which require bounded end-to-end delays and a high delivery ratio. Few WSN protocols in the literature allow end-to-end delays to be bounded. Among the proposed solutions, some effectively bound the end-to-end delays, but do not take into account the characteristics of WSNs (limited energy, large scale, etc.). Others take those aspects into account, but do not give strict guarantees on the end-to-end delays. Moreover, critical applications require a very high confidence level; simulations and tests are not sufficient in this context, and formal proofs of compliance with the specifications of the application have to be provided. The application of formal methods to WSNs is still an open problem. Our contributions are thus twofold: * We propose a real-time cross-layer protocol for WSNs (named RTXP) based on a virtual coordinate system which allows nodes in a 2-hop neighborhood to be discriminated. Thanks to these coordinates it is possible to introduce determinism in the accesses to the medium and to bound the hop count, which in turn bounds the end-to-end delay. Besides, we propose a real-time aggregation scheme to mitigate the alarm-storm problem, which causes collisions and congestion and thus limits the reliability of the system. * We propose a formal verification methodology based on the Model Checking technique. This methodology is composed of three elements: (1) an efficient modeling of the broadcast nature of wireless networks, (2) a verification technique which takes into account the unreliability of the wireless link, and (3) a verification technique which mixes Network Calculus and Model Checking in order to be both scalable and exhaustive. We apply this methodology to formally verify our proposition, RTXP.
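The gradient idea (bounding the maximum number of hops from any source to the sink) can be illustrated with a hop-count gradient built by flooding from the sink. This is a simplified sketch with an invented topology; RTXP's virtual coordinate system is more elaborate:

```python
from collections import deque

def build_gradient(adjacency, sink):
    """Compute each node's hop count to the sink by breadth-first search
    from the sink. A node that only forwards alarms to neighbors with a
    strictly smaller hop count bounds the number of hops from any source
    to the sink by the node's own gradient value."""
    hops = {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

# A toy 5-node topology (hypothetical, for illustration only).
adjacency = {"sink": ["a", "b"], "a": ["sink", "c"], "b": ["sink", "c"],
             "c": ["a", "b", "d"], "d": ["c"]}
print(build_gradient(adjacency, "sink"))
# {'sink': 0, 'a': 1, 'b': 1, 'c': 2, 'd': 3}
```

Bounding the hop count this way, together with deterministic medium access, is what makes a bounded end-to-end delay possible.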
|
380 |
An Improved Utility Driven Approach Towards K-Anonymity Using Data Constraint Rules
Morton, Stuart Michael 14 August 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / As medical data continues to transition to electronic formats, opportunities arise for researchers to use this microdata to discover patterns and increase knowledge that can improve patient care. Now more than ever, it is critical to protect the identities of the patients contained in these databases. Even after removing obvious “identifier” attributes, such as social security numbers or first and last names, that clearly identify a specific person, it is possible to join “quasi-identifier” attributes from two or more publicly available databases to identify individuals.
K-anonymity is an approach that has been used to ensure that no one individual can be distinguished within a group of at least k individuals. However, the majority of the proposed approaches implementing k-anonymity have focused on improving the efficiency of the algorithms involved; less emphasis has been put towards ensuring the “utility” of the anonymized data from a researcher's perspective. We propose a new data utility measurement, called the research value (RV), which extends existing utility measurements by employing data constraint rules that are designed to improve the effectiveness of queries against the anonymized data.
To anonymize a given raw dataset, two algorithms are proposed that use predefined generalizations provided by the data content expert and their corresponding research values to assess an attribute's data utility as the data is generalized to ensure k-anonymity. In addition, an automated algorithm is presented that uses clustering and the RV to anonymize the dataset. All of the proposed algorithms scale efficiently when the number of attributes in a dataset is large.
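The basic k-anonymity property discussed above can be checked with a short sketch (illustrative only; the attribute names and generalizations below are invented):

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check the basic k-anonymity property: every combination of
    quasi-identifier values must be shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized records (ZIP code truncated, age bucketed).
records = [
    {"zip": "462**", "age": "30-40", "diagnosis": "flu"},
    {"zip": "462**", "age": "30-40", "diagnosis": "asthma"},
    {"zip": "463**", "age": "20-30", "diagnosis": "flu"},
    {"zip": "463**", "age": "20-30", "diagnosis": "diabetes"},
]
print(is_k_anonymous(records, ["zip", "age"], k=2))  # True
print(is_k_anonymous(records, ["zip", "age"], k=3))  # False
```

A utility-driven approach such as the proposed RV would additionally score how much query effectiveness each candidate generalization preserves, not merely whether the k threshold is met.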
|