The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations.
21

DistroFS: En lösning för distribuerad lagring av filer / DistroFS: A Solution For Distributed File Storage

Hansen, Peter, Norell, Olov January 2007 (has links)
Currently existing distributed hash table (DHT) implementations impose a small storage size limit on data, such as OpenDHT's limit of 1 kByte per value. Is it possible to store files larger than 1 kByte using the DHT technique? Is there a way to protect the data without losing too much performance? Our solution was to develop client and server software that uses the DHT technique to split files and distribute their parts across a cluster of servers. To see whether the software worked as intended, we created a test based on our opening questions. The test shows that it is indeed possible to store large files securely using the DHT technique without losing any significant performance.
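The core idea of splitting a file into DHT-sized chunks can be sketched as follows. This is an illustration only, not the thesis's actual implementation: a plain dict stands in for the DHT, and the 1 kByte chunk size mirrors OpenDHT's per-value limit mentioned above.

```python
import hashlib

CHUNK_SIZE = 1024  # 1 kByte, matching OpenDHT's per-value limit

def store_file(data: bytes, dht: dict) -> list:
    """Split data into 1 kByte chunks and store each under its content hash."""
    keys = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha1(chunk).hexdigest()
        dht[key] = chunk          # in a real system: dht.put(key, chunk)
        keys.append(key)
    return keys                   # the key list acts as the file manifest

def fetch_file(keys: list, dht: dict) -> bytes:
    """Reassemble the file by fetching each chunk in manifest order."""
    return b"".join(dht[k] for k in keys)
```

Because each key is the hash of its chunk, a fetched chunk can also be verified against its key, which hints at how integrity protection comes almost for free with this design.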
22

Využití útoku "Pass the hash attack" na kompromitaci vysoce privilegovaných účtů / Using of the attack "Pass the hash attack" for the compromising of high privileged accounts.

Jakab, Vojtěch January 2014 (has links)
The master's thesis deals with the "pass the hash" attack on high-privileged accounts. The theoretical part discusses how hashes are created and used, followed by a description of authentication in the Windows operating system, pointing out weaknesses in the design of its authentication mechanisms. The last part deals with the attack itself and security options for mitigating its impact. In the practical part, available tools are tested for retrieving hashes from operating system files, together with tools that perform the attack itself. The output of this section is a selection of appropriate tools for demonstrating the attack in a proposed real environment. The final topic covers designing the experimental environment, demonstrating the attack with the possibility of moving through the network, and mitigating the impact of the attack.
23

Detekce anomálií v síťovém provozu / Network Anomaly Detection

Bartoš, Václav January 2011 (has links)
This work studies systems and methods for anomaly detection in computer networks. First, basic categories of network security systems and a number of methods used for anomaly detection are briefly described. The core of the work is an optimization of a method based on detecting changes in the distributions of packet features, originally proposed by Lakhina et al. This method is described in detail and two optimizations of it are proposed: the first focuses on speed and memory efficiency, the second improves its detection capabilities. Next, the software created to test these optimizations is briefly described, and results of experiments on real data with both artificially generated and real anomalies are presented.
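The distribution-based detection the abstract refers to typically summarizes each packet feature (source/destination address, source/destination port) by the entropy of its empirical distribution, and flags sudden changes. A minimal sketch of that building block, with hypothetical example traffic:

```python
import math
from collections import Counter

def entropy(values) -> float:
    """Shannon entropy (bits) of the empirical distribution of a packet feature."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A sudden drop in destination-port entropy can indicate traffic
# concentrating on one service, as in a DDoS; a sudden rise can
# indicate a scan spreading over many ports.
baseline = entropy([80, 443, 80, 22, 443, 80, 8080, 53])  # mixed traffic
attack = entropy([80] * 100 + [443])                      # concentrated traffic
```

A detector would track such entropy values per time bin and raise an alarm when they deviate from the learned baseline.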
24

Improving network performance with a polarization-aware routing approach / Förbättra nätverksprestanda med en polarisationsmedveten routingmetod

Pan, Jingyi January 2023 (has links)
Traffic polarization in networks refers to the phenomenon where traffic tends to concentrate along specific routes or edges when doing multipath routing, leading to imbalanced flow patterns. This spatial distribution of traffic can result in congested and overburdened links, while other routes remain underutilized. Such imbalanced traffic distribution can lead to network bottlenecks, reduced throughput, and compromised Quality of Service for critical applications. These issues emphasize the urgent necessity of addressing traffic polarization and its detrimental impact on network efficiency and resilience. In this master's thesis, we introduce a novel approach to tackle the problem of hash polarization and evaluate the performance of our implementation. Perhaps influenced by RFC 2992, previous works always use the whole value of the hash result to make multipath routing decisions, and therefore try to mitigate the polarization problem by developing more hash functions or reusing them. We instead investigate whether the polarization issue can be solved by utilizing different parts of the hash result. In this case, the most critical problem becomes how to choose the bits of the hash result for the multipath routing decisions. During our experiments, we discovered that the optimal design is influenced by many factors in the network topology and traffic demand pattern, making it difficult to derive a universal rule. Nevertheless, our research proposes a mechanism called "bit-awareness", which significantly alleviates the problem of selecting overlapping bits and hence addresses the polarization issue.
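The bit-selection idea the abstract describes can be sketched as follows. This is an illustration of the general technique, not the thesis's "bit-awareness" mechanism, whose details are not given in the abstract; the hash function and bit offsets are assumptions.

```python
import hashlib

def flow_hash(five_tuple: tuple) -> int:
    """32-bit hash of a flow's five-tuple (same function at every router)."""
    digest = hashlib.md5(repr(five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big")

def next_hop(five_tuple: tuple, n_paths: int, bit_offset: int) -> int:
    """Pick a path using a slice of the hash instead of the whole value.

    If every hop uses the full hash value, all hops make correlated
    choices and traffic polarises onto a few links. Giving each hop a
    different, non-overlapping bit_offset decorrelates the decisions.
    """
    bits_needed = max(1, (n_paths - 1).bit_length())
    slice_ = (flow_hash(five_tuple) >> bit_offset) & ((1 << bits_needed) - 1)
    return slice_ % n_paths
```

The hard question the thesis studies is precisely how each hop should choose its `bit_offset` so the slices do not overlap along any path.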
25

Preserving privacy with user-controlled sharing of verified information

Bauer, David Allen 13 November 2009 (has links)
Personal information, especially certified personal information, can be very valuable to its subject, but it can also be abused by other parties for identity theft, blackmail, fraud, and more. One partial solution to the problem is credentials, whereby personal information is tied to identity, for example by a photo or signature on a physical credential. We present an efficient scheme for large, redactable, digital credentials that allow certified personal attributes to safely be used to provide identification. A novel method is provided for combining credentials, even when they were originally issued by different authorities. Compared to other redactable digital credential schemes, the proposed scheme is approximately two orders of magnitude faster, due to aiming for auditability over anonymity. In order to expand this scheme to hold other records, for example medical records, we present a method for efficient signatures on redactable data where there are dependencies between different pieces of data. Positive results are shown using both artificial datasets and a dataset derived from a Linux package manager. Electronic credentials must of course be held in a physical device with electronic memory. To hedge against the loss or compromise of the physical device holding a user's credentials, the credentials may be split up. An architecture is developed and prototyped for using split-up credentials, with part of the credentials held by a network-attached agent. This architecture is generalized into a framework for running identity agents with various capabilities. Finally, a system for securely sharing medical records is built upon the generalized agent framework. The medical records are optionally stored using the redactable digital credentials, for source verifiability.
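To make "redactable" concrete: the simplest such construction signs a digest over per-attribute hashes, so the holder can later replace any attribute by its hash without invalidating the signature. This sketch is a generic textbook construction, not Bauer's scheme; all names are illustrative.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def credential_digest(attributes: list) -> bytes:
    """Digest over per-attribute hashes; this is what an issuer would sign."""
    return h(b"".join(h(a) for a in attributes))

def redact(attributes: list, keep: set):
    """Reveal kept attributes in the clear; replace the rest by their hashes."""
    return [(i, a) if i in keep else (i, h(a)) for i, a in enumerate(attributes)]

def verify(disclosed, keep: set, digest: bytes) -> bool:
    """Recompute the digest from revealed values and pre-hashed redactions."""
    hashes = [h(v) if i in keep else v for i, v in disclosed]
    return h(b"".join(hashes)) == digest
```

The verifier learns only the revealed attributes, yet can still check that everything it sees was covered by the original signature.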
26

Perfect Hash Families: Constructions and Applications

Kim, Kyung-Mi January 2003 (has links)
Let A and B be finite sets with |A| = n and |B| = m. An (n, m, w)-perfect hash family is a collection F of functions from A to B such that for any X ⊆ A with |X| = w, there exists at least one f ∈ F such that f is one-to-one when restricted to X. Perfect hash families are basic combinatorial structures that have played important roles in computer science, in areas such as database management, operating systems, and compiler construction. Such hash families are used for memory-efficient storage and fast retrieval of items such as reserved words in programming languages, command names in interactive systems, or commonly used words in natural languages. More recently, perfect hash families have found numerous applications in cryptography, for example in broadcast encryption schemes, secret sharing, key distribution patterns, visual cryptography, cover-free families, and secure frameproof codes. In this thesis, we survey constructions and applications of perfect hash families. For constructions, we divide the results into three parts depending on the underlying structure and properties of the constructions: combinatorial structures, linear functionals, and algebraic structures. For applications, we focus on those related to cryptography.
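The definition above translates directly into a brute-force checker, which makes a small concrete example easy to verify. The tiny family below is my own illustration, not one from the thesis.

```python
from itertools import combinations

def is_perfect_hash_family(functions, universe, w: int) -> bool:
    """Check the defining property: for every w-subset X of the universe,
    some function in the family is one-to-one (injective) on X."""
    for X in combinations(universe, w):
        if not any(len({f(x) for x in X}) == w for f in functions):
            return False
    return True

# A (4, 3, 2)-perfect hash family over A = {0, 1, 2, 3}, B = {0, 1, 2}:
# x % 3 separates every pair except {0, 3}, which x // 2 separates.
family = [lambda x: x % 3, lambda x: x // 2]
```

Neither function alone is injective on all pairs, but every pair is separated by at least one of them, which is exactly the perfect hash family property.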
27

Evaluation of Cryptographic Packages

Raheem, Muhammad January 2009 (has links)
The widespread use of computer technology for information handling has resulted in the need for stronger data protection. The use of high-profile cryptographic protocols and algorithms does not necessarily guarantee high security; they need to be used according to the needs of the organization, depending on certain characteristics and available resources. The communication system in a cryptographic environment may become vulnerable to attacks if the cryptographic packages do not meet their intended goals.

This master's thesis is targeted towards the goal of evaluating contemporary cryptographic algorithms and protocols, collectively named cryptographic packages, against the security needs of the organization and the available resources.

The results show that there certainly is a need for careful evaluation of cryptographic packages against available resources; otherwise it could create more severe problems, such as network bottlenecks, information and identity loss, untrustworthy environments, and computational infeasibilities resulting in huge response times. In contrast, choosing the right package with the right security parameters can lead to a secure, high-performance communication environment.
28

Hash Comparison Module for OCFA

Axelsson, Therese, Melani, Daniel January 2010 (has links)
Child abuse content on the Internet is today an increasing problem and difficult to deal with. The techniques used by paedophiles are getting more sophisticated, which means it takes more effort for law enforcement to locate this content.

To help solve this issue, an EU-funded project named FIVES is developing a set of tools to help investigations involving large amounts of image and video material. One of these tools aims to help identify potentially illegal files by hash signatures, using classification information from another project. / FIVES
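Hash-based identification of known files boils down to hashing each file on a suspect drive and matching against a database of hashes of previously classified material. A minimal sketch of that matching step (the hash algorithm and function names are assumptions, not details from the OCFA module):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file in blocks so arbitrarily large files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()

def flag_known_files(paths, known_hashes: set) -> list:
    """Return the paths whose hash appears in the known-content database."""
    return [p for p in paths if sha256_of_file(p) in known_hashes]
```

Because only hashes are compared, investigators never need to view the matched material to flag it, which is part of what makes this approach attractive for such investigations.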
29

An Algorithm for Bootstrapping Communications

Beal, Jacob 13 August 2001 (has links)
I present an algorithm which allows two agents to generate a simple language based only on observations of a shared environment. Vocabulary and roles for the language are learned in linear time. Communication is robust and degrades gradually as complexity increases. Dissimilar modes of experience will lead to a shared kernel vocabulary.
30

Aspects of Metric Spaces in Computation

Skala, Matthew Adam January 2008 (has links)
Metric spaces, which generalise the properties of commonly-encountered physical and abstract spaces into a mathematical framework, frequently occur in computer science applications. Three major kinds of questions about metric spaces are considered here: the intrinsic dimensionality of a distribution, the maximum number of distance permutations, and the difficulty of reverse similarity search. Intrinsic dimensionality measures the tendency for points to be equidistant, which is diagnostic of high-dimensional spaces. Distance permutations describe the order in which a set of fixed sites appears while moving away from a chosen point; the number of distinct permutations determines the amount of storage space required by some kinds of indexing data structure. Reverse similarity search problems are constraint satisfaction problems derived from distance-based index structures. Their difficulty reveals details of the structure of the space. Theoretical and experimental results are given for these three questions in a wide range of metric spaces, with commentary on the consequences for computer science applications and additional related results where appropriate.
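Of the three questions, distance permutations are the easiest to make concrete: for a chosen point, sort the fixed sites by increasing distance and record the resulting order. A small sketch under an assumed one-dimensional metric:

```python
def distance_permutation(point, sites, dist):
    """Order of the fixed sites by increasing distance from the point.

    The number of distinct permutations a space admits bounds the storage
    needed by permutation-based index structures.
    """
    return tuple(sorted(range(len(sites)), key=lambda i: dist(point, sites[i])))

def count_distance_permutations(points, sites, dist) -> int:
    """Number of distinct permutations realised by a sample of points."""
    return len({distance_permutation(p, sites, dist) for p in points})
```

On the real line with two sites, only two of the 2! = 2 possible orders can occur, one on each side of the midpoint; in higher-dimensional spaces the count grows, and bounding it is one of the questions the thesis addresses.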
