11

A basis for intrusion detection in distributed systems using kernel-level data tainting. / Détection d'intrusions dans les systèmes distribués par propagation de teinte au niveau noyau

Hauser, Christophe 19 June 2013 (has links)
Today's information systems, whether corporate networks, online services or government organisations, very often rely on distributed systems involving a set of machines providing internal or external services. The security of such information systems is built at several levels (defence in depth). When such systems are set up, access control, authentication and filtering policies (firewalls, etc.) are put in place to guarantee the security of the information. However, these systems are very often complex and evolve constantly. It then becomes difficult to maintain a flawless security policy over the whole system (assuming that were even possible) and to withstand the attacks to which these services are exposed daily. This is why intrusion detection systems have become necessary and are now part of the set of security tools indispensable to every administrator of systems permanently exposed to potential attacks. Intrusion detection systems fall into two main families, which differ in their analysis method: the scenario-based approach and the behavioural approach. The scenario-based approach is the most common one and is used by well-known intrusion detection systems such as Snort, Prelude and others. It consists in recognising signatures of known attacks in network traffic (for network IDSs) and in sequences of system calls (for host IDSs), i.e. in detecting abnormal system behaviour linked to the presence of attacks. Although a large number of attacks can be detected this way, this approach cannot detect new attacks for which no signature is known. Moreover, modern malware often employs so-called binary morphism techniques in order to evade signature-based detection. The behavioural approach, unlike the signature-based approach, relies on modelling the normal operation of the system. It can thus detect new attacks as well as older ones without resorting to any knowledge base of existing attacks. There are several kinds of behavioural approaches: some models are statistical, others rely on a security policy. This thesis addresses intrusion detection in distributed systems using a behavioural approach based on a security policy, expressed as an information flow policy. Information flows are tracked using a taint marking technique applied to operating system objects, directly at the kernel level. Such approaches also exist at the language level (for instance by instrumenting the Java virtual machine or by modifying application code) and at the architecture level (by emulating the microprocessor to trace information flows between registers, memory pages, etc.), allowing a fine-grained analysis of information flows.
We chose, however, to work at the operating system level in order to meet the following objectives: • detect intrusions at all levels of the system, not only within one or several specific applications; • deploy our system in the presence of native applications whose source code is not necessarily available (which makes instrumenting them very difficult, if not impossible); • use standard, off-the-shelf hardware, since physically modifying microprocessors is very difficult and emulating them has a very significant impact on system performance. / Modern organisations rely intensively on information and communication technology infrastructures. Such infrastructures offer a range of services from simple mail transport agents or blogs to complex e-commerce platforms, banking systems or service hosting, and all of these depend on distributed systems. The security of these systems, with their increasing complexity, is a challenge. Cloud services are replacing traditional infrastructures by providing lower cost alternatives for storage and computational power, but at the risk of relying on third party companies. This risk becomes particularly critical when such services are used to host privileged company information and applications, or customers' private information. Even in the case where companies host their own information and applications, the advent of BYOD (Bring Your Own Device) leads to new security related issues. In response, our research investigated the characterization and detection of malicious activities at the operating system level and in distributed systems composed of multiple hosts and services. We have shown that intrusions in an operating system spawn abnormal information flows, and we developed a model of dynamic information flow tracking, based on taint marking techniques, in order to detect such abnormal behavior. We track information flows between objects of the operating system (such as files, sockets, shared memory, processes, etc.) and network packets flowing between hosts. This approach follows the anomaly detection paradigm. We specify the legal behavior of the system with respect to an information flow policy, by stating how users and programs from groups of hosts are allowed to access or alter each other's information. Illegal information flows are considered as intrusion symptoms. We have implemented this model in the Linux kernel (the source code is available at http://www.blare-ids.org), as a Linux Security Module (LSM), and we used it as the basis for practical demonstrations. The experimental results validated the feasibility of our new intrusion detection principles.
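
As a conceptual illustration of the taint-marking idea described above (and not of the actual Blare/LSM implementation), the short C sketch below gives every object a set of taint labels and propagates them on each information flow, flagging flows that violate the policy; all names and labels are made up for the example.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: each label bit stands for one class of information
 * (e.g. bit 0 = customer database, bit 1 = configuration secrets). */
typedef uint64_t taint_t;

struct object {
    const char *name;    /* file, socket, process ... */
    taint_t     taint;   /* labels currently carried by the object */
    taint_t     allowed; /* labels the policy allows it to hold    */
};

/* Propagate taint on any information flow src -> dst and report a
 * policy violation if dst now carries a label it may not hold. */
static int flow(struct object *src, struct object *dst)
{
    dst->taint |= src->taint;
    if (dst->taint & ~dst->allowed) {
        fprintf(stderr, "illegal flow: %s -> %s\n", src->name, dst->name);
        return -1; /* intrusion symptom */
    }
    return 0;
}

int main(void)
{
    struct object db   = { "customers.db",      0x1, 0x1 };
    struct object proc = { "webapp",            0x0, 0x1 };
    struct object sock = { "tcp:evil.example",  0x0, 0x0 };

    flow(&db, &proc);   /* legal: webapp may read customer data        */
    flow(&proc, &sock); /* illegal: the data would leave to the network */
    return 0;
}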
12

RedirFS - portace na jiné OS / Porting of RedirFS on Other OS

Czerner, Lukáš January 2010 (has links)
This thesis describes the preparation for porting, as well as the porting itself, of the RedirFS Linux kernel module to FreeBSD. Basic differences between the Linux and FreeBSD kernels are described, as well as differences in the implementation of the virtual filesystem, the part crucial for RedirFS. Further, the possibilities and different approaches to implementing RedirFS functionality on FreeBSD are described. The possibilities are then evaluated and the most suitable approach is proposed. The next chapters introduce the required functionality of the new module as well as its solutions. Finally, the implementation details are described so that the reader can understand how the new module works and how the required functionality is implemented in it.
13

Realizace internetové brány na Linuxu s pokročilým filtrováním / Establishment of the Linux internet gateway using advanced filtering

Matocha, Tomáš January 2009 (has links)
The thesis Establishment of the Linux internet gateway using advanced filtering focuses on the installation of the Linux operating system on older computers to serve as a gateway connecting clients in the internal network to the Internet. The thesis describes the creation of an advanced filter using iptables and shows some ways of protecting against attacks from the Internet. The following chapters discuss advanced traffic control mechanisms (such as tc and qdisc). Queueing disciplines are highly beneficial where traffic must be divided hierarchically among users; the thesis describes the queue types and assembles configurations for clients in the internal network. The next chapter describes a caching-only DNS server and the denyhosts application, which increases the overall security of the system. Running a local DNS server pays off, especially when the goal is to reduce data traffic. The last chapter describes a RADIUS server and its implementation using Apache and a MySQL database. Furthermore, the configuration options are described and examples of particular configurations are provided. Finally, a system for authentication through the RADIUS server is presented. The thesis seeks to provide a comprehensive view of security and filtering.
14

Embedded Communication Channel for Node Communication in WDM Networks

Rosén, Anders January 2015 (has links)
An Optical Transport Network is a set of Optical Network Elements (NEs) connected by optical fiber links, able to provide support for optical networking using Wavelength-Division Multiplexing (WDM). In order to introduce link-level applications that require NE-to-NE communication in a packet-optical network, an embedded communication channel is needed. Examples of such applications are dual-ended protection, remote configuration and path trace. By implementing an NE-to-NE communication channel, the exchange of commands and information allows the implementation of applications that increase data link stability in the network. The purpose of this work has been to prove the feasibility of such a channel. This thesis discusses the possibilities of implementing such a channel adjusted to Transmode's layer 1 products without causing disturbance in the regular traffic or affecting any existing embedded communication. It also proves the channel's function in a proof-of-concept manner by demonstrating a simple path trace application running on an implementation of the channel on hardware. The chosen solution is an Embedded Communication Channel driver intended to provide termination points for an Embedded Communication Channel (ECC), supervise the connectivity of the channel and relay messages to applications. This thesis project has been carried out at Infinera Corporation (earlier Transmode Systems AB) during summer/autumn 2015.
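
As a rough illustration of the path trace application mentioned above, the C sketch below compares a received trace identifier against the expected one and reports a mismatch; the fixed-size identifier format and the example values are hypothetical and not taken from the Infinera/Transmode implementation.

#include <stdio.h>
#include <string.h>

#define TRACE_ID_LEN 16  /* hypothetical fixed-size trace identifier */

/* A minimal path trace check: the local NE knows which identifier it
 * expects on an incoming link; a mismatch suggests a misconnection. */
static int check_path_trace(const char expected[TRACE_ID_LEN],
                            const char received[TRACE_ID_LEN])
{
    if (memcmp(expected, received, TRACE_ID_LEN) != 0) {
        fprintf(stderr, "path trace mismatch: expected \"%.*s\", got \"%.*s\"\n",
                TRACE_ID_LEN, expected, TRACE_ID_LEN, received);
        return -1;
    }
    return 0;
}

int main(void)
{
    char expected[TRACE_ID_LEN] = "NE-STOCKHOLM-01";
    char received[TRACE_ID_LEN] = "NE-GOTHENBURG-7";
    return check_path_trace(expected, received) ? 1 : 0;
}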
15

A Case for Protecting Huge Pages from the Kernel

Patel, Naman January 2016 (has links) (PDF)
Modern architectures support multiple page sizes to facilitate applications that use large chunks of contiguous memory, either for buffer allocation, application-specific memory management, in-memory caching or garbage collection. Most general purpose processors support larger page sizes; e.g., the x86 architecture supports 2MB and 1GB pages while the PowerPC architecture supports 64KB, 16MB and 16GB pages. Such larger pages are also known as superpages or huge pages. With the help of huge pages, TLB reach can be increased significantly, and the Linux kernel can use huge pages transparently to bring down the cost of TLB translations. With Transparent Huge Page (THP) support in the Linux kernel, end users or application developers need not make any change to their applications. Memory fragmentation, one of the classical problems in computing systems for decades, is a key obstacle to the allocation of huge pages, and ubiquitous huge page support across architectures makes effective fragmentation management even more critical for modern systems. In the absence of huge pages, applications tend to stress the system TLB for virtual to physical address translation, which adversely affects performance and energy characteristics in long running systems. Since most kernel pages tend to be unmovable, fragmentation created by their misplacement is more problematic and nearly impossible to recover from with memory compaction. In this work, we explore the physical memory manager of Linux and the interaction of kernel page placement with fragmentation avoidance and recovery mechanisms. Our analysis reveals that a random kernel page layout not only thwarts the progress of memory compaction; it can actually induce more fragmentation in the system. To address this problem, we propose a new allocator which takes special care with the placement of kernel pages. We propose a new region which represents a memory area holding kernel as well as user pages. Using this new region we introduce a staged allocator which adapts to the fragmentation level and optimizes kernel page placement. We then introduce Illuminator, which with zero overhead outperforms the default kernel in terms of huge page allocation success rate and compaction overhead per huge page. We also show that huge page allocation is not a one-dimensional problem but a two-fold concern: the fragmentation recovery mechanism may interfere with the page clustering policy of the allocator and worsen fragmentation. Our results show that with effective kernel page placement the mixed page block count is reduced by up to 70%, which allows our system to allocate 3x-4x more huge pages than the default kernel. Using these additional huge pages we show up to 38% improvement in energy consumed and up to 39% reduction in execution time on standard benchmarks.
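
For background on how applications obtain huge pages (independent of the allocator changes proposed in the thesis), the minimal C sketch below maps a 2 MB buffer with MAP_HUGETLB and falls back to madvise(MADV_HUGEPAGE) on a normal mapping when no pre-reserved huge pages are available; the size and fallback strategy are illustrative.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_2MB (2UL * 1024 * 1024)

int main(void)
{
    /* Explicit huge page: needs pre-reserved hugetlbfs pages
     * (e.g. vm.nr_hugepages > 0), otherwise the mapping fails. */
    void *buf = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

    if (buf == MAP_FAILED) {
        /* Fall back to a normal mapping and ask THP to back it
         * with huge pages opportunistically. */
        buf = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        madvise(buf, HUGE_2MB, MADV_HUGEPAGE);
    }

    memset(buf, 0, HUGE_2MB);  /* touch the memory so it is actually backed */
    munmap(buf, HUGE_2MB);
    return 0;
}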
16

Efektivní metoda čtení adresářových položek v souborovém systému Ext4 / An Efficient Way to Allocate and Read Directory Entries in the Ext4 File System

Pazdera, Radek January 2013 (has links)
The aim of this thesis is to increase the performance of sequential directory traversal in the ext4 file system. The HTree data structure, which is currently used to implement directories in ext4, handles random accesses into a directory very well, but it is not optimized for sequential traversal. This thesis provides an analysis of the problem. It first studies the implementation of the ext4 file system and the related subsystems of the Linux kernel. A set of tests was created to evaluate the performance of the current directory index implementation. Based on the results of these tests, a solution was designed and subsequently implemented into the Linux kernel. The thesis concludes with an evaluation of the benefits and a comparison of the performance of the new implementation with other Linux file systems.
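
To make the workload concrete, the sequential directory scan whose performance is at issue is essentially the standard readdir loop below (a minimal user-space sketch; the thesis's own benchmark suite is not reproduced here).

#include <dirent.h>
#include <stdio.h>

/* Walk a directory once in readdir order; with HTree-indexed
 * directories the entries come back in hash order, which is what
 * makes a purely sequential scan (and the per-entry stat() calls
 * that usually follow it) expensive on large directories. */
int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    DIR *dir = opendir(path);
    struct dirent *ent;
    unsigned long count = 0;

    if (!dir) {
        perror("opendir");
        return 1;
    }
    while ((ent = readdir(dir)) != NULL)
        count++;
    closedir(dir);

    printf("%s: %lu entries\n", path, count);
    return 0;
}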
17

ESPGOAL: A Dependency Driven Communication Framework

Schneider, Timo, Eckelmann, Sven 01 June 2011 (has links)
Optimized implementations of blocking and nonblocking collective operations are most important for scalable high-performance applications. Offloading such collective operations into the communication layer can improve performance and asynchronous progression of the operations. However, it is most important that such offloading schemes remain flexible in order to support user-defined (sparse neighbor) collective communications. In this work, we describe an operating system kernel-based architecture for implementing an interpreter for the flexible Group Operation Assembly Language (GOAL) framework to offload collective communications. We describe an optimized scheme to store the schedules that define the collective operations and show an extension to profile the performance of the kernel layer. Our microbenchmarks demonstrate the effectiveness of the approach and we show performance improvements over traditional progression in user-space. We also discuss complications with the design and offloading strategies in general.
Contents:
1 Introduction; 1.1 Related Work
2 The GOAL API; 2.1 API Conventions; 2.2 Basic GOAL Functionality; 2.2.1 Initialization; 2.2.2 Graph Creation; 2.2.3 Adding Operations; 2.2.4 Adding Dependencies; 2.2.5 Scratchpad Buffer; 2.2.6 Schedule Compilation; 2.2.7 Schedule Execution; 2.3 GOAL-Extensions
3 ESP Transport Layer; 3.1 Receive Handling; 3.2 Transfer Management; 3.2.1 Known Problems
4 The Architecture of ESPGOAL; 4.1 Control Flow; 4.1.1 Loading the Kernel Module; 4.1.2 Adding a Communicator; 4.1.3 Starting a Schedule; 4.1.4 Schedule Progression; 4.1.5 Progression by ESP; 4.1.6 Unloading the Kernel Module; 4.2 Data Structures; 4.2.1 Starting a Schedule; 4.2.2 Transfer Management; 4.2.3 Stack Overflow Avoidance; 4.3 Interpreting a GOAL Schedule
5 Implementing Collectives in GOAL; 5.1 Recursive Doubling; 5.2 Bruck's Algorithm; 5.3 Binomial Trees; 5.4 MPI_Barrier; 5.5 MPI_Gather
6 Benchmarks; 6.1 Testbed; 6.2 Interrupt coalescing parameters; 6.3 Benchmarking Point to Point Latency; 6.4 Benchmarking Local Operations; 6.5 Benchmarking Collective Communication Latency; 6.6 Benchmarking Collective Communication Host Overhead; 6.7 Comparing Different Ways to use Ethernet NICs
7 Conclusions and Future Work
8 Acknowledgments
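
The contents above outline the GOAL usage flow: create a graph, add operations, add dependencies between them, compile the schedule and execute it. The small C sketch below models that flow with stub functions; the names are hypothetical placeholders for illustration, not the real GOAL API.

/* Illustrative sketch only: these helper names are hypothetical and not
 * the real GOAL API. They model the documented flow: add operations to a
 * schedule, declare dependencies between them, then let the (offloaded)
 * interpreter start each operation once its dependencies are satisfied. */
#include <stdio.h>

typedef int op_t;
static int next_op;

static op_t add_send(int peer) { printf("op%d: send to rank %d\n", ++next_op, peer); return next_op; }
static op_t add_recv(int peer) { printf("op%d: recv from rank %d\n", ++next_op, peer); return next_op; }
static void depends_on(op_t op, op_t dep) { printf("op%d may start only after op%d\n", op, dep); }

int main(void)
{
    /* Relay step of a binomial tree broadcast: the send to the child
     * must not start before the receive from the parent has completed. */
    op_t rx = add_recv(0);   /* parent rank */
    op_t tx = add_send(2);   /* child rank  */
    depends_on(tx, rx);
    printf("compile the schedule and hand it to the kernel-space interpreter\n");
    return 0;
}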
18

Mechanismus pro upgrade BIOSu v Linuxu / Generic BIOS Update Mechanism for Linux

Mariščák, Igor January 2008 (has links)
This work provides an overview of creating a simple driver for the BIOS flash memory by accessing physical computer memory. Although the BIOS is one of a system's core components, there is no standardized update mechanism. The purpose of the thesis is to create a driver module that takes advantage of the existing MTD subsystem interface, and to design and implement a driver for one specific device for the Linux kernel. It also explains a technique allowing write access to the registers of the flash memory using a configuration file.
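
For context, a flash chip exposed through MTD appears as a character device that user space can query; the sketch below (assuming a device node /dev/mtd0 and the mtd-user header are present) reads the basic flash geometry with the MEMGETINFO ioctl.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int main(void)
{
    struct mtd_info_user info;
    int fd = open("/dev/mtd0", O_RDONLY);  /* example device node */

    if (fd < 0) {
        perror("open /dev/mtd0");
        return 1;
    }
    if (ioctl(fd, MEMGETINFO, &info) < 0) {
        perror("MEMGETINFO");
        close(fd);
        return 1;
    }
    /* Report the total size and erase block size of the flash chip. */
    printf("flash size: %u bytes, erase block: %u bytes\n",
           info.size, info.erasesize);
    close(fd);
    return 0;
}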
19

Computation as Strange Material : Excursions into Critical Accidents

Lagerkvist, Love January 2021 (has links)
Waking up in a world where everyone carries a miniature supercomputer, interaction designers find themselves in their forerunners' dreams. Faced with the reality of planetary-scale computation, we have to confront the task of articulating approaches responsive to this accidental ubiquity of computation. This thesis attempts such a formulation by defining computation as a strange material, a plasticity shaped equally by its technical properties and the mode of production by which it is continuously re-produced. The definition is applied through a methodology of excursions: participatory explorations into two seemingly disparate sites of computation, connected in the ways they manifest a labor of care. First, we visit the social infrastructures that constitute the Linux kernel, examining strange entanglements of programming and care in the world's largest design process. This is followed by a tour into the thorny lands of artificial intelligence, situated in the smart replies of LinkedIn. Here, we investigate the fluctuating border between the artificial and the human with participants performing AI, formulating new Turing tests in the process. These excursions afford an understanding of computation as fundamentally re-produced through interaction, a strange kind of affective work, the understanding of which is crucial if we have the ambition to disarm the critical accidents of our present future.
