1 |
A case study of cross-branch porting in Linux Kernel. Hua, Jinru. 23 July 2014.
To meet the requirements of different stakeholders, branches are widely used to maintain multiple product variants simultaneously. For example, the Linux Kernel has a main development branch known as the mainline, 35 stable branches that maintain older product versions, and hundreds of branches for experimental features. To maintain multiple branch-based product variants in parallel, developers often port new features or bug-fixes from one branch to another. In particular, propagating bug-fixes or feature additions to an older version is commonly called backporting. Prior to our study, backporting practices in large-scale projects had not been systematically studied, and this lack of empirical knowledge makes it difficult to improve the current backporting process in industry. We hypothesized that cross-branch porting is frequent, repetitive, and error-prone, and that it requires significant effort for developers to select the patches that need to be backported and then apply them to the target implementation. We carried out two complementary studies to examine this hypothesis.
To investigate the extent and effort of porting practice, this thesis first conducted a quantitative study of backporting activity in the Linux Kernel over eight years of version history, using data from the main branch and the 35 stable branches. Our study showed that backporting happened at a rate of 149 changes per month and that patches took 51 days on average to propagate. 40% of the changes in the stable branches were ported from the mainline, and 64% of ported patches propagated to more than one branch. Of all backporting changes from the mainline to stable branches, 97.5% were applied without any manual modification.
To understand how Linux Kernel developers keep up to date with development activity across branches, we carried out an online survey of engineers who, according to our analysis of the Linux Kernel version history, may have ported code from the mainline to stable branches. We received 14 complete responses. The participants had 12.6 years of Linux development experience on average and were either maintainers or experts of the Linux Kernel. The survey showed that most backporting work was done by maintainers who knew the program well; such experienced maintainers could easily identify the edits that needed to be ported and propagate them together with all relevant changes to keep multiple branches consistent. Inexperienced developers were seldom given an opportunity to backport features or bug-fixes to stable branches.
In summary, based on the version-history study and the online survey, we concluded that cross-branch porting is frequent, periodic, and repetitive. It requires manual effort to selectively identify the changes that need to be ported, to analyze the dependencies of the selected changes, and to apply all required changes to ensure consistency. To avoid omission errors, most backporting work was done only by experienced maintainers who could identify all changes relevant to the change being backported; inexperienced developers were thus effectively excluded from cross-branch porting from the mainline to stable branches in the Linux Kernel.
Our results call for an automated approach to identify the patches that need to be ported, to collect context information that helps developers become aware of relevant changes, and to notify the pertinent developers who may be responsible for the corresponding porting events.
|
2 |
Implementation of TCP Splicing for Proxy Servers on Linux Platform. Wang, Cheng-Sheng. 10 July 2002.
The forwarding delay and throughput of a proxy server play a significant role in overall network performance. It is widely known that the forwarding delay of a proxy's application layer is much larger than that of the lower layers: on a general-purpose operating system, receiving or sending data in the application layer requires moving the data through the TCP/IP stack and across the user/kernel protection boundary.
TCP Splice forwards data directly in the TCP layer without going up to the application layer. This is achieved by rewriting the packet headers of the TCP connection between the original server and the proxy so that it is seamlessly joined to the TCP connection between the proxy and the client.
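To make the header rewriting concrete, here is a minimal user-space sketch of the per-segment translation a TCP splice performs; the `splice_state` structure and its field names are illustrative assumptions, not the thesis's actual module code, and a real in-kernel splice would also recompute the IP and TCP checksums (typically incrementally).
```c
/* Sketch of TCP splice header rewriting; struct names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct tcp_hdr_fields {
    uint32_t saddr, daddr;   /* IPv4 addresses */
    uint16_t sport, dport;   /* TCP ports */
    uint32_t seq, ack;       /* sequence / acknowledgment numbers */
};

struct splice_state {
    /* endpoint identifiers of the proxy-to-client connection */
    uint32_t out_saddr, out_daddr;
    uint16_t out_sport, out_dport;
    /* fixed offsets between the two connections' sequence spaces,
     * computed once from the initial sequence numbers at splice time */
    uint32_t seq_delta, ack_delta;
};

/* Translate a segment arriving on the server-to-proxy connection so it
 * is valid on the proxy-to-client connection. */
static void splice_translate(const struct splice_state *st,
                             struct tcp_hdr_fields *h)
{
    h->saddr = st->out_saddr;
    h->daddr = st->out_daddr;
    h->sport = st->out_sport;
    h->dport = st->out_dport;
    h->seq  += st->seq_delta;   /* unsigned wraparound is intended */
    h->ack  += st->ack_delta;
}

int main(void)
{
    struct splice_state st = { 0x0a000001, 0x0a000002, 8080, 34567,
                               1000, 2000 };
    struct tcp_hdr_fields seg = { 0xc0a80001, 0x0a000001, 80, 8080,
                                  5000, 7000 };
    splice_translate(&st, &seg);
    printf("seq=%u ack=%u\n", seg.seq, seg.ack);  /* prints 6000, 9000 */
    return 0;
}
```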
To preserve the proxy's caching ability, TCP Tap duplicates packets before they are forwarded by TCP Splice. The duplicates are copied into a tap buffer from which the application layer can read. We reuse the original TCP receive queue as the tap buffer, allowing the application layer to read data as usual.
We chose Linux as our experimental platform and implemented TCP Splice and Tap as Linux kernel modules. Finally, we developed an HTTP proxy to test and verify our implementation. The results show that proxy performance improves significantly, with lower forwarding delay, higher throughput, and better CPU utilization.
|
3 |
Rlinks: A Mechanism for Navigating to Related Files. Akarapu, Naveen. 01 January 2007.
This thesis introduces relative links, or rlinks: directed, labeled links from one file to another in a file system. Rlinks provide a clean way to build and share related-file information without creating additional files and directories. Rlinks form overlay graphs between the files of a file system, thus providing useful alternate views of it. This thesis implements rlinks in the Linux kernel and modifies the storage structure of the Ext2 file system to store them.
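As a rough illustration of the idea, the following user-space sketch models rlinks as directed, labeled edges keyed by inode number. The API names (`rlink_add`, `rlink_list`) are hypothetical; the thesis stores rlinks inside Ext2's on-disk structures rather than in a user-space list.
```c
/* Toy model of rlinks: directed, labeled edges between files. */
#include <stdio.h>
#include <stdlib.h>

struct rlink {
    unsigned long from_ino, to_ino;  /* inode numbers of the two files */
    const char *label;               /* relation name, e.g. "header-of" */
    struct rlink *next;
};

static struct rlink *graph;          /* global adjacency list */

static void rlink_add(unsigned long from, unsigned long to, const char *label)
{
    struct rlink *r = malloc(sizeof *r);
    r->from_ino = from;
    r->to_ino = to;
    r->label = label;
    r->next = graph;
    graph = r;
}

/* List all files one rlink away from 'ino': an alternate,
 * relation-based view of the file system. */
static void rlink_list(unsigned long ino)
{
    for (struct rlink *r = graph; r; r = r->next)
        if (r->from_ino == ino)
            printf("%lu -[%s]-> %lu\n", r->from_ino, r->label, r->to_ino);
}

int main(void)
{
    rlink_add(100, 200, "header-of");   /* e.g. foo.c -> foo.h */
    rlink_add(100, 300, "built-by");    /* e.g. foo.c -> Makefile */
    rlink_list(100);
    return 0;
}
```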
|
4 |
Design and implementation of the Mobile Internet Protocol on the Linux kernel to support Internet mobility. Thothadri, Radha. January 1999.
No description available.
|
5 |
Implementation and Evaluation of Proportional Share Scheduler on Linux Kernel 2.6. Srinivasan, Pradeep Kumar. 25 April 2008.
No description available.
|
6 |
Taintx: A System for Protecting Sensitive Documents. Dillon, Patrice. 06 August 2009.
Across the country, members of the workforce are being laid off due to downsizing. Many of these people work for large corporations and have access to important company documents, and several studies suggest that employees take critical information after learning they will be laid off. This poses a threat to a corporation's security, so corporations must ensure that sensitive documents never leave the company. In this study we build a system to assist corporations and system administrators by preventing users from taking sensitive documents. The system helps maintain a level of security that is not only beneficial but a crucial part of managing a corporation and of its ability to compete in an aggressive market.
|
7 |
Componentization in Linux kernel: approach and tools. Fan, Shu-ming. 18 July 2007.
In this thesis, we study a component-based software design for componentizing the Linux kernel. Our goal is to componentize kernel modules and explicitly define the dependency relations among components in the kernel. Componentization can greatly improve the composability, evolvability, extensibility, and testability of a software system, and can thus increase the productivity of software development and reduce the cost of maintenance. On top of the componentized kernel, we developed a suite of tools to facilitate operations on kernel components.
In the component-based design, the basic software unit is a component, and we view any kernel subsystem as a composition of components. To realize this concept, we explicitly create output ports by augmenting the symbol table of a kernel module to record relocation information, i.e., the locations where the module invokes functions exported by other modules. We developed tools to discover the data passed among components so that the dependency relations among components are clearly disclosed. With componentization in place, we can implement hot-swapping, which allows the system structure to be changed dynamically at run time. This makes it possible to test, swap, or re-compose components when part of the system cannot be terminated or removed.
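The relocation records this approach relies on can be listed directly from a module's ELF image. The sketch below walks the `SHT_RELA` sections of a 64-bit `.ko` and prints each site that must be patched to reach a symbol defined outside the module; it illustrates the kind of "output port" information recorded, and is an independent sketch rather than the thesis's actual tool.
```c
/* List external-call relocation sites in a 64-bit kernel module. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s module.ko\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc(size);
    if (fread(buf, 1, size, f) != (size_t)size) { perror("fread"); return 1; }
    fclose(f);

    Elf64_Ehdr *eh = (Elf64_Ehdr *)buf;
    Elf64_Shdr *sh = (Elf64_Shdr *)(buf + eh->e_shoff);

    for (int i = 0; i < eh->e_shnum; i++) {
        if (sh[i].sh_type != SHT_RELA)
            continue;
        /* sh_link of a RELA section names its symbol table, whose
         * own sh_link names the associated string table. */
        Elf64_Rela *rel = (Elf64_Rela *)(buf + sh[i].sh_offset);
        Elf64_Shdr *symtab = &sh[sh[i].sh_link];
        Elf64_Sym *syms = (Elf64_Sym *)(buf + symtab->sh_offset);
        char *strtab = (char *)(buf + sh[symtab->sh_link].sh_offset);
        size_t n = sh[i].sh_size / sizeof(Elf64_Rela);
        for (size_t j = 0; j < n; j++) {
            Elf64_Sym *s = &syms[ELF64_R_SYM(rel[j].r_info)];
            if (s->st_shndx == SHN_UNDEF && s->st_name)
                printf("offset %#llx needs external symbol %s\n",
                       (unsigned long long)rel[j].r_offset,
                       strtab + s->st_name);
        }
    }
    free(buf);
    return 0;
}
```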
The proposed system is implemented on Linux kernel 2.6.17.1. While our componentization introduces no run-time overhead once modules are in action, we evaluated our approach in terms of module loading time, memory consumption, and hot-swapping time. We found that both the loading time and the memory consumption of a componentized module are proportional to the number of relocations in the module, and that the hot-swapping time depends on the position of the swapped symbol in the symbol table. All of this suggests there is still room to improve how we realize componentization in the Linux kernel.
|
8 |
Boost the Reliability of the Linux Kernel: Debugging Kernel Oopses. Guo, Lisong. 18 December 2014.
When a failure occurs in the Linux kernel, the kernel emits an error report called a "kernel oops", summarizing the execution context of the failure. Kernel oopses describe real Linux errors, and thus can help prioritize debugging efforts and motivate the design of tools to improve the reliability of Linux code. Nevertheless, the information is only meaningful if it is representative and can be interpreted correctly. In this thesis, we study a collection of kernel oopses over a period of 8 months from a repository that is maintained by Red Hat. We consider the overall features of the data, the degree to which the data reflects other information about Linux, and the interpretation of features that may be relevant to reliability. We find that the data correlates well with other information about Linux, but that it suffers from duplicate and missing information. We furthermore identify some potential pitfalls in studying features such as the sources of common faults and common failing applications.
Furthermore, a kernel oops provides valuable first-hand information for a Linux kernel maintainer to conduct postmortem debugging, since it logs the state of the Linux kernel at the time of a crash. However, debugging based only on the information in a kernel oops is difficult. To help developers with debugging, we devised a solution to derive the offending line from a kernel oops, i.e., the line of source code that incurs the crash. For this, we propose a novel algorithm based on approximate sequence matching, as used in bioinformatics, to automatically pinpoint the offending line based on information about nearby machine-code instructions found in the kernel oops. Our algorithm achieves 92% accuracy, compared to 26% for the traditional approach of using only the oops instruction pointer. We integrated the solution into a tool named OOPSA, which relieves some of the burden of kernel oops debugging for developers.
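As a sketch of the matching idea, the following applies a Smith-Waterman-style local alignment to score how well the instruction bytes dumped in an oops fit a candidate compiled code sequence. The scoring constants and byte sequences are illustrative assumptions, not taken from OOPSA.
```c
/* Local alignment of oops code bytes against candidate code. */
#include <stdio.h>
#include <string.h>

#define N 64
static int dp[N + 1][N + 1];

static int local_align(const unsigned char *a, int la,
                       const unsigned char *b, int lb)
{
    const int match = 2, mismatch = -1, gap = -2;  /* illustrative scores */
    int best = 0;
    memset(dp, 0, sizeof dp);
    for (int i = 1; i <= la; i++)
        for (int j = 1; j <= lb; j++) {
            int s = dp[i-1][j-1] + (a[i-1] == b[j-1] ? match : mismatch);
            if (dp[i-1][j] + gap > s) s = dp[i-1][j] + gap;
            if (dp[i][j-1] + gap > s) s = dp[i][j-1] + gap;
            if (s < 0) s = 0;          /* local alignment: restart here */
            dp[i][j] = s;
            if (s > best) best = s;
        }
    return best;
}

int main(void)
{
    /* made-up oops code snippet vs. two candidate compiled sequences */
    unsigned char oops[]  = { 0x48, 0x8b, 0x47, 0x08, 0x0f, 0x0b };
    unsigned char cand1[] = { 0x48, 0x8b, 0x47, 0x08, 0x0f, 0x0b, 0xc3 };
    unsigned char cand2[] = { 0x55, 0x48, 0x89, 0xe5, 0x5d, 0xc3 };
    printf("candidate 1 score: %d\n", local_align(oops, 6, cand1, 7));
    printf("candidate 2 score: %d\n", local_align(oops, 6, cand2, 6));
    return 0;
}
```
The candidate with the best alignment score indicates which stretch of compiled code, and hence which source line, most plausibly produced the dumped bytes.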
|
9 |
Exploring Alternative Routes Using Multipath TCP. Brennan, Stephen. 30 August 2017.
No description available.
|
10 |
A basis for intrusion detection in distributed systems using kernel-level data tainting. Hauser, Christophe. 19 June 2013.
Modern organisations rely intensively on information and communication technology infrastructures. Such infrastructures offer a range of services, from simple mail transport agents or blogs to complex e-commerce platforms, banking systems, or service hosting, and all of these depend on distributed systems. The security of these systems, with their increasing complexity, is a challenge. Cloud services are replacing traditional infrastructures by providing lower-cost alternatives for storage and computational power, but at the risk of relying on third-party companies. This risk becomes particularly critical when such services are used to host privileged company information and applications, or customers' private information. Even in the case where companies host their own information and applications, the advent of BYOD (Bring Your Own Device) leads to new security-related issues.
In response, our research investigated the characterization and detection of malicious activities at the operating system level and in distributed systems composed of multiple hosts and services. We have shown that intrusions in an operating system spawn abnormal information flows, and we developed a model of dynamic information flow tracking, based on taint-marking techniques, in order to detect such abnormal behavior. We track information flows between objects of the operating system (such as files, sockets, shared memory, processes, etc.) and network packets flowing between hosts. This approach follows the anomaly detection paradigm. We specify the legal behavior of the system with respect to an information flow policy, by stating how users and programs from groups of hosts are allowed to access or alter each other's information. Illegal information flows are considered intrusion symptoms. We have implemented this model in the Linux kernel as a Linux Security Module (LSM) (the source code is available at http://www.blare-ids.org), and we used it as the basis for practical demonstrations. The experimental results validated the feasibility of our new intrusion detection principles.
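A toy model of the taint-marking principle follows: each object carries a set of taint tags, data transfers propagate tags from source to destination, and a tag arriving where the policy forbids it is reported as an intrusion symptom. The object and policy definitions here are illustrative assumptions, not the actual kernel structures of the Blare implementation.
```c
/* Toy taint propagation with a per-object information flow policy. */
#include <stdio.h>

typedef unsigned long tags_t;   /* one bit per information class */

struct object {
    const char *name;
    tags_t tags;      /* taint currently carried */
    tags_t allowed;   /* tags this object may legally hold (the policy) */
};

/* Model of a data transfer (e.g. a read() followed by a write()):
 * taint flows from src to dst; any tag dst may not hold is flagged. */
static void flow(struct object *src, struct object *dst)
{
    dst->tags |= src->tags;
    tags_t illegal = dst->tags & ~dst->allowed;
    if (illegal)
        printf("ALERT: illegal flow %s -> %s (tags %#lx)\n",
               src->name, dst->name, illegal);
}

int main(void)
{
    struct object secret = { "/etc/shadow", 0x1, 0x1 };
    struct object proc   = { "editor",      0,   0x1 };  /* may hold secrets */
    struct object sock   = { "tcp:remote",  0,   0x0 };  /* may not */

    flow(&secret, &proc);   /* legal: editor is allowed tag 0x1 */
    flow(&proc, &sock);     /* illegal: alert is reported */
    return 0;
}
```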
|