311.
Improving operating systems security: two case studies / Wei, Jinpeng. 14 August 2009 (has links)
Malicious attacks on computer systems attempt to obtain and maintain illicit control over the victim system. To obtain unauthorized access, they often exploit vulnerabilities in the victim system, and to maintain illicit control, they apply various hiding techniques to remain stealthy. In this dissertation, we discuss and present solutions for two classes of security problems: TOCTTOU (time-of-check-to-time-of-use) and K-Queue. TOCTTOU is a vulnerability that can be exploited to obtain unauthorized root access, and K-Queue is a hiding technique that can be used to maintain stealthy control of the victim kernel.
The first security problem is TOCTTOU, a race condition in Unix-style file systems in which an attacker exploits a small timing gap between a system call that checks a condition on a file and a subsequent system call that uses the file in reliance on that check. Our contributions on TOCTTOU include: (1) a model that enumerates the complete set of potential TOCTTOU vulnerabilities; (2) a set of tools that detect TOCTTOU vulnerabilities in Linux applications such as vi, gedit, and rpm; (3) a theoretical as well as an experimental evaluation of security risks, showing that TOCTTOU vulnerabilities can no longer be considered "low risk" given the wide-scale deployment of multiprocessors; (4) an event-driven protection mechanism and its implementation that defend Linux applications against TOCTTOU attacks with low performance overhead.
The second security problem addressed in this dissertation is the kernel queue, or K-Queue, which an attacker can use to achieve continual malicious function execution without persistently changing either kernel code or data; this prevents state-of-the-art kernel integrity monitors such as CFI and SBCFI from detecting such attacks. Building on our successful defense against a concrete instance of K-Queue-driven attacks that abuse the soft timer mechanism, we design and implement a solution for the general class of K-Queue-driven attacks, including: (1) a unified static analysis framework and toolset that automatically generate specifications of legitimate K-Queue requests and the corresponding checker code; (2) a runtime reference monitor that validates K-Queue invariants and guards them against tampering; and (3) a comprehensive experimental evaluation of our static analysis framework and K-Queue checkers.
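The reference-monitor idea in (2) can be sketched in user space. Everything here is a hypothetical illustration, not the dissertation's kernel implementation: the "kernel queue" is modelled as a list of (callback, data) pairs, and the specification of legitimate requests as a set of approved functions generated offline by static analysis.

```python
def legitimate_handler(data):
    # Stand-in for a genuine kernel callback (e.g. a driver's timer handler).
    return ("handled", data)

# In the real system this specification would be produced by static analysis
# of the kernel; here it is simply a hand-written allowlist.
LEGITIMATE_CALLBACKS = {legitimate_handler}

def dispatch(queue):
    """Dispatch queued requests, rejecting any unapproved callback target."""
    results = []
    for callback, data in queue:
        if callback not in LEGITIMATE_CALLBACKS:
            raise RuntimeError("rejected unapproved callback %r" % callback)
        results.append(callback(data))
    return results
```

The point of the check is that a rootkit's injected callback never persists as modified code or data; it only appears as a queue entry, so validating entries at dispatch time is where the invariant must be enforced.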
312.
Forensic framework for honeypot analysis / Fairbanks, Kevin D. 05 April 2010 (has links)
The objective of this research is to evaluate and develop new forensic techniques for use in honeynet environments, addressing areas where anti-forensic techniques defeat current forensic methods. The fields of computer and network security have expanded over time to encompass many complex ideas and algorithms. A student of these fields can easily fall into thinking of preventive measures as their only major thrust, but it is equally important to be able to determine the cause of a security breach; thus the field of computer forensics has grown. In this field, there exist toolkits and methods used to forensically analyze production and honeypot systems, and anti-forensic techniques have been developed to counter them. Honeypots and production systems have several intrinsic differences, and these differences can be exploited to produce honeypot data sources that are not currently available from production systems. This research examines possible honeypot data sources and cultivates novel methods to combat anti-forensic techniques.
In this document, three parts of a forensic framework are presented, developed specifically for honeypot and honeynet environments. The first, TimeKeeper, is an inode preservation methodology that utilizes the Ext3 journal. This is followed by an examination of dentry logging, which is primarily used to map inode numbers to filenames in Ext3. The final component presented is the initial research behind a toolkit for the examination of the recently deployed Ext4 file system. Each chapter includes the necessary background information and an examination of related work, as well as the architecture, design, conceptual prototyping, and results from testing each major framework component.
313.
Dynamic management of quality of service in the Internet / Serban, Rares; Dabbous, Walid. January 2003 (has links)
Doctoral thesis: Computer Science: Nice: 2003. / The thesis is written in English. Bibliography pp. 131-[141]. Abstracts in French and English.
314.
State and file sharing in peer-to-peer systems / Zou, Li. January 2003 (has links) (PDF)
Thesis (Ph. D.)--College of Computing, Georgia Institute of Technology, 2004. Directed by Mostafa H. Ammar. / Vita. Includes bibliographical references (leaves 114-118).
315.
Performance evaluation of high performance parallel I/O / Dhandapani, Mangayarkarasi. January 2003 (links) (PDF)
Thesis (M.S.)--Mississippi State University. Department of Computer Science and Engineering. / Title from title screen. Includes bibliographical references.
316.
A methodology of SPICE simulation to extract SRAM setup and hold timing parameters based on DFF delay degradation / Zhang, Xiaowei. 01 January 2015 (has links)
SRAM is a significant component in high-speed computer design, serving mainly as high-speed storage elements such as register files in microprocessors, or as interfaces such as multi-level caches between high-speed processing elements and low-speed peripherals. One method of designing SRAM is to use a commercial memory compiler; such a compiler can generate designs of different density and speed with single, dual, or multiple ports to suit the design purpose. There are discrepancies in SRAM timing parameters between extracted-layout netlist SPICE simulation and the equation-based Liberty file (.lib) produced by a commercial memory compiler. The compiler takes spec values as input and uses them as starting points to generate the timing tables/matrices in the .lib. Large spec values are given initially to guarantee design correctness, but such values are usually too pessimistic compared with the results of extracted-layout SPICE simulation, which serves as the "golden" reference. In addition, there is no margin information built into the .lib generated by this compiler.
A new methodology is proposed to obtain accurate spec values as inputs to this compiler, generating more realistic matrices in the .lib that benefit SRAM IP integration and timing analysis.
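One way to picture the spec-extraction idea is a bisection over setup time, assuming a wrapper `measure_clk_to_q(setup_ps)` around the SPICE run. The wrapper, the 10% degradation criterion, and the tolerances here are assumptions for illustration, not the thesis's exact procedure.

```python
def extract_setup_time(measure_clk_to_q, lo_ps, hi_ps,
                       degrade_frac=0.10, tol_ps=0.5):
    """Bisect for the smallest setup time whose clock-to-Q delay stays
    within a chosen degradation budget over the nominal delay.

    measure_clk_to_q: callable mapping a data-to-clock setup time (ps) to
    the simulated DFF clock-to-Q delay; assumed monotonically
    non-increasing in setup time.
    """
    nominal = measure_clk_to_q(hi_ps)        # generous setup: nominal delay
    limit = nominal * (1.0 + degrade_frac)   # allowed degraded delay
    while hi_ps - lo_ps > tol_ps:
        mid = 0.5 * (lo_ps + hi_ps)
        if measure_clk_to_q(mid) <= limit:
            hi_ps = mid                      # still within budget: tighten
        else:
            lo_ps = mid                      # too degraded: relax
    return hi_ps
```

Feeding the extracted value back as the compiler's spec input is what replaces the pessimistic hand-picked starting point with a simulation-grounded one.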
317.
The making of famous and glamorous artists: the role of FILE megazine in the work of General Idea / Lamensdorf, Jennie Kathlene. 16 February 2012 (has links)
From 1972 until 1989, the artist trio General Idea produced FILE Megazine. The first eight issues of FILE, published from 1972 to 1975, are the focus of this thesis. They stand apart from the later issues because their covers hijacked the look and iconic logo of Life magazine. The red rectangle with white block letters attracted the attention of Time Inc. and resulted in a lawsuit. Rather than fight the corporate giant, General Idea changed their logo after the autumn 1975 issue. FILE, like many artists' magazines, is typically discussed in idealistic language that privileges the subversive or democratic intentions of the publication while neglecting its significance as a device for the promotion of community and collaboration. I argue that General Idea envisioned FILE as a utopian project intended to produce the world they sought to live in. Authors frequently employ FILE as a tool to discuss General Idea's work, focusing on it as a mirror or archive of a larger project and emphasizing FILE's humorous, bawdy, and irreverent aspects. In this thesis, I situate FILE in terms of its historical, art historical, and theoretical frameworks. I pay particular attention to General Idea's early involvement in the mail art network, FILE's relationship to 1960s and 1970s artists' magazines and magazine art, the contemporaneous social and political climate in Canada, and General Idea's investigation and employment of theoretical frameworks culled from Marshall McLuhan's text The Medium is the Message and Roland Barthes' book Mythologies.
318.
Detecting publication bias in random effects meta-analysis: an empirical comparison of statistical methods / Rendina-Gobioff, Gianna. 01 June 2006 (has links)
Publication bias is one threat to validity that researchers conducting meta-analysis studies confront. Two primary goals of this research were to examine the degree to which publication bias impacts the results of a random effects meta-analysis and to investigate the performance of five statistical methods for detecting publication bias in random effects meta-analysis. Specifically, the difference between the population effect size and the estimated meta-analysis effect size, as well as the difference between the population effect size variance and the meta-analysis effect size variance, provided an indication of the impact of publication bias. In addition, the performance of the five statistical methods for detecting publication bias (Begg rank correlation with sample size, Begg rank correlation with variance, Egger regression, funnel plot regression, and trim-and-fill) was estimated with Type I error rates and statistical power. The overall findings indicate that publication bias notably impacts the meta-analysis effect size and variance estimates. Poor Type I error control was exhibited in many conditions by most of the statistical methods, and even when Type I error rates were adequate, power was small, even with larger samples and greater numbers of studies in the meta-analysis.
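Of the five methods compared, Egger regression has a particularly compact form: regress the standard normal deviate (effect / SE) on precision (1 / SE) and examine whether the intercept differs from zero, since a nonzero intercept suggests funnel-plot asymmetry consistent with small-study (publication) bias. A minimal sketch with plain least squares and hypothetical argument names:

```python
def egger_test(effects, ses):
    """Return (intercept, slope) of Egger's regression.

    effects: per-study effect size estimates
    ses: per-study standard errors
    A substantial intercept (tested against zero in the full method)
    indicates funnel-plot asymmetry.
    """
    z = [e / s for e, s in zip(effects, ses)]   # standardized effects
    prec = [1.0 / s for s in ses]               # precisions
    n = len(z)
    mx = sum(prec) / n
    my = sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

With perfectly symmetric data (every study estimating the same effect), the intercept is zero and the slope recovers the common effect; deviations of the intercept from zero are what the test flags.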
319.
Implementation of second-order finite elements in the GIFTS structural analysis program / Hunten, Keith Atherton. January 1979 (has links)
No description available.
320.
Study of health care services using telematics applications and information systems: analysis, design, and implementation of a web-based electronic health record / Χρήστου, Χαράλαμπος-Σπυρίδων. 22 December 2009 (has links)
Information systems are finding ever-increasing applications in healthcare. One of these is the Electronic Health Record (EHR), which extends the traditional patient medical record in both capabilities and functions. This work addresses this field, specifically the analysis, design, and implementation of an EHR specialized in pulmonary diseases. The application is web-based and built on modern technologies such as PHP and SQL. It includes the management of medical record information as well as an advanced user management system. All aspects of the field of study are also analyzed from a theoretical perspective, together with an effort to situate the work within the current reality in Greece.