1. Cheetah: An Economical Distributed RAM Drive

Tingstrom, Daniel, 20 January 2006
Current hard drive technology shows a widening gap between the capacity to store vast amounts of data and the ability to process it. To overcome the problems of this long-term trend, we explore the use of available distributed RAM resources to effectively replace a mechanical hard drive. The essential approach is a distributed Linux block device that spreads its blocks throughout spare RAM on a cluster and transfers blocks over the network. The presented solution is LAN-scalable, easy to deploy, and faster than a commodity hard drive. The specific driving problem is I/O-intensive applications, particularly digital forensics. The prototype implementation is a Linux 2.4 kernel module and connects to Unix-based clients. It features an adaptive prefetching scheme that fetches future data blocks for each read request. We present experimental results, based on generic benchmarks as well as digital forensic applications, that demonstrate significant performance gains over commodity hard drives.
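
As a rough, user-space illustration of the idea this abstract describes, the sketch below spreads fixed-size blocks across the RAM of hypothetical peer nodes and prefetches likely-next blocks on each read. Every name here is an invention for illustration; the actual prototype is a Linux 2.4 kernel module, not Python.

```python
# Hypothetical user-space illustration of a distributed RAM block
# device: blocks live in the memory of remote peers, and a simple
# sequential-prefetch heuristic pulls likely-next blocks early.
# All names are invented; the real prototype is a kernel module.

BLOCK_SIZE = 4096

class RemotePeer:
    """Stands in for a cluster node donating spare RAM."""
    def __init__(self):
        self.blocks = {}                      # block number -> bytes

    def read(self, blockno):
        return self.blocks.get(blockno, b"\x00" * BLOCK_SIZE)

    def write(self, blockno, data):
        self.blocks[blockno] = data

class DistributedRamDrive:
    def __init__(self, peers, prefetch_depth=2):
        self.peers = peers
        self.prefetch_depth = prefetch_depth
        self.cache = {}                       # locally prefetched blocks

    def _peer_for(self, blockno):
        # Spread blocks across peers round-robin.
        return self.peers[blockno % len(self.peers)]

    def read_block(self, blockno):
        data = self.cache.pop(blockno, None)
        if data is None:
            data = self._peer_for(blockno).read(blockno)
        # Simplified adaptive prefetch: assume sequential access and
        # fetch the next few blocks before they are requested.
        for nxt in range(blockno + 1, blockno + 1 + self.prefetch_depth):
            if nxt not in self.cache:
                self.cache[nxt] = self._peer_for(nxt).read(nxt)
        return data

    def write_block(self, blockno, data):
        self.cache.pop(blockno, None)         # invalidate stale prefetch
        self._peer_for(blockno).write(blockno, data)
```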
2. A Comparison of File Carving Programs and Their Performance under Fragmentation

Kovac, Timmy, January 2012
Today we store large amounts of data on our storage media, which makes them a valuable source of information. If this data is deleted, it does not mean that it is gone forever. The data is still there on the storage medium and can be recovered with so-called file carving. Being able to recover data is an important part of police work, but it also matters for other organisations that have lost data. There are, however, certain problems that can cause trouble when data is to be recovered. When files are not stored in contiguous data blocks they become fragmented, which causes problems. To find out how big the problem really is, three tests were carried out with the commercial software EnCase and the open source programs Foremost, PhotoRec and Scalpel. The results clearly showed that the free program PhotoRec was better than the other programs; perhaps most surprising is that the commercial and expensive program EnCase did not perform particularly well. It was also clear that all of the programs have great difficulty recovering fragmented files, but once again the free PhotoRec performed best.
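
For context, the sketch below shows the basic header/footer carving technique that tools like Foremost and Scalpel build on, using the standard JPEG magic bytes; the file names and size limit are assumptions. The contiguity assumption flagged in the comments is exactly what fragmentation breaks.

```python
# Basic header/footer carving: scan a raw image for known magic bytes
# and cut out everything between a header and its footer. Signatures
# are the standard JPEG markers; file names are assumptions.

JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image: bytes, max_size=10 * 1024 * 1024):
    """Yield candidate JPEG byte ranges found in a raw disk image."""
    pos = 0
    while True:
        start = image.find(JPEG_HEADER, pos)
        if start == -1:
            return
        end = image.find(JPEG_FOOTER, start, start + max_size)
        if end != -1:
            # Only correct if the file is stored contiguously; a
            # fragmented file yields a corrupt carve, which is the
            # problem the tests above measure.
            yield image[start:end + len(JPEG_FOOTER)]
        pos = start + 1

with open("disk.img", "rb") as f:             # hypothetical input image
    for i, jpeg in enumerate(carve_jpegs(f.read())):
        with open(f"carved_{i}.jpg", "wb") as out:
            out.write(jpeg)
```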
3. Reconstructing Textual File Fragments Using Unsupervised Machine Learning Techniques

Roux, Brian, 19 December 2008
This work is an investigation into reconstructing fragmented ASCII files based on content analysis, motivated by a desire to demonstrate machine learning's applicability to Digital Forensics. Using a categorized corpus of Usenet posts, Bulletin Board System messages, and other assorted documents, a series of experiments is conducted using machine learning techniques to train classifiers that are able to identify fragments belonging to the same original file. The primary machine learning method used is the Support Vector Machine, trained on a variety of extracted feature sets. Additional work is done in training committees of SVMs to boost classification power over individual SVMs, as well as in developing a method to tune SVM kernel parameters using a genetic algorithm. Attention is given to the applicability of Information Retrieval techniques to file fragments, as well as to an analysis of textual artifacts which are not present in standard dictionaries.
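
A minimal sketch of the pair-classification idea, assuming scikit-learn and a plain byte-histogram feature: a fragment pair is described by the difference of the two fragments' normalized histograms, and an SVM predicts whether they came from the same file. The thesis uses richer features, committees of SVMs, and genetic-algorithm kernel tuning; nothing below is the author's actual code.

```python
# Hedged sketch: same-file fragment matching as binary classification
# over histogram-difference features, using a single RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

def histogram(fragment: bytes) -> np.ndarray:
    """Normalized frequency of each byte value in the fragment."""
    counts = np.bincount(np.frombuffer(fragment, dtype=np.uint8),
                         minlength=256)
    return counts / max(len(fragment), 1)

def pair_features(a: bytes, b: bytes) -> np.ndarray:
    # A pair is described by how far apart its two histograms are.
    return np.abs(histogram(a) - histogram(b))

# pairs: list of (fragment_a, fragment_b); labels: 1 = same file, 0 = not
def train(pairs, labels):
    X = np.stack([pair_features(a, b) for a, b in pairs])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, labels)
    return clf
```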
4. Forensic Multimedia File Carving

Nadeem Ashraf, Muhammad, January 2013
Distribution of video content over the Internet has increased drastically over the past few years. With technological advancements and the emergence of social media services, video content sharing has grown exponentially. An increasing number of cyber crimes today involve the possession or distribution of illegal video content over the Internet. It is therefore crucial for forensic examiners to be able to recover and analyze illegal video content from seized storage devices. File carving is an advanced forensic technique used to recover deleted content from a storage device even when no file system is present. After a deleted video file is recovered, its contents have to be analyzed manually in order to classify them, which is not only stressful but also very time-consuming. In this thesis we propose a carving approach for streaming multimedia formats that allows forensic examiners to recover the individual frames of a video file as images. The contents of these images can then be classified using existing techniques for the forensic analysis of image sets. A carving tool based on this approach is developed for MPEG-1 video files, and a number of experiments are conducted to evaluate its performance. Each experiment uses an MPEG-1 file with different encoding parameters and consists of 18 runs; with each run the chunk size of the input MPEG-1 file is varied in order to create different amounts of disk fragmentation. For video-only MPEG-1 files, 87.802% of frames are fully recovered when the chunk size is 124 KB, whereas for MPEG-1 files containing both audio and video data, 90.55% of frames are fully recovered when the chunk size is 132 KB.
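
As a hedged sketch of the frame-level carving idea (not the thesis tool itself): pictures in an MPEG-1 video stream are delimited by the picture start code 0x00000100, so scanning raw data for that code yields candidate frame slices even without a file system. Decoding those slices into viewable images, and coping with audio/video multiplexing and fragmentation, is the hard part the thesis addresses.

```python
# Rough sketch: slice an MPEG-1 video stream into per-picture byte
# ranges by locating picture start codes in raw data.

PICTURE_START = b"\x00\x00\x01\x00"   # MPEG-1 picture start code

def carve_frames(data: bytes):
    """Yield raw byte slices that each begin at a picture start code."""
    offsets = []
    pos = data.find(PICTURE_START)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(PICTURE_START, pos + len(PICTURE_START))
    # Each slice runs from one picture start code to the next (or EOF).
    for start, end in zip(offsets, offsets[1:] + [len(data)]):
        yield data[start:end]
```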
5. Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations

Marziale, Lodovico, 20 December 2009
Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. In order for an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of tools and techniques currently used in digital forensic investigations does a poor job of supporting these characteristics. Many tools contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on access to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, creating order-of-magnitude improvements in processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data produced by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations.
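
To illustrate one class of optimization mentioned above, the sketch below parallelizes signature scanning across chunks of a disk image, overlapping chunks by the signature length so boundary matches are not missed. This is only an illustration of the general technique; the dissertation's Scalpel optimizations are in C and go considerably further.

```python
# Illustrative sketch: parallel signature scanning over a disk image.
from concurrent.futures import ThreadPoolExecutor

SIG = b"\xff\xd8\xff"        # JPEG header, as an example signature

def scan_chunk(data: bytes, base: int):
    """Return global offsets of every signature match in one chunk."""
    hits, pos = [], data.find(SIG)
    while pos != -1:
        hits.append(base + pos)
        pos = data.find(SIG, pos + 1)
    return hits

def parallel_scan(image: bytes, chunk=1 << 20, workers=8):
    tasks = []
    with ThreadPoolExecutor(max_workers=workers) as ex:
        for off in range(0, len(image), chunk):
            # Extend each piece by len(SIG) - 1 bytes so a match that
            # straddles a chunk boundary is still seen by one worker.
            piece = image[off:off + chunk + len(SIG) - 1]
            tasks.append(ex.submit(scan_chunk, piece, off))
    return sorted(h for t in tasks for h in t.result())
```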
6. Completing the Picture: Fragments and Back Again

Karresand, Martin, January 2008
Better methods and tools are needed in the fight against child pornography. This thesis presents a method for file type categorisation of unknown data fragments, a method for reassembly of JPEG fragments, and the requirements put on an artificial JPEG header for viewing reassembled images. To enable empirical evaluation of the methods, a number of tools based on them have been implemented.

The file type categorisation method identifies JPEG fragments with a detection rate of 100% and a false positive rate of 0.1%. The method uses three algorithms: Byte Frequency Distribution (BFD), Rate of Change (RoC), and 2-grams. The algorithms are designed for different situations, depending on the requirements at hand.

The reconnection method correctly reconnects 97% of a Restart (RST) marker enabled JPEG image fragmented into 4 KiB pieces. When dealing with fragments from several images at once, the method correctly connects 70% of the fragments at the first iteration.

Two parameters in a JPEG header are crucial to the quality of the image: the size of the image and the sampling factor (actually factors) of the image. The size can be found using brute force, and the sampling factors only take on three different values. Hence it is possible to use an artificial JPEG header to view all or parts of an image. The only requirement is that the fragments contain RST markers.

The results of the evaluations show that it is possible to find, reassemble, and view JPEG image fragments with high certainty.
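
A minimal sketch of two of the features named in this abstract, the Byte Frequency Distribution and the Rate of Change (the distribution of absolute differences between consecutive byte values), assuming NumPy; the simple L1 distance and precomputed centroid below are simplifications of the thesis's actual measure.

```python
# Hedged sketch of BFD and RoC features for fragment categorisation.
import numpy as np

def bfd(fragment: bytes) -> np.ndarray:
    """Byte Frequency Distribution: normalized byte-value histogram."""
    b = np.frombuffer(fragment, dtype=np.uint8)
    return np.bincount(b, minlength=256) / max(len(b), 1)

def roc(fragment: bytes) -> np.ndarray:
    """Rate of Change: distribution of |difference| between
    consecutive byte values."""
    b = np.frombuffer(fragment, dtype=np.uint8).astype(np.int16)
    diffs = np.abs(np.diff(b))
    return np.bincount(diffs, minlength=256) / max(len(diffs), 1)

def distance_to_centroid(fragment: bytes, centroid: np.ndarray) -> float:
    # Simple L1 distance between the fragment's BFD and a centroid
    # built from known fragments of one file type (assumed precomputed).
    return float(np.abs(bfd(fragment) - centroid).sum())
```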
