21

An analysis of disc carving techniques

Mikus, Nicholas A. 03 1900
Approved for public release, distribution is unlimited / Disc carving is an essential element of computer forensic analysis. However, the high cost of commercial solutions, coupled with the lack of open source tools for disc analysis, has become a hindrance to those performing analysis on UNIX computers. In addition, even expensive commercial products offer only a fairly limited ability to "carve" for various files. In this thesis, the open source tool Foremost is modified to address the need for such a carving tool in a UNIX environment. An implementation of various heuristics for recognizing file formats is demonstrated, as well as the ability to provide some file-system-specific support. As a result, a revised version of Foremost is provided and made available as an open source tool to aid analysts in their forensic investigations. / Civilian, Federal Cyber Corps
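Carvers in the Foremost family work by scanning raw bytes for file-type signatures. The following Python fragment is a minimal sketch of that idea (an illustration, not the modified Foremost described above; the size cap is an assumption): it pairs JPEG start/end markers found in a raw image.

```python
# Minimal signature-based carving sketch. Pairs JPEG SOI/EOI markers in
# a raw image and writes out each candidate byte range. Illustrative
# only; real carvers add format heuristics to cut false positives.
JPEG_SOI = b"\xff\xd8\xff"   # start-of-image marker
JPEG_EOI = b"\xff\xd9"       # end-of-image marker
MAX_SIZE = 20 * 1024 * 1024  # assumed cap on a carved file's size

def carve_jpegs(image_path):
    with open(image_path, "rb") as f:
        data = f.read()  # fine for demo images; use mmap for real disks
    count = 0
    pos = data.find(JPEG_SOI)
    while pos != -1:
        end = data.find(JPEG_EOI, pos + len(JPEG_SOI), pos + MAX_SIZE)
        if end != -1:
            with open(f"carved_{count:04d}.jpg", "wb") as out:
                out.write(data[pos:end + len(JPEG_EOI)])
            count += 1
        pos = data.find(JPEG_SOI, pos + 1)
    return count
```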
22

Modell för lösenordsklassning : Utveckling av lösenordsklassificering / Password classification model : Development of password classification

Eriksson, Fredrik January 2017
I dagens samhälle är datorer ett naturligt inslag i vår vardag. För de flesta anses datorn vara ett verktyg för att hjälpa dem genom arbetet såväl som i vardagen. Dock finns det en mörkare sida där personer använder sig utav datorn för att begå brott. Den så kallade IT-relaterade brottsligheten ökar och ökar och enligt Brå:s rapport från 2016 har en ökning på 949 % skett i Sverige mellan 2006 till 2015 enligt den officiella kriminalstatistiken vad gäller brott som har IT-inslag (Andersson, Hedqvist, Ring & Skarp, 2016). För att få fast förövarna krävs det medel för att kunna bevisa att ett brott har begåtts. Ett sätt att göra detta är att gå in i datorn för att leta efter bevis. Om den misstänkte förövaren känner till att det finns möjlighet för denne att komma att bli granskad vad händer då? Möjligheter finns att förövaren försöker göra det så svårt som möjligt att ta sig in datorn. Detta kan då ske genom att kryptera systemet genom att använda sig av en så kallad krypteringsalgoritm för att låsa hårddisken. Denna kryptering kan vara väldigt svår att dekryptera och det kan vara enklare att försöka få tag i det rätta lösenordet istället. Denna studie har till syfte att utveckla en modell för lösenordsklassificering. Genom denna modell kan strategier som används när användare skapar sina lösenord identifieras och klassificeras. Detta bidrar till en ökad kunskap om strategier användare har när de skapar lösenord. Då fulldiskkryptering börjar bli en vanligare metod för att hindra någon obehörig från att ta sig in i systemet finns förhoppningen om att modellen ska kunna användas och utvecklas till att skapa ett ramverk för att underlätta arbetet för forensikerna hos polisen samt andra rättsvårdande myndigheter. Med denna modell kan olika strategier som olika typer av användare använder sig av när de skapar lösenord vara av sådan karaktär att de kan klassificeras in i en egen kategori. Om en sådan klassificering kan göras skulle det underlätta arbetet för IT-forensikerna och påskynda processen med att knäcka lösenord. Studien utförs genom att använda en kvalitativ metod samt validering utav modellen. Genom kvalitativa intervjuer samlas information in som sedan analyseras och används för att utveckla en modell för lösenordsklassificering. Arbetet med att utveckla en modell för lösenordsklassificering har bestått av en iterativ process där återkoppling gjorts gentemot de olika intervjuobjekten. Ett utkast till en modell med grund i befintlig forskning skapades. Utkastet diskuterades sedan med de olika intervjuobjekten, som deltagit i studien, i en iterativ process där modellen uppdaterades och återkopplades mot de olika intervjuobjekten. Validering av modellen har genomförts genom att fånga in riktiga lösenord som läckts ut på Internet och sedan testa dessa lösenord mot modellen för lösenordsklassificering. / In modern society, computers are a fundamental part of our lives. For most people, the computer is a tool used at work as well as at home. Unfortunately, there is a darker side, where people use the computer to commit crimes. So-called IT-related crime keeps rising: according to Brå's 2016 report (Andersson, Hedqvist, Ring & Skarp, 2016), official criminal statistics show that crimes with an IT element increased by 949% in Sweden between 2006 and 2015. To arrest the perpetrators, evidence is needed, and one way to collect it is to enter the computer system and secure proof against the suspect. However, if the suspect realizes that he or she might become the target of an investigation, what happens then? The suspect may try to make it as difficult as possible to enter the computer system, for example by encrypting it with a so-called encryption algorithm that locks down the hard drive. Such encryption can be very difficult to decrypt, and it may be easier to simply try to find the correct password instead. The purpose of this study is to develop a model for password classification. With this model, the strategies users employ when creating their passwords can be identified and categorized, contributing to increased knowledge of those strategies. As full-disk encryption is becoming a more common way of keeping unauthorized persons out of a system, the hope is that the model can be used and developed into a framework that eases the work of forensic examiners at the police and other law-enforcement agencies. The strategies that different types of users employ when creating passwords may be of such a character that they can be classified into categories of their own; if such a classification can be made, it would ease the workload of IT forensic examiners and speed up the process of cracking passwords. The study is conducted using a qualitative method together with a validation of the model. Through qualitative interviews, information is collected, analyzed and used to develop the classification model. Developing the model has been an iterative process with feedback from the interview participants: a draft model grounded in existing research was created, discussed with the participants, and repeatedly updated and fed back to them. The model was then validated by collecting real passwords leaked on the Internet and testing them against the password classification model.
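A password classification model of this kind can be prototyped as a rule-based tagger. The sketch below is hypothetical (the strategy categories and patterns are my assumptions, not the model developed in the thesis), but it shows the shape such a classifier can take and how leaked passwords can be run against it for validation:

```python
import re

# Hypothetical strategy categories, for illustration only; the thesis
# derives its categories from interviews and existing research.
RULES = [
    ("keyboard_walk",    re.compile(r"^(qwerty|asdf|zxcv|12345)", re.I)),
    ("digits_only",      re.compile(r"^\d+$")),
    ("word_plus_digits", re.compile(r"^[A-Za-z]+\d{1,4}$")),  # e.g. summer2017
    ("leetspeak",        re.compile(r"^[A-Za-z]*[4301$@][A-Za-z0-9$@]*$")),
    ("passphrase",       re.compile(r"^(?:[A-Za-z]{3,}[ _-]){1,}[A-Za-z]{3,}$")),
]

def classify(password: str) -> str:
    """Return the first matching strategy category, else 'unclassified'."""
    for name, pattern in RULES:
        if pattern.search(password):
            return name
    return "unclassified"

# Validation in the spirit of the study: run leaked passwords through the
# model and inspect how many fall outside every category.
for pw in ["summer2017", "qwerty123", "p4$$w0rd", "correct horse battery", "7f!Qz"]:
    print(pw, "->", classify(pw))
```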
23

Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations

Marziale, Lodovico 20 December 2009
Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. For an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of tools and techniques currently used in digital forensic investigations does a poor job of supporting these characteristics. Many tools contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, yielding order-of-magnitude improvements in processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
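One recurring efficiency win for carvers is to stream the evidence in large buffered chunks and search all signatures per chunk instead of byte by byte. The sketch below illustrates that access pattern only; it is a simplified illustration under an assumed signature set, not the dissertation's actual Scalpel optimizations.

```python
# Chunked, overlap-aware multi-signature scan of a raw image.
SIGNATURES = {b"\xff\xd8\xff": "jpeg", b"%PDF": "pdf", b"PK\x03\x04": "zip"}
CHUNK = 64 * 1024 * 1024                       # 64 MiB per read
OVERLAP = max(len(s) for s in SIGNATURES) - 1  # catch hits straddling chunks

def scan(image_path):
    hits, offset, tail = [], 0, b""
    with open(image_path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            buf = tail + chunk
            base = offset - len(tail)  # absolute offset of buf[0]
            for sig, ftype in SIGNATURES.items():
                pos = buf.find(sig)
                while pos != -1:
                    hits.append((base + pos, ftype))
                    pos = buf.find(sig, pos + 1)
            tail = buf[-OVERLAP:]
            offset += len(chunk)
    return sorted(set(hits))  # set() drops re-finds in the overlap region
```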
24

Método para ranqueamento e triagem de computadores aplicado à perícia de informática. / Method for computer ranking and triage applied to computer forensics.

Barbosa, Akio Nogueira 08 October 2015
Considerando-se que uma das tarefas mais comuns para um perito judicial que atua na área da informática é procurar vestígios de interesse no conteúdo de dispositivos de armazenamento de dados (DADs), que esses vestígios na maioria das vezes consistem em palavras-chave (PChs) e durante o tempo necessário para realização da duplicação do DAD o perito fica praticamente impossibilitado de interagir com os dados contidos no mesmo, decidiu-se verificar a hipótese de que seja possível na etapa de coleta, realizar simultaneamente à duplicação do DAD a varredura para procurar PCHs em dados brutos (raw data), sem com isso impactar significativamente o tempo de duplicação. O principal objetivo desta tese é propor um método que possibilite identificar os DADs com maior chance de conter vestígios de interesse para uma determinada perícia ao término da etapa de coleta, baseado na quantidade de ocorrências de PCHs encontradas por um mecanismo de varredura que atua no nível de dados brutos. A partir desses resultados é realizada uma triagem dos DADs. Com os resultados da triagem é realizado um processo de ranqueamento, indicando quais DADs deverão ser examinados prioritariamente na etapa de análise. Os resultados dos experimentos mostraram que é possível e viável a aplicação do método sem onerar o tempo de duplicação e com um bom nível de precisão. Em muitos casos, a aplicação do método contribui para a diminuição da quantidade de DADs que devem ser analisados, auxiliando a diminuir o esforço humano necessário. / Considering that one of the most common tasks for a forensic expert working in the information technology area is to look for evidence of interest in the contents of data storage devices (DADs), that in most cases this evidence consists of keywords, and that during the time needed to duplicate a DAD the expert is practically unable to interact with the data it contains, we decided to verify the following hypothesis: it is possible, at the collection stage, to duplicate the DAD and simultaneously scan it for keywords in raw data, without significantly impacting the duplication time. The main objective of this thesis is to propose a method to identify, at the end of the collection stage, the DADs with the greatest chance of containing evidence of interest for a particular examination, based on the number of keyword occurrences found by a scanning mechanism that operates at the raw-data level. Based on these results, a triage of the DADs is performed. With the triage results, a ranking process indicates which DADs should be examined first at the analysis stage. The results of our experiments showed that it is possible and feasible to apply the method without increasing the duplication time and with a good level of accuracy. In many cases, applying the method helps reduce the number of DADs that must be analyzed, reducing the human effort required.
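The hypothesis above translates directly into a single read loop that writes the image, updates an integrity hash, and counts keyword hits in the same pass. A sketch under assumed names, keywords and chunk size (not the thesis's implementation):

```python
import hashlib

KEYWORDS = [b"invoice", b"bitcoin", b"passport"]  # assumed case terms (PChs)
CHUNK = 8 * 1024 * 1024

def _count_hits(buf, kw, limit):
    """Count occurrences of kw starting before index limit."""
    n, pos = 0, buf.find(kw)
    while pos != -1 and pos < limit:
        n += 1
        pos = buf.find(kw, pos + 1)
    return n

def duplicate_and_scan(src_path, img_path):
    """Duplicate a raw device and count keyword hits in the same pass."""
    counts = {kw: 0 for kw in KEYWORDS}
    sha = hashlib.sha256()
    overlap = max(len(kw) for kw in KEYWORDS) - 1
    tail = b""
    with open(src_path, "rb") as src, open(img_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)   # the duplication itself
            sha.update(chunk)
            buf = tail + chunk
            # hits starting in the last `overlap` bytes reappear in the
            # next round's buffer, so count only starts before that cut
            counts = {kw: c + _count_hits(buf, kw, len(buf) - overlap)
                      for kw, c in counts.items()}
            tail = buf[-overlap:] if overlap else b""
    for kw in KEYWORDS:        # hits wholly inside the final tail
        counts[kw] += _count_hits(tail, kw, len(tail))
    return sha.hexdigest(), counts

def rank_dads(results):
    """Triage: results is [(dad_label, counts), ...]; rank by total hits."""
    return sorted(results, key=lambda r: sum(r[1].values()), reverse=True)
```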
26

Forensic analysis of unallocated space

Lei, Zhenxing 01 June 2011
Computer forensics has become an important technology in providing evidence in investigations of computer misuse, attacks against computer systems, and more traditional crimes like money laundering and fraud where digital devices are involved. Investigators frequently perform preliminary analysis at the crime scene on suspects' devices to determine the existence of any inappropriate materials, such as child pornography, and conduct further analysis after the seizure of computers to glean leads or valuable evidence. Hence, it is crucial to design a tool which is portable and can perform efficient instant analysis. Many tools have been developed for this purpose, such as the Computer Online Forensic Evidence Extractor (COFEE), but unfortunately they become ineffective in cases where forensic data has been removed. In this thesis, we design a portable forensic tool which can be used to complement COFEE for preliminary screening, analyzing unallocated disk space. It adopts a space-efficient data structure of fingerprint hash tables for storing the massive forensic data from law enforcement databases in a flash drive, and utilizes hash tree indexing for fast searching. We also apply group testing to identify the fragmentation point of a file and locate the starting cluster of each fragment based on statistics on the gap between fragments. Furthermore, in order to retrieve evidence and clues from unallocated space by recovering deleted files, a file-structure-based carving algorithm for Windows registry hive files is presented, based on their internal structure and unique storage patterns. / UOIT
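Screening unallocated space against a law-enforcement reference set is, at its core, block hashing plus membership tests in a compact structure. A simplified sketch follows (a plain Python set stands in for the thesis's flash-resident fingerprint hash tables and hash-tree index):

```python
import hashlib

BLOCK = 4096  # cluster-sized unit, a common choice for block hashing

def load_reference_hashes(path):
    """Load known-content block hashes (hex, one per line) into a set."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def scan_unallocated(image_path, reference):
    """Yield offsets of blocks whose hash appears in the reference set."""
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(BLOCK)
            if len(block) < BLOCK:
                break
            if hashlib.sha1(block).hexdigest() in reference:
                yield offset
            offset += BLOCK
```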
28

Digital evidence : representation and assurance

Schatz, Bradley Lawrence January 2007
The field of digital forensics is concerned with finding and presenting evidence sourced from digital devices, such as computers and mobile phones. The complexity of such digital evidence is constantly increasing, as is the volume of data which might contain evidence. Current approaches to interpreting and assuring digital evidence rely implicitly on the use of tools and representations made by experts in addressing the concerns of juries and courts. Current forensics tools are best characterised as not easily verifiable, lacking in ease of interoperability, and burdensome on human process. The tool-centric focus of current digital forensics practice impedes access to and transparency of the information represented within digital evidence as much as it assists, by nature of the tight binding between a particular tool and the information that it conveys. We hypothesise that a general and formal representational approach will benefit digital forensics by enabling higher degrees of machine interpretation, facilitating improvements in tool interoperability and validation. Additionally, such an approach will increase human readability. This dissertation summarises research which examines at a fundamental level the nature of digital evidence and digital investigation, in order that improved techniques which address investigation efficiency and assurance of evidence might be identified. The work follows three themes: representation, analysis techniques, and information assurance. The first set of results describes the application of a general-purpose representational formalism towards representing the diverse information implicit in event-based evidence, as well as domain knowledge and investigator hypotheses. This representational approach is used as the foundation of a novel analysis technique which uses a knowledge-based approach to correlate related events into higher-level events, which correspond to situations of forensic interest. The second set of results explores how digital forensic acquisition tools scale and interoperate while assuring evidence quality. An improved architecture is proposed for storing digital evidence, analysis results and investigation documentation in a manner that supports arbitrary composition into a larger corpus of evidence. The final set of results focuses on assuring the reliability of evidence, in particular that timestamps, which are pervasive in digital evidence, can be reliably interpreted to a real-world time. Empirical results are presented which demonstrate how simple assumptions cannot be made about computer clock behaviour. A novel analysis technique for inferring the temporal behaviour of a computer clock is proposed and evaluated.
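The last point, that computer clocks cannot be naively trusted, is often handled by modelling a clock as offset plus drift: fit observed clock readings against a reference timeline, then invert the fit to map evidence timestamps to real-world time. A hypothetical sketch (a single linear model; real clock histories also contain steps and resets that one line cannot capture):

```python
# Fit observed clock value as a linear function of reference time
# (offset + drift), then invert to translate evidence timestamps.
def fit_clock(pairs):
    """pairs: [(reference_unix_time, observed_unix_time), ...]"""
    n = len(pairs)
    mean_r = sum(r for r, _ in pairs) / n
    mean_o = sum(o for _, o in pairs) / n
    cov = sum((r - mean_r) * (o - mean_o) for r, o in pairs)
    var = sum((r - mean_r) ** 2 for r, _ in pairs)
    drift = cov / var                 # seconds of clock per real second
    offset = mean_o - drift * mean_r
    return drift, offset

def to_real_time(observed, drift, offset):
    """Map an observed (evidence) timestamp back to reference time."""
    return (observed - offset) / drift

# Example: a clock running 2 seconds fast per day with a 60 s offset.
obs = [(t, 60 + t * (1 + 2 / 86400)) for t in range(0, 86400 * 5, 3600)]
drift, offset = fit_clock(obs)
print(round(to_real_time(obs[-1][1], drift, offset)), obs[-1][0])
```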
29

Inteligência cibernética e uso de recursos semânticos na detecção de perfis falsos no contexto do Big Data / Cybernetic intelligence and use of semantic resources to detect fake profiles in the context of Big Data

Oliveira, José Antonio Maurilio Milagre de [UNESP] 29 April 2016
O desenvolvimento da Internet transformou o mundo virtual em um repositório infindável de informações. Diariamente, na sociedade da informação, pessoas interagem, capturam e despejam dados nas mais diversas ferramentas de redes sociais e ambientes da Web. Estamos diante do Big Data, uma quantidade inacabável de dados com valor inestimável, porém de difícil tratamento. Não se tem dimensão da quantidade de informação capaz de ser extraída destes grandes repositórios de dados na Web. Um dos grandes desafios atuais na Internet do “Big Data” é lidar com falsidades e perfis falsos em ferramentas sociais, que causam alardes, comoções e danos financeiros significativos em todo o mundo. A inteligência cibernética e computação forense objetivam investigar eventos e constatar informações extraindo dados da rede. Por sua vez, a Ciência da Informação, preocupada com as questões envolvendo a recuperação, tratamento, interpretação e apresentação da informação, dispõe de elementos que quando aplicados neste contexto podem aprimorar processos de coleta e tratamento de grandes volumes de dados, na detecção de perfis falsos. Assim, por meio da presente pesquisa de revisão de literatura, documental e exploratória, buscou-se revisar os estudos internacionais envolvendo a detecção de perfis falsos em redes sociais, investigando técnicas e tecnologias aplicadas e principalmente, suas limitações. Igualmente, apresenta-se no presente trabalho contribuições de áreas da Ciência da Informação e critérios para a construção de ferramentas que se destinem à identificação de perfis falsos, por meio da apresentação de uma proposta de modelo conceitual. Identificou-se, na pesquisa, que a Ciência da Informação pode contribuir com a construção de aplicações e frameworks para que usuários possam identificar e discernir perfis reais de perfis questionáveis, diariamente despejados na Web. / The development of the Internet has transformed the virtual world into an endless repository of information. Every day, in the information society, people interact with, capture and dump data into the most diverse social networking tools and Web environments. We face Big Data: an endless amount of data of inestimable value, but difficult to process. There is no measure of the amount of information that can be extracted from these large Web data repositories. One of the great current challenges of the "Big Data" Internet is dealing with falsehoods and fake profiles in social tools, which cause alarm, upheaval and significant financial damage around the world. Cyber intelligence and computer forensics aim to investigate events and verify information by extracting data from the network. In turn, Information Science, concerned with questions involving the retrieval, processing, interpretation and presentation of information, offers elements that, when applied in this context, can improve the collection and processing of large volumes of data for the detection of fake profiles. Thus, through this literature-review, documentary and exploratory research, international studies on the detection of fake profiles in social networks were reviewed, investigating the techniques and technologies applied and, especially, their limitations. This work also presents contributions from areas of Information Science and criteria for building tools aimed at identifying fake profiles, through the presentation of a proposed conceptual model. The research found that Information Science can contribute to the construction of applications and frameworks with which users can identify and distinguish real profiles from questionable ones, dumped onto the Web daily.
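Conceptual models for fake-profile detection are frequently operationalized as weighted features over profile metadata. The sketch below is purely hypothetical (every feature name and weight is invented for illustration; it is not the conceptual model proposed in this dissertation):

```python
# Hypothetical feature-weight scoring for fake-profile triage.
WEIGHTS = {
    "default_avatar":  0.25,  # no user-supplied photo
    "no_followers":    0.20,
    "high_post_rate":  0.20,  # posts per day above a chosen threshold
    "recent_creation": 0.15,
    "duplicate_bio":   0.20,  # bio text seen on other accounts
}

def fake_score(profile: dict) -> float:
    """Sum the weights of the suspicious features a profile exhibits (0..1)."""
    return sum(w for feat, w in WEIGHTS.items() if profile.get(feat))

profile = {"default_avatar": True, "no_followers": True, "high_post_rate": False}
print(f"suspicion score: {fake_score(profile):.2f}")  # 0.45 in this example
```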
30

Completing the Picture : Fragments and Back Again

Karresand, Martin January 2008
Better methods and tools are needed in the fight against child pornography. This thesis presents a method for file type categorisation of unknown data fragments, a method for reassembly of JPEG fragments, and the requirements put on an artificial JPEG header for viewing reassembled images. To enable empirical evaluation of the methods, a number of tools based on them have been implemented. The file type categorisation method identifies JPEG fragments with a detection rate of 100% and a false positive rate of 0.1%. The method uses three algorithms, Byte Frequency Distribution (BFD), Rate of Change (RoC), and 2-grams, designed for different situations depending on the requirements at hand. The reconnection method correctly reconnects 97% of a Restart (RST) marker-enabled JPEG image fragmented into 4 KiB pieces. When dealing with fragments from several images at once, the method correctly connects 70% of the fragments in the first iteration. Two parameters in a JPEG header are crucial to the quality of the image: the size of the image and its sampling factor (actually factors). The size can be found using brute force, and the sampling factors only take on three different values. Hence it is possible to use an artificial JPEG header to view the full image or parts of it; the only requirement is that the fragments contain RST markers. The results of the evaluations show that it is possible to find, reassemble, and view JPEG image fragments with high certainty.
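The two statistical fingerprints named above, BFD and RoC, are cheap to compute per fragment. A minimal sketch follows (my illustration; the thesis combines these features with per-file-type reference data and thresholds):

```python
def byte_frequency_distribution(fragment: bytes):
    """Relative frequency of each byte value 0..255 in the fragment."""
    counts = [0] * 256
    for b in fragment:
        counts[b] += 1
    n = len(fragment)
    return [c / n for c in counts]

def rate_of_change(fragment: bytes):
    """Mean absolute difference between consecutive byte values."""
    diffs = [abs(fragment[i + 1] - fragment[i]) for i in range(len(fragment) - 1)]
    return sum(diffs) / len(diffs)

# JPEG entropy-coded data tends toward a flat BFD and a high rate of
# change, which is what makes these features discriminative.
sample = bytes(range(256)) * 16
print(rate_of_change(sample))  # about 1.93 for this synthetic ramp
```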
