71

How privacy concerns impact Swedish citizens' willingness to report crimes : A quantitative mobile phone survey

Lindqvist, Gunnar January 2022 (has links)
In today's information technology-driven world, most criminal acts leave digital evidence. In such cases, cooperation from victims through the handover of digital devices such as the mobile phone is a success factor that enables evidence-seeking through digital forensics. Unfortunately, the forensic examination of a victim's device becomes a potential additional negative consequence for the victim, who experiences an invasion of privacy. This privacy invasion can make victims of crime less cooperative and less willing to report crimes, leading to fewer criminals being held accountable for their actions. To address this problem, 400 Swedish adults were surveyed to identify their hypothetical willingness to report certain crimes. The survey examined the impact that a mobile phone handover had on the willingness to report a crime. The findings demonstrated significantly lower willingness to report a crime when a mobile phone was necessary as evidence. However, the data could not confirm privacy concerns as the common underlying cause. The presented results can be used as a reference for further research on attitudes and behaviours regarding the subject.
72

Navigating the Shadows : Overcoming Obstacles Posed by Anti-forensic Tools

Svensson, Jonny, Wouters, Samuel January 2024 (has links)
This thesis explores the landscape of anti-forensic tools (AFTs) and their impact on digital forensic investigations. It begins with a comprehensive review of relevant literature and then delves into the methods employed by AFTs, including data encryption and file manipulation. A survey of popular AFTs, such as GnuPG, AESCrypt, Tails, StegHide, and CCleaner, provides insights into their functionalities and implications for forensic analysis. Results from experimentation, together with discussions on method validation, hardware benchmarks, and the ethical implications of AFT usage, contribute to the analysis. Key findings highlight the effectiveness of AFTs in circumventing forensic analysis, raising ethical concerns regarding their legitimate use. The thesis concludes with reflections on the scope of the study, ethical considerations, and avenues for future research. Through this exploration, the thesis provides valuable insights into the challenges posed by anti-forensic tools in digital investigations and underscores the need for continued research and ethical consideration in this evolving field.
73

Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations

Marziale, Lodovico 20 December 2009 (has links)
Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. In order for an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools used contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. 
To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, yielding order-of-magnitude improvements in processing speed and storage requirements. Finally, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
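The file-carving workload discussed above can be illustrated with a minimal header/footer carver. This is a sketch of the general technique only, not the actual Scalpel implementation or its optimizations; the `max_size` cutoff and the sample buffer are illustrative assumptions.

```python
# Minimal header/footer file carving: scan a raw image for JPEG start/end
# markers and extract the byte ranges between them as candidate files.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image: bytes, max_size: int = 10 * 1024 * 1024):
    """Yield byte spans that look like complete JPEG files."""
    start = 0
    while True:
        h = image.find(JPEG_HEADER, start)
        if h == -1:
            break
        f = image.find(JPEG_FOOTER, h + len(JPEG_HEADER))
        if f == -1 or f - h > max_size:
            # No plausible footer for this header; move past it.
            start = h + 1
            continue
        yield image[h : f + len(JPEG_FOOTER)]
        start = f + len(JPEG_FOOTER)

data = b"junk" + JPEG_HEADER + b"\x00image bytes\x00" + JPEG_FOOTER + b"trailing"
carved = list(carve_jpegs(data))
print(len(carved))  # 1
```

Real carvers add features this sketch omits, such as handling fragmented files and validating the carved structure, which is where the performance work described above matters.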
74

Application of Digital Forensic Science to Electronic Discovery in Civil Litigation

Roux, Brian 15 December 2012 (has links)
Following the 2006 changes to the Federal Rules of Civil Procedure dealing with the role of Electronically Stored Information, digital forensics is becoming necessary to the discovery process in civil litigation. The development of case law interpreting the rule changes since their enactment defines how digital forensics can be applied to the discovery process, the scope of discovery, and the duties imposed on parties. Herein, pertinent cases are examined to determine what trends exist and how they affect the field. These observations buttress case studies involving discovery failures in large corporate contexts, along with insights into the technical reasons those discovery failures occurred and continue to occur. The state of the art in the legal industry for handling Electronically Stored Information is slow, inefficient, and extremely expensive. These failings exacerbate discovery failures by making the discovery process more burdensome than necessary. In addressing this problem, weaknesses of existing approaches are identified, and new tools are presented which cure these defects. By drawing on open source libraries, components, and other support, the presented tools exceed the performance of existing solutions by between one and two orders of magnitude. The transparent standards embodied in the open source movement allow for clearer defensibility of discovery practice sufficiency, whereas existing approaches entail difficult-to-verify closed-source solutions. Legacy industry practices of numbering documents with Bates numbers inhibit efficient parallel and distributed processing of electronic data into paginated forms. 
The failures inherent in legacy numbering systems are identified, and a new system is provided which eliminates these inhibitors while better modeling the nature of electronic data, which does not lend itself to pagination; such non-paginated data includes databases and other file types which are machine readable but not human readable in format. In toto, this dissertation provides a broad treatment of digital forensics applied to electronic discovery, an analysis of current failures in the industry, and a suite of tools which address the weaknesses, problems, and failures identified.
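The parallelism problem with sequential numbering can be sketched as follows. The content-derived identifier scheme shown is a hypothetical illustration of the design trade-off, not the numbering system the dissertation actually proposes.

```python
# Why sequential Bates numbers block parallelism: each identifier depends
# on the document's global position, so workers cannot label documents
# independently. A content-derived identifier has no such dependency.
import hashlib

def bates_numbers(docs, prefix="ABC"):
    # Sequential: ID = position in the global ordering.
    return [f"{prefix}{i:07d}" for i, _ in enumerate(docs, start=1)]

def content_ids(docs):
    # Parallel-friendly: ID derived from the document's own bytes,
    # computable by any worker in isolation.
    return [hashlib.sha256(d).hexdigest()[:12] for d in docs]

docs = [b"contract bytes", b"email bytes", b"database dump bytes"]
print(bates_numbers(docs))  # ['ABC0000001', 'ABC0000002', 'ABC0000003']
```

A content-derived scheme also extends naturally to non-paginated data such as databases, since it does not presuppose a page ordering.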
75

Document Forensics Through Textual Analysis

Belvisi, Nicole Mariah Sharon January 2019 (has links)
This project aims to give a brief overview of the research area called authorship analysis, with a main focus on authorship attribution and its existing methods. The second objective of this project is to test whether one of the main approaches in the field can still be applied successfully to today's new ways of communicating. The study uses multiple stylometric features to establish the authorship of a text, as well as a model based on TF-IDF.
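A minimal sketch of TF-IDF-based authorship attribution follows. The sample texts are illustrative assumptions, and the thesis combines several stylometric features beyond the term weighting shown here.

```python
# Weight each term by term frequency * inverse document frequency, then
# attribute an unknown text to the known author whose sample is closest
# by cosine similarity.
import math
from collections import Counter

def tfidf_vectors(texts):
    docs = [t.lower().split() for t in texts]
    df = Counter(w for d in docs for w in set(d))  # document frequency
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({w: (tf[w] / len(d)) * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

known_a = "i reckon the evidence points elsewhere reckon so"
known_b = "the data clearly indicates a statistical anomaly"
unknown = "i reckon the anomaly points elsewhere"
va, vb, vu = tfidf_vectors([known_a, known_b, unknown])
print("A" if cosine(vu, va) > cosine(vu, vb) else "B")  # A
```

Real attribution work layers in further stylometric signals, such as function-word frequencies and character n-grams, on top of this kind of vector model.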
76

Hanteringen av integritetsperspektiv inom IT-forensik : En kvalitativ intervjustudie med rättsväsendets aktörer / The management of the privacy perspective in digital forensics : A qualitative interview study with the judiciary's actors

Olsson, Andreas January 2019 (has links)
Digital forensics has existed for many years and has grown within criminal investigations as our society becomes increasingly digitised. With this digitisation, the amount of data in IT grows as well. Much of our private lives is stored on phones or computers, for example pictures or personal data. In recent years, privacy has become more important to each individual, and respecting human rights is today a necessity. Digital forensics is, at its core, an infringement of the suspect's privacy, which means that the actors who perform it must be cautious and consider the privacy of the people involved. 
The purpose of this work has been to investigate how the actors in the judicial system manage the privacy perspective during digital forensic investigations, and to form a picture of how this is applied in practice. The actors interviewed in this qualitative study were prosecutors, judges and defence lawyers. The results show that there are many grey areas and that the actors make highly personal assessments and interpretations. The law describes in theory how certain procedures should be carried out, but when applied in practice this becomes considerably more complicated, resulting in the actors holding differing opinions on how the rules should be applied, as well as on who should have access to what and how privacy should be handled.
77

Método para ranqueamento e triagem de computadores aplicado à perícia de informática. / Method for computer ranking and triage applied to computer forensics.

Barbosa, Akio Nogueira 08 October 2015 (has links)
One of the most common tasks for a forensic expert working in information technology is to search the contents of data storage devices (DADs) for traces of interest, which in most cases consist of keywords. During the time required to duplicate a DAD, the expert is practically unable to interact with the data it contains. This thesis therefore examines the hypothesis that it is possible, at the collection stage, to scan for keywords in the raw data simultaneously with the duplication of the DAD, without significantly impacting the duplication time. 
The main objective is to propose a method for identifying, at the end of the collection stage, the DADs most likely to contain traces of interest for a given investigation, based on the number of keyword occurrences found by a scanning mechanism operating at the raw-data level. From these results a triage of the DADs is performed, and from the triage a ranking process indicates which DADs should be examined first at the analysis stage. The experimental results showed that the method can be applied without increasing the duplication time and with a good level of precision. In many cases, applying the method helps reduce the number of DADs that must be analysed, thereby reducing the human effort required.
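The collection-stage idea can be sketched as a combined duplicate-and-scan loop. The keyword list, chunk size, and in-memory streams below are illustrative assumptions, not the thesis's actual implementation.

```python
# Read the source device in chunks; write each chunk to the duplicate
# while counting keyword occurrences in the raw bytes. A tail of
# max(len(keyword)) - 1 bytes is carried over so matches that straddle
# chunk boundaries are still found (and counted exactly once).
import io
from collections import Counter

def duplicate_and_scan(src, dst, keywords, chunk_size=1 << 20):
    hits = Counter()
    overlap = max(len(k) for k in keywords) - 1
    tail = b""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)                # the forensic duplicate
        window = tail + chunk           # catch boundary-crossing matches
        for k in keywords:
            # Subtract matches wholly inside the old tail to avoid
            # counting them twice.
            hits[k] += window.count(k) - tail.count(k)
        tail = window[-overlap:] if overlap else b""
    return hits

src = io.BytesIO(b"....password....secret..pass")
dst = io.BytesIO()
counts = duplicate_and_scan(src, dst, [b"password", b"secret"], chunk_size=8)
print(dict(counts))  # {b'password': 1, b'secret': 1}
```

The per-keyword counts returned by such a scan are exactly the kind of raw-data occurrence totals the proposed triage and ranking steps consume.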
79

Cluster-Slack Retention Characteristics: A Study of the NTFS Filesystem

Blacher, Zak January 2010 (has links)
This paper explores the statistical properties of microfragment recovery techniques used on NTFS filesystems in digital forensics. A microfragment is the remnant file data remaining in the cluster slack after the file has been overwritten. The total amount of cluster slack is related to the size distribution of the overwriting files as well as to the cluster size. Experiments have been performed by varying the size distributions of the overwriting files as well as the cluster sizes of the partition, and the results are compared with existing analytical models. / FIVES
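The cluster-slack relationship the study measures can be sketched directly. The file sizes and cluster sizes below are illustrative, not the paper's experimental parameters.

```python
# A file's slack is the unused remainder of its final cluster, which is
# where remnants of previously deleted data (microfragments) can survive.
# Total slack therefore depends on the cluster size and on the size
# distribution of the overwriting files.
def slack_bytes(file_size: int, cluster_size: int) -> int:
    """Unused bytes in the file's last cluster."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

sizes = [1000, 4096, 5000, 12345]        # overwriting files, in bytes
for cluster in (4096, 32768):
    total = sum(slack_bytes(s, cluster) for s in sizes)
    print(f"cluster={cluster}: total slack {total} bytes")
```

Note how the same files leave far more slack on the larger cluster size, which is why the paper varies both the cluster size and the file-size distribution.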
80

Completing the Picture : Fragments and Back Again

Karresand, Martin January 2008 (has links)
Better methods and tools are needed in the fight against child pornography. This thesis presents a method for file-type categorisation of unknown data fragments, a method for reassembly of JPEG fragments, and the requirements placed on an artificial JPEG header for viewing reassembled images. To enable empirical evaluation of the methods, a number of tools based on them have been implemented. The file-type categorisation method identifies JPEG fragments with a detection rate of 100% and a false positive rate of 0.1%. The method uses three algorithms: Byte Frequency Distribution (BFD), Rate of Change (RoC), and 2-grams. The algorithms are designed for different situations, depending on the requirements at hand. The reconnection method correctly reconnects 97% of a Restart (RST) marker enabled JPEG image fragmented into 4 KiB pieces. When dealing with fragments from several images at once, the method correctly connects 70% of the fragments in the first iteration. Two parameters in a JPEG header are crucial to the quality of the image: the size of the image and its sampling factors. The size can be found by brute force, and the sampling factors only take on three different values. Hence it is possible to use an artificial JPEG header to view all or parts of an image. The only requirement is that the fragments contain RST markers. The results of the evaluations show that it is possible to find, reassemble, and view JPEG image fragments with high certainty.
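The BFD and RoC features can be sketched as follows; the thresholds, model centroids, and training data used in the thesis are not reproduced here, and the sample fragments are illustrative.

```python
# Byte Frequency Distribution (BFD): normalised frequency of each of the
# 256 byte values in a fragment. Rate of Change (RoC): mean absolute
# difference between consecutive byte values. Compressed data such as
# JPEG tends to have a flat BFD and a high RoC; plain text does not.
import random
from collections import Counter

def bfd(fragment: bytes):
    """256-element vector of normalised byte-value frequencies."""
    counts = Counter(fragment)
    n = len(fragment)
    return [counts.get(b, 0) / n for b in range(256)]

def rate_of_change(fragment: bytes):
    """Mean absolute difference between consecutive byte values."""
    diffs = [abs(fragment[i + 1] - fragment[i])
             for i in range(len(fragment) - 1)]
    return sum(diffs) / len(diffs)

text_like = b"plain ascii text has a narrow byte range " * 10
rng = random.Random(0)
random_like = bytes(rng.randrange(256) for _ in range(512))  # high entropy
print(rate_of_change(text_like) < rate_of_change(random_like))  # True
```

A categoriser built on these features compares a fragment's BFD and RoC against per-file-type profiles; the 2-gram algorithm mentioned above extends the same idea to pairs of adjacent bytes.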
