241

Digital forensics - Performing virtual primary memory extraction in cloud environments using VMI

Hjerpe, David, Bengtsson, Henrik January 2018 (has links)
Infrastructure as a Service and memory forensics are two subjects which have recently gained increasing amounts of attention. Combining these topics poses new challenges when performing forensic investigations. Forensics targeting virtual machines in a cloud environment is problematic since the devices are virtual, and memory forensics is a newer branch of forensics that is difficult to perform and not well documented. It is, however, an area of utmost importance since virtual machines may be targets of, or participate in, suspicious activity to the same extent as physical machines. Should such activity require an investigation to be conducted, some data which could be used as evidence may only be found in the primary memory. This thesis aims to further examine memory forensics in cloud environments, expand the academic field of these subjects and help cloud hosting organisations. The objective of this thesis was to study whether Virtual Machine Introspection is a valid technique for acquiring forensic evidence from the virtual primary memory of a virtual machine. Virtual Machine Introspection is a method of monitoring and analysing a guest via the hypervisor. In order to verify whether Virtual Machine Introspection is a valid forensic technique, the first task was to attempt extracting data from the primary memory which had been acquired using Virtual Machine Introspection. Once extracted, the integrity of the data had to be authenticated. This was done by comparing a hash sum of a file located on a guest with a hash sum of the extracted data. The experiment showed that the two hashes were an exact match. Next, the solidity of the extracted data was tested by changing the memory of a guest while acquiring the memory via Virtual Machine Introspection. This showed that the solidity is heavily compromised because the memory acquisition process used was too slow. The final task was to compare Virtual Machine Introspection to acquiring the physical memory of the host. By setting up two virtual machines and examining the primary memory, data from both machines was found, whereas Virtual Machine Introspection only targets one machine, providing an advantage regarding privacy.
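
As an illustration of the integrity check described in this abstract (not code from the thesis itself), a minimal Python sketch of comparing a hash of a file on the guest with a hash of the bytes carved from a VMI memory dump; the file name, dump path and offset are hypothetical.

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Hash a reference file exactly as it exists on the guest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def sha256_of_carved_region(dump_path: str, offset: int, length: int) -> str:
    """Hash the byte range carved from the VMI memory dump that is believed
    to hold the same file contents."""
    digest = hashlib.sha256()
    with open(dump_path, "rb") as dump:
        dump.seek(offset)
        digest.update(dump.read(length))
    return digest.hexdigest()

# Hypothetical paths and offset; the thesis used its own test file and dump.
guest_hash = sha256_of_file("guest_testfile.bin")
carved_hash = sha256_of_carved_region("vmi_memory.dump", offset=0x1A0000, length=4096)
print("integrity verified" if guest_hash == carved_hash else "mismatch: data not intact")
```
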
242

Bias Among Forensic Document Examiners: Still a Need for Procedural Changes

Stoel, Reinoud D., Dror, Itiel E., Miller, Larry S. 02 January 2014 (has links)
In 1984, Miller published the paper Bias among forensic document examiners: A need for procedural changes, with the intent to elicit some concern about the amount of cognitive bias among forensic document examiners. There is a need for the development of procedures regarding how a document examiner can minimize the amount of cognitive bias that may lead to erroneous conclusions by the examiner. Such procedures would serve to demonstrate that a conscientious effort was made by the examiner and the submitting agency to control extraneous variables that could bias the results of the examination. Some 28 years after Miller [1], the forensic sciences are confronted with serious criticism with respect to cognitive bias (e.g. Risinger et al. [2] and the NAS report [3]). It appears that few of Miller's suggestions have been applied in practice. No good general procedures have been implemented for minimizing the risk of cognitive bias in most institutes. In this paper we address the main issues raised in the 1984 paper, and describe the current state of affairs with respect to minimizing cognitive bias in the forensic sciences. There is still a need for procedural changes in the forensic sciences.
243

Combating Data Leakage in the Cloud

Dlamini, Moses Thandokuhle January 2020 (has links)
The growing number of reports on data leakage incidents increasingly erodes the already low consumer confidence in cloud services. Hence, some organisations are still hesitant to fully trust the cloud with their confidential data. Therefore, this study raises a critical and challenging research question: How can we restore the damaged consumer confidence and improve the uptake and security of cloud services? This study makes a plausible attempt at unpacking and answering the research question in order to holistically address the data leakage problem from three fronts, i.e. conflict-aware virtual machine (VM) placement, strong authentication and digital forensic readiness. Consequently, this study investigates, designs and develops an innovative conceptual architecture that integrates conflict-aware VM placement, cutting-edge authentication and digital forensic readiness to strengthen cloud security and address the data leakage problem in the hope of eventually restoring consumer confidence in cloud services. The study proposes and presents a conflict-aware VM placement model. This model uses varying conflict tolerance levels and the constructs of a sphere of conflict and a sphere of non-conflict. These are used to provide the physical separation of VMs belonging to conflicting tenants that share the same cloud infrastructure. The model assists the cloud service provider in making informed VM placement decisions that factor in their tenants’ security profile and balance it against the relevant cost constraints and risk appetite. The study also proposes and presents a strong risk-based multi-factor authentication mechanism that scales up and down, based on threat levels or risks posed on the system. This ensures that users are authenticated using the right combination of access credentials according to the risk they pose. This also ensures end-to-end security of authentication data, both at rest and in transit, using an innovative cryptography system and steganography. Furthermore, the study proposes and presents a three-tier digital forensic process model that proactively collects and preserves digital evidence in anticipation of a lawsuit or policy breach investigation. This model aims to reduce the time it takes to conduct an investigation in the cloud. Moreover, the three-tier digital forensic readiness process model collects all user activity in a forensically sound manner and notifies investigators of potential security incidents before they occur. The current study also evaluates the effectiveness and efficiency of the proposed solution in addressing the data leakage problem. The results of the conflict-aware VM placement model are derived from simulated and real cloud environments. In both cases, the results show that the conflict-aware VM placement model is well suited to provide the necessary physical isolation of VM instances that belong to conflicting tenants in order to prevent data leakage threats. However, this comes with a performance cost in the sense that higher conflict tolerance levels on bigger VMs take more time to be placed, compared to smaller VM instances with low conflict tolerance levels. From the risk-based multi-factor authentication point of view, the results reflect that the proposed solution is effective and to a certain extent also efficient in preventing unauthorised users, armed with legitimate credentials, from gaining access to systems that they are not authorised to access.
The results also demonstrate the uniqueness of the approach in that even minor deviations from the norm are correctly classified as anomalies. Lastly, the results reflect that the proposed three-tier digital forensic readiness process model is effective in the collection and storage of potential digital evidence. This is done in a forensically sound manner and stands to significantly improve the turnaround time of a digital forensic investigation process. Although the classification of incidents may not be perfect, this can be improved with time and is considered part of the future work suggested by the researcher. / Thesis (PhD)--University of Pretoria, 2020. / Computer Science / PhD / Unrestricted
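
As a rough illustration of the physical-separation idea behind conflict-aware VM placement (a simplification with hypothetical tenants, not the thesis's actual model with conflict tolerance levels and spheres of conflict), a short Python sketch:

```python
from typing import Dict, List, Set

def place_vm(tenant: str,
             conflicts: Dict[str, Set[str]],
             hosts: Dict[str, List[str]]) -> str:
    """Place a VM for `tenant` on the first host carrying no VM of a
    conflicting tenant; spin up a new host otherwise."""
    for host, tenants_on_host in hosts.items():
        if not any(other in conflicts.get(tenant, set()) for other in tenants_on_host):
            tenants_on_host.append(tenant)
            return host
    new_host = f"host-{len(hosts) + 1}"
    hosts[new_host] = [tenant]
    return new_host

# Hypothetical scenario: two competing banks must never share physical hardware.
conflicts = {"bankA": {"bankB"}, "bankB": {"bankA"}}
hosts = {"host-1": ["bankA"]}
print(place_vm("bankB", conflicts, hosts))     # -> host-2 (physically separated)
print(place_vm("retailer", conflicts, hosts))  # -> host-1 (no conflict)
```
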
244

Optimization of the forensic identification of blood using surface-enhanced Raman spectroscopy

Shaine, Miranda L. 22 August 2020 (has links)
Blood is considered one of the most important types of forensic evidence found at a crime scene. The use of surface-enhanced Raman spectroscopy (SERS) provides a potentially non-destructive and highly sensitive technique for the confirmation of blood, and this method can be applied using a portable Raman device with quick sample preparation and processing. Crime scenes are inherently complex, and SERS analysis offers ease of use and practical applicability for in-field sample analysis. SERS is one of the few confirmatory techniques employed for the identification of blood at a crime scene or in the forensic laboratory. This method is able to distinguish between blood and other body fluids by collecting a SERS spectrum from a sample placed on a surface that has been embedded with gold nanoparticles (AuNPs). The AuNPs create an electric field surface enhancement that produces an intense molecular vibrational signal, leading to a SERS enhancement. The SERS enhancement allowed for sensitive blood detection at dilutions greater than 1:10,000. A stain transfer method to the SERS substrate was optimized by extracting dried bloodstains with water, saline, and various acid solutions. A fifty percent aqueous acetic acid solution was found to be the most efficient in retaining the blood components and releasing the hemoglobin component of blood for detection. The SERS spectrum of blood is a robust signature of hemoglobin that does not significantly change between donors or over time. Characteristic peaks for the identification of blood appear at 754, 1513, and 1543 cm⁻¹, attributed to a pyrrole ring breathing mode (ν15) and two Cβ-Cβ stretches (ν11, ν38), respectively. These key SERS peaks, high sensitivity, and signal enhancement are favorable when compared to normal Raman spectroscopy. A quick and easy-to-use procedure for on-site sample analysis for the detection of blood on different substrates was developed and applied on a portable Raman device. Various nonporous and porous substrates including glass, ceramic tile, cotton, denim, fleece, nylon, acetate, wool, polyester, wood, and coated wood yielded strong results for identification of bloodstains. In addition, different commercial and in-house SERS substrates were tested to determine effectiveness for the detection and identification of blood. SERS identification of blood for forensic work is a potentially non-destructive and portable tool that can be applied for quick and easy examination of evidence at a crime scene. The high sensitivity and selectivity of SERS provide a robust spectroscopic signature that aids in the confirmation of blood, even when it is not visible to the naked eye. It is a more favorable method when compared to current presumptive and confirmatory tests for blood and can be applied to stains on different SERS substrates and a variety of sample surfaces for universal testing.
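
As an illustration only, a minimal Python sketch of checking a measured spectrum for the characteristic bands quoted above; the peak positions come from the abstract, while the tolerance, intensity threshold and synthetic spectrum are assumptions:

```python
import numpy as np

BLOOD_PEAKS_CM1 = (754, 1513, 1543)  # characteristic hemoglobin bands cited in the abstract

def has_blood_signature(wavenumbers: np.ndarray,
                        intensities: np.ndarray,
                        tolerance: float = 5.0,
                        min_rel_intensity: float = 0.3) -> bool:
    """Return True if every characteristic band shows intensity within
    `tolerance` cm^-1 reaching `min_rel_intensity` of the spectrum maximum."""
    peak_floor = min_rel_intensity * intensities.max()
    for band in BLOOD_PEAKS_CM1:
        window = (wavenumbers > band - tolerance) & (wavenumbers < band + tolerance)
        if not window.any() or intensities[window].max() < peak_floor:
            return False
    return True

# Illustrative synthetic spectrum with Gaussian bands at the three positions.
wn = np.linspace(400, 1800, 1401)
spectrum = sum(np.exp(-((wn - c) ** 2) / 50) for c in BLOOD_PEAKS_CM1)
print(has_blood_signature(wn, spectrum))  # True for this synthetic example
```
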
245

Characteristics and practices of forensics programs in Oregon secondary schools

Sylvester, Gregg T. 01 January 1981 (has links)
Since 1943, six studies have been made of speech education in Oregon public schools. Several make reference to forensics, but none discuss this aspect of speech education in depth. As a result, the role of forensics in the schools has been assumed or denied. With the educational situation as it is, however, it is necessary that we have a greater understanding of the relationship between forensics and general speech education and language arts education.
246

Elitism revisited : a survey of diversity in college-level forensics programs

Valdivia, Cynthia L. 01 January 1997 (has links)
The American demographic landscape is no longer a homogeneous melting pot where all colors and flavors blend into indistinct variants. The challenges brought about by such a societal shift have made diversity issues increasingly important. Chief among them is the issue of organizational diversity. Although there has been an increase in organizational diversity research, there is a noted lack of organizational diversity research in the area of college-level forensics programs. This study seeks to fill this void. Specifically, the purpose of the study was to describe diversity levels in college and university forensics programs, and to compare current levels with those of five years past. Survey questionnaires were completed by almost 200 college and university coaches in AFA, CEDA, and Phi Rho Pi. The results of the survey show that no significant increase in diversity levels has occurred since Swanson's indictment of elitism in 1989. Forensics continues to have an overwhelming white majority of coaches and competitors; two-thirds of all programs indicate that no effort has been made to increase diversity. These results suggest forensics may be in a state of stasis, one inconsistent with its evolving environment.
247

Peering into the Dark : A Dark Web Digital Forensic Investigation on Windows 11

Kahlqvist, Johanna, Wilke, Frida January 2023 (has links)
The ability to access the Internet while remaining anonymous is a necessity in today's society. Whistleblowers need it to establish contact with journalists, and individuals living under repressive regimes need it to access essential resources. Anonymity also allows malicious actors to evade identification from law enforcement and share ill-intentioned resources. Therefore, digital forensics is an area that needs to stay up to date with these developments. We investigate what artefacts can be discovered by conducting acquisition and analysis of a Windows 11 computer that has used the Tor browser to browse the Dark Web. Our results identify a variety of artefacts acquired from Windows Registry, active memory, storage, and network traffic. Furthermore, we discuss how these can be used in a digital forensic investigation.
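
As a hypothetical illustration of one step such an investigation might automate (not taken from the thesis), a Python sketch that scans an acquired memory or disk image for Tor-related indicator strings; the keyword list and image path are assumptions, and real casework would rely on dedicated forensic tooling:

```python
KEYWORDS = [b"tor.exe", b"torrc", b"Tor Browser", b".onion"]  # illustrative indicators only

def find_keyword_hits(image_path, keywords=KEYWORDS, chunk_size=1 << 20):
    """Stream through an acquired image and record the byte offsets at which
    indicator strings occur; chunks overlap so hits on chunk boundaries are kept."""
    hits = set()
    overlap = max(len(k) for k in keywords) - 1
    offset = 0          # absolute offset of the chunk just read
    tail = b""          # carried-over bytes from the previous chunk
    with open(image_path, "rb") as img:
        while True:
            chunk = img.read(chunk_size)
            if not chunk:
                break
            data = tail + chunk
            base = offset - len(tail)   # absolute offset of data[0]
            for kw in keywords:
                start = 0
                while (pos := data.find(kw, start)) != -1:
                    hits.add((base + pos, kw.decode()))
                    start = pos + 1
            tail = data[-overlap:]
            offset += len(chunk)
    return sorted(hits)

# Hypothetical usage against an acquired image:
# for off, kw in find_keyword_hits("memory_capture.raw"):
#     print(f"{kw!r} at offset {off:#x}")
```
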
248

Examining Significant Differences of Gunshot Residue Patterns Using Same Make and Model of Firearms in Forensic Distance Determination Tests.

Lewey, Heather 15 December 2007 (has links) (PDF)
In many cases of crimes involving a firearm, police investigators need to know how far the firearm was held from the victim when it was discharged. Knowing this distance allows vital questions regarding the reconstruction of the crime scene to be answered. Often, the original firearm used in the commission of a suspected crime is not available for testing or is damaged. Crime laboratories require the original firearm in order to conduct distance determination tests. However, no empirical research has ever been conducted to determine whether firearms of the same make and model produce different results in distance determination testing. It was the purpose of this study to determine if there are significant differences between the same make and model of firearms in distance determination testing. The findings indicate no significant differences; furthermore, they imply that if the original firearm is not available, another firearm of the same make and model may be used.
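
The abstract does not state which statistical test was used; purely as an illustration, a Python sketch of how a two-sample test might compare a simple pattern measurement (here, hypothetical GSR spread diameters at a fixed distance) from two firearms of the same make and model, assuming SciPy is available:

```python
from scipy import stats

# Hypothetical spread diameters (cm) of GSR patterns at a fixed muzzle-to-target
# distance, measured for two different firearms of the same make and model.
firearm_a = [11.8, 12.3, 11.5, 12.0, 11.9, 12.4, 11.7, 12.1]
firearm_b = [12.0, 11.9, 12.2, 11.6, 12.3, 11.8, 12.0, 12.2]

t_stat, p_value = stats.ttest_ind(firearm_a, firearm_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference detected between the two firearms' patterns.")
else:
    print("Significant difference detected.")
```
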
249

A System for Automatic Information Extraction from Log Files

Chhabra, Anubhav 15 August 2022 (has links)
The development of technology, data-driven systems and applications is constantly revolutionizing our lives. We are surrounded by digitized systems/solutions that are transforming and making our lives easier. The criticality and complexity behind these systems are immense. To meet user satisfaction and keep up with business needs, these digital systems should offer high availability and minimal downtime, and mitigate cyber attacks. Hence, system monitoring becomes an integral part of the lifecycle of a digital product/system. System monitoring often includes monitoring and analyzing the logs output by a system, which contain information about the events occurring within it. The first step in log analysis generally includes understanding and segregating the various logical components within a log line, termed log parsing. Traditional log parsers use regular expressions and human-defined grammar to extract information from logs. Human experts are required to create, maintain and update the database containing these regular expressions and rules. They should keep up with the pace at which new products, applications and systems are being developed and deployed, as each unique application/system would have its own set of logs and logging standards. Logs from new sources tend to break the existing systems as none of the expressions match the signature of the incoming logs. The reasons mentioned above make traditional log parsers time-consuming, hard to maintain, prone to errors, and not a scalable approach. On the other hand, machine learning based methodologies can help us develop solutions that automate the log parsing process without much intervention from human experts. NERLogParser is one such solution that uses a Bidirectional Long Short Term Memory (BiLSTM) architecture to frame the log parsing problem as a Named Entity Recognition (NER) problem. There have been recent advancements in the Natural Language Processing (NLP) domain with the introduction of architectures like the Transformer and Bidirectional Encoder Representations from Transformers (BERT). However, these techniques have not been applied to tackle the problem of information extraction from log files. This gives us a clear research gap to experiment with the recent advanced deep learning architectures. This thesis extensively compares different machine learning based log parsing approaches that frame the log parsing problem as a NER problem. We compare 14 different approaches, including three traditional word-based methods: Naive Bayes, Perceptron and Stochastic Gradient Descent; a graphical model: Conditional Random Fields (CRF); a pre-trained sequence-to-sequence model for log parsing: NERLogParser; an attention-based sequence-to-sequence model: Transformer Neural Network; three different neural language models: BERT, RoBERTa and DistilBERT; two traditional ensembles and three different cascading classifiers formed using the individual classifiers mentioned above. We evaluate the NER approaches using an evaluation framework that offers four different evaluation schemes that not only help in comparing the NER approaches but also help us assess the quality of the extracted information. The primary goal of this research is to evaluate the NER approaches on logs from new and unseen sources. To the best of our knowledge, no study in the literature evaluates the NER methodologies in such a context.
Evaluating NER approaches on unseen logs helps us understand the robustness and the generalization capabilities of the various methodologies. To carry out the experimentation, we use In-Scope and Out-of-Scope datasets. The two datasets originate from entirely different sources and are mutually exclusive. The In-Scope dataset is used for training, validation and testing purposes, whereas the Out-of-Scope dataset is purely used to evaluate the robustness and generalization capability of the NER approaches. To better deal with logs from unknown sources, we propose the Log Diversification Unit (LoDU), a unit of our system that enables us to carry out log augmentation and enrichment, which helps make the NER approaches more robust towards new and unseen logs. We segregate our final results on a use-case basis where different NER approaches may be suitable for various applications. Overall, traditional ensembles perform the best in parsing the Out-of-Scope log files, but they may not be the best option to consider for real-time applications. On the other hand, if we want to balance the trade-off between performance and throughput, cascading classifiers can be considered the go-to solution.
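
As an illustration of what framing log parsing as named entity recognition looks like in practice (the tag set and log line below are made up, not the thesis's actual schema), a short Python sketch that decodes BIO-style entity tags into parsed fields:

```python
# One illustrative (token, tag) sequence: the "parse" of a log line is the set
# of entities recovered from the tag sequence, rather than a regex match.
tokens = ["Jul", "14", "02:31:07", "webserver01", "sshd[2541]:", "Failed",
          "password", "for", "root", "from", "10.0.0.5", "port", "52213"]
tags   = ["B-TIMESTAMP", "I-TIMESTAMP", "I-TIMESTAMP", "B-HOST", "B-PROCESS",
          "B-MESSAGE", "I-MESSAGE", "I-MESSAGE", "B-USER", "O", "B-SRC_IP",
          "O", "B-PORT"]

def decode_entities(tokens, tags):
    """Group consecutive B-/I- tags into (entity_type, text) pairs."""
    entities, current_type, current_tokens = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [tok]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(tok)
        else:
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

print(decode_entities(tokens, tags))
# e.g. [('TIMESTAMP', 'Jul 14 02:31:07'), ('HOST', 'webserver01'), ...]
```
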
250

The Hermeneutics Of The Hard Drive: Using Narratology, Natural Language Processing, And Knowledge Management To Improve The Effectiveness Of The Digital Forensic Process

Pollitt, Mark 01 January 2013 (has links)
In order to protect the safety of our citizens and to ensure a civil society, we ask our law enforcement, judiciary and intelligence agencies, under the rule of law, to seek probative information which can be acted upon for the common good. This information may be used in court to prosecute criminals or it can be used to conduct offensive or defensive operations to protect our national security. As the citizens of the world store more and more information in digital form, and as they live an ever-greater portion of their lives online, law enforcement, the judiciary and the Intelligence Community will continue to struggle with finding, extracting and understanding the data stored on computers. But this trend affords greater opportunity for law enforcement. This dissertation describes how several disparate approaches (knowledge management, content analysis, narratology, and natural language processing) can be combined in an interdisciplinary way to positively impact the growing difficulty of developing useful, actionable intelligence from the ever-increasing corpus of digital evidence. After exploring how these techniques might apply to the digital forensic process, I will suggest two new theoretical constructs, the Hermeneutic Theory of Digital Forensics and the Narrative Theory of Digital Forensics, linking existing theories of forensic science, knowledge management, content analysis, narratology, and natural language processing together in order to identify and extract narratives from digital evidence. An experimental approach will be described and prototyped. The results of these experiments demonstrate the potential of natural language processing techniques in digital forensics.
