201

Detection and analysis of connection chains in network forensics

Almulhem, Ahmad 06 April 2010 (has links)
Network forensics is a young branch of the broader discipline of digital forensics. In particular, it refers to digital forensics in networked environments. It represents an important extension to the model of network security, where emphasis is traditionally put on prevention and, to a lesser extent, on detection. It focuses on the collection and analysis of network packets and events caused by an intruder for investigative purposes. A key challenge in network forensics is to ensure that the network itself is forensically ready, by providing an infrastructure to collect and analyze data in real time. In this thesis, we propose an agent-based network forensics system, which is intended to add real-time network forensics capabilities to a controlled network. We also evaluate the proposed system by deploying and studying it in a real-life environment. Another challenge in network forensics arises from an attacker's ability to move around in the network, which results in a chain of connections, commonly known as a connection chain. In this thesis, we provide an extensive review and taxonomy of connection chains, then propose a novel framework to detect them. The framework adopts a black-box approach, passively monitoring inbound and outbound packets at a host and analyzing the observed packets using association rule mining. We assess the proposed framework using public network traces and demonstrate both its efficiency and its detection capabilities. Finally, we propose a profiling-based framework to investigate connection chains that are distributed over several IP addresses. The framework utilizes a simple yet extensible hacker model that integrates information about a hacker's linguistic patterns, operating system, and times of activity. We establish the effectiveness of the proposed approach through several simulations and an evaluation with real attack data.
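The abstract gives no implementation details; as a toy illustration of the black-box idea (all names, timings, and thresholds hypothetical), the sketch below counts how often an inbound packet from one peer is followed within a short window by an outbound packet to another, and keeps pairs whose rule confidence is high. This is a stand-in for the association rule mining the thesis describes, not the thesis's method.

```python
from collections import Counter

def mine_relay_rules(events, window=1.0, min_conf=0.8):
    """Keep rules "inbound from src => outbound to dst" whose confidence
    (co-occurrences within `window` seconds / inbound count) is high."""
    inbound = [(t, peer) for t, direction, peer in events if direction == "in"]
    outbound = [(t, peer) for t, direction, peer in events if direction == "out"]
    pair_counts, in_counts = Counter(), Counter()
    for t_in, src in inbound:
        in_counts[src] += 1
        for t_out, dst in outbound:
            if 0 <= t_out - t_in <= window:
                pair_counts[(src, dst)] += 1
    return {pair: n / in_counts[pair[0]]
            for pair, n in pair_counts.items()
            if n / in_counts[pair[0]] >= min_conf}

# Synthetic trace: packets from A are relayed to C ~0.1 s after arrival.
events = [(0.0, "in", "A"), (0.1, "out", "C"),
          (1.0, "in", "A"), (1.1, "out", "C"),
          (2.0, "in", "B"), (5.0, "out", "D")]
rules = mine_relay_rules(events)
print(rules)  # {('A', 'C'): 1.0}
```

A high-confidence pair suggests the host is relaying traffic, i.e. sits in the middle of a connection chain.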
202

Flat Quartz-Crystal X-ray Spectrometer for Nuclear Forensics Applications

Goodsell, Alison 2012 August 1900 (has links)
The ability to quickly and accurately quantify the plutonium (Pu) content in pressurized water reactor (PWR) spent nuclear fuel (SNF) is critical for nuclear forensics purposes. One non-destructive assay (NDA) technique being investigated to detect bulk Pu in SNF is measuring the self-induced x-ray fluorescence (XRF). Previous XRF measurements of Three Mile Island (TMI) PWR SNF taken in July 2008 and January 2009 at Oak Ridge National Laboratory (ORNL) successfully illustrated the ability to detect the 103.7 keV x-ray from Pu using a planar high-purity germanium (HPGe) detector. This allows for a direct measurement of Pu in SNF. Additional gamma-ray and XRF measurements were performed on TMI SNF at ORNL in October 2011 to measure the signal-to-noise ratio for the 103.7 keV peak. Previous work had shown that the Pu/U peak ratio was directly proportional to the Pu/U content and increased linearly with burnup. However, the underlying Compton background significantly reduced the signal-to-noise ratio for the x-ray peaks of interest, thereby requiring a prolonged count time. Comprehensive SNF simulations by Stafford et al. showed that the contributions to the Compton continuum were due to high-energy gamma rays scattering in the fuel, shipping tube, cladding, collimator, and detector. The background radiation was primarily due to the incoherent scattering of the 661.7 keV gamma ray from 137Cs. In this work, methods to reduce the Compton background and thereby increase the signal-to-noise ratio were investigated. To reduce the debilitating effects of the Compton background, a crystal x-ray spectrometer system was designed. This wavelength-dispersive spectroscopy technique isolated the Pu and U x rays according to Bragg's law by x-ray diffraction through a crystal structure. The higher-energy background radiation was blocked from reaching the detector using a customized collimator and shielding system.
A flat quartz-crystal x-ray spectrometer system was designed specifically to fit the constraints and requirements of detecting XRF from SNF. Simulations were performed to design and optimize the collimator design and to quantify the improved signal-to-noise ratio of the Pu and U x-ray peaks. The proposed crystal spectrometer system successfully diffracted the photon energies of interest while blocking the high-energy radiation from reaching the detector and contributing to background counts. The spectrometer system provided a higher signal-to-noise ratio and lower percent error for the XRF peaks of interest from Pu and U. Using the flat quartz-crystal x-ray spectrometer and customized collimation system, the Monte Carlo N-Particle (MCNP) simulations showed the 103.7 keV Pu x-ray peak signal-to-noise ratio improved by a factor of 13 and decreased the percent error by a factor of 3.3.
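The spectrometer's selectivity rests on Bragg's law, n·λ = 2·d·sin(θ). A short sketch (assuming a quartz (10-11) interplanar spacing of 3.343 Å; the thesis's actual crystal cut and geometry may differ) shows how small the diffraction angles are at these photon energies:

```python
import math

HC_KEV_ANGSTROM = 12.39842  # h*c in keV * angstrom

def bragg_angle_deg(energy_kev, d_angstrom, order=1):
    """Bragg angle theta from n*lambda = 2*d*sin(theta)."""
    wavelength = HC_KEV_ANGSTROM / energy_kev  # photon wavelength, angstrom
    return math.degrees(math.asin(order * wavelength / (2 * d_angstrom)))

D_QUARTZ = 3.343  # assumed quartz (10-11) d-spacing, angstrom
for label, e_kev in [("Pu XRF 103.7 keV", 103.7), ("U XRF 98.4 keV", 98.4)]:
    print(f"{label}: theta = {bragg_angle_deg(e_kev, D_QUARTZ):.2f} deg")
```

Angles near one degree explain why the collimator design is so critical: the diffracted x rays leave the crystal barely deflected from the direct, Compton-dominated beam.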
203

Analysing E-mail Text Authorship for Forensic Purposes

Corney, Malcolm W. January 2003 (has links)
E-mail has become the most popular Internet application, and with its rise in use has come an inevitable increase in the use of e-mail for criminal purposes. It is possible for an e-mail message to be sent anonymously or through spoofed servers. Computer forensics analysts need a tool that can be used to identify the author of such e-mail messages. This thesis describes the development of such a tool using techniques from the fields of stylometry and machine learning. An author's style can be reduced to a pattern by making measurements of various stylometric features from the text. E-mail messages also contain macro-structural features that can be measured. Together, these features can be used with the Support Vector Machine learning algorithm to classify or attribute authorship of e-mail messages to an author, provided a suitable sample of messages is available for comparison. In an investigation, the set of authors may need to be reduced from an initial large list of possible suspects. This research has trialled authorship characterisation based on sociolinguistic cohorts, such as gender and language background, as a technique for profiling an anonymous message so that the suspect list can be reduced.
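The thesis uses a Support Vector Machine over stylometric features; the dependency-free sketch below substitutes a nearest-centroid classifier and a handful of toy features purely to illustrate the attribution pipeline (measure features, build per-author profiles, classify). The feature set and training messages here are illustrative inventions, not the thesis's.

```python
import math

def stylometric_features(text):
    """Toy feature vector: average word length, words per sentence,
    function-word rate, and punctuation rate."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    function_words = {"the", "of", "and", "a", "to", "in", "is", "it"}
    n = max(len(words), 1)
    return (
        sum(len(w) for w in words) / n,
        n / max(len(sentences), 1),
        sum(w.lower() in function_words for w in words) / n,
        sum(c in ",;:" for c in text) / max(len(text), 1),
    )

def nearest_author(sample, corpus):
    """Attribute `sample` to the author whose mean feature vector is closest."""
    def centroid(texts):
        vecs = [stylometric_features(t) for t in texts]
        return [sum(v) / len(vecs) for v in zip(*vecs)]
    f = stylometric_features(sample)
    return min(corpus, key=lambda author: math.dist(f, centroid(corpus[author])))

# Hypothetical training messages; real use needs many messages per author.
corpus = {
    "alice": ["Consequently, substantial deliberation precedes correspondence.",
              "Nevertheless, meticulous consideration characterises communication."],
    "bob": ["hi can u send it now", "ok see u then thx"],
}
print(nearest_author("furthermore, extensive documentation accompanies everything",
                     corpus))  # alice
```

A real system would use many more features, larger samples per author, and an SVM rather than centroid distance, but the shape of the pipeline is the same.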
204

Palynomorph retention on clothing under differing conditions

Rowell, Louise January 2009 (has links)
[Truncated abstract] Palynology has been used in a number of criminal cases where pollen and spores (palynomorphs) on clothing have featured as evidence. Pollen and spores are microscopic, generally morphologically unique to a plant genus and often species, resistant to decay, produced in large amounts, and present as components of soil. These features make palynology a highly valuable forensic tool. Clothing is an excellent collector of pollen and spores, which become trapped in the fabric weave when clothing is brushed against flowering plants or comes into contact with dust, soil, or air-borne pollen. Most forensic palynologists have found that palynomorphs from a crime scene may remain on clothing after washing or several days' wear. However, no empirical research has been conducted on the retention of palynomorphs on clothing under differing conditions. Research of this kind is required to provide support for the future presentation and validation of palynological evidence in court. This project examined the relative retention of palynomorphs on clothing that had been worn during a simulated assault in a sheltered garden on the grounds of St George's College, Western Australia. Three replicate control soil samples each were collected from the assault scene itself and from the whole garden to provide a baseline palynological profile for comparison with the experimental (Evidentiary) clothing samples. Forty pollen samples from the predominant species of plants in the garden and surrounds were collected, processed, and databased as a reference for palynomorph identification. Standard T-shirts and jeans were chosen as the research clothing. During the simulated assault the knees of the jeans and the backs of the T-shirts came into abrasive contact with the soil of the garden for approximately one minute. The clothing then underwent three 'conditions' to simulate 'real life' situations.
Three clothing sets were collected immediately after the assault (E1), three sets were worn for a period of three days after the assault (E2), and three sets were washed after the assault (E3). ... The Background clothing samples did not have a profile similar to the research garden, but the profiles collected from each set reflected the areas in which they were worn. The number of palynomorphs per gram of garden soil ranged from thousands to tens of thousands. The total number of palynomorphs collected by the E1 samples ranged from 100,000 to millions per clothing item. The E2 samples retained thousands to tens of thousands of palynomorphs, and the E3 samples retained hundreds to thousands. The Background clothing samples collected thousands to tens of thousands of palynomorphs. These results confirm that jeans and T-shirts worn during an assault and then worn for a period of days, or washed, will still carry pollen and spores characteristic of the assault area. This highlights the importance of investigating officers enquiring where and for how long clothing of interest has been worn before and after an event, and whether the clothing has been washed since the event. The results of this study will provide forensic palynologists with supportive data for future casework involving clothing.
205

Accelerating digital forensic searching through GPGPU parallel processing techniques

Bayne, Ethan January 2017 (has links)
Background: String searching within a large corpus of data is a critical component of digital forensic (DF) analysis techniques such as file carving. The continuing increase in the capacity of consumer storage devices requires similar improvements in the performance of the string searching techniques employed by DF tools used to analyse forensic data. As string searching is a trivially parallelisable problem, general-purpose graphics processing unit (GPGPU) approaches are a natural fit. Currently, only some of the research in GPGPU programming has been transferred to the field of DF, and that work used a closed-source GPGPU framework, the Compute Unified Device Architecture (CUDA). These earlier studies found that the local storage devices from which forensic data are read present an insurmountable performance bottleneck. Aim: This research hypothesises that modern storage devices no longer present a performance bottleneck to the processing techniques currently used in the field, and proposes that an open-standards GPGPU framework, the Open Computing Language (OpenCL), would be better suited to accelerate file carving, with wider compatibility across an array of modern GPGPU hardware. This research further hypothesises that a modern multi-string searching algorithm may be better adapted to fulfil the requirements of DF investigation. Methods: This research presents a review of existing research and tools used to perform file carving and acknowledges related work within the field. To test the hypothesis, parallel file carving software was created using C# and OpenCL, employing both a traditional string searching algorithm and a modern multi-string searching algorithm to conduct an analysis of forensic data. A set of case studies that demonstrate and evaluate the potential benefits of adopting various methods of string searching on forensic data is given.
This research concludes with a final case study that evaluates the performance of file carving with the best-proposed string searching solution and compares the result with an existing file carving tool, Foremost. Results: The results establish that the parallelised OpenCL and Parallel Failureless Aho-Corasick (PFAC) algorithm solution delivers significant processing improvements on both single and multiple GPUs on modern hardware. In comparison to CPU approaches, GPGPU processing models were observed to minimise the time required to search for larger numbers of patterns. Results also showed that employing PFAC delivers significant performance increases over the Boyer-Moore (BM) algorithm. The method employed to read data from storage devices was also seen to have a significant effect on the time required to perform string searching and file carving. Conclusions: Empirical testing indicates that the proposed string searching method is more efficient than the widely adopted Boyer-Moore algorithm when applied to string searching and file carving. The developed OpenCL GPGPU processing framework was found to be more efficient than CPU counterparts when searching for larger numbers of patterns within data. This research also refutes claims that file carving is solely limited by the performance of the storage device, and presents compelling evidence that performance is bound by the combination of the storage device's performance and the processing technique employed.
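PFAC is a GPU-oriented variant of Aho-Corasick that drops the failure transitions so that each thread can attempt a match starting at its own byte offset. A serial, textbook Aho-Corasick sketch (not the thesis's C#/OpenCL code) illustrates the multi-pattern search that both variants share:

```python
from collections import deque

def aho_corasick(patterns):
    """Build the goto/fail/output tables of an Aho-Corasick automaton."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                       # 1) build the pattern trie
        state = 0
        for ch in p:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(p)
    queue = deque(goto[0].values())          # 2) BFS to set failure links
    while queue:                             #    (depth-1 states fail to root)
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]           # inherit shorter suffix matches
    return goto, fail, out

def search(text, tables):
    """Report (start, pattern) for every pattern occurrence in text."""
    goto, fail, out = tables
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for p in out[state]:
            hits.append((i - len(p) + 1, p))
    return hits

tables = aho_corasick(["he", "she", "his", "hers"])
print(sorted(search("ushers", tables)))  # [(1, 'she'), (2, 'he'), (2, 'hers')]
```

In carving, the patterns would be file-header and file-footer signatures; the failureless GPU variant trades the fail links for massive thread-level parallelism over the input offsets.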
206

An investigation into the relationship between static and dynamic gait features : a biometrics perspective

Alawar, Hamad Mansoor Mohd Aqil January 2014 (has links)
A biometric is a unique physical or behavioural characteristic of a person. This attribute, such as a fingerprint or gait, can be used for identification or verification purposes. Gait is an emerging biometric with great potential. Gait recognition is based on recognising a person by the manner in which they walk. Its potential lies in the fact that it can be captured at a distance and does not require the cooperation of the subject. This advantage makes it a very attractive tool for forensic cases and applications, where it can assist in identifying a suspect when other evidence such as DNA, fingerprints, or a face is not attainable. Gait can be used for recognition in a direct manner when the two samples are captured at similar camera resolutions, positions, and conditions. Yet in some cases, the only sample available is of an incomplete gait cycle, low resolution, low frame rate, a partially visible subject, or a single static image. Most of these conditions have one thing in common: static measurements. A gait signature is usually formed from a number of dynamic and static features. Static features are physical measurements of height, length, or build, while dynamic features are representations of joint rotations or trajectories. The aim of this thesis is to study the potential of predicting dynamic features from static features. In this thesis, we created a database that utilises a 3D laser scanner to capture the accurate shape and volume of a person, and a motion capture system to accurately record motion data. The first analysis focused on the correlation between twenty-one 2D static features and eight dynamic features. Eleven pairs of features were regarded as significant under the criterion of a P-value less than 0.05. Other features also showed a strong correlation that indicated potential predictive power. The second analysis focused on 3D static and dynamic features.
Through the correlation analysis, 1196 pairs of features were found to be significantly correlated. Based on these results, linear regression analysis was used to predict a dynamic gait signature. The predictors were chosen using two adaptive methods developed in this thesis: the "top-x" method and the "mixed" method. The predictions were assessed both for their accuracy and for their classification potential in gait recognition. The top results produced a 59.21% mean matching percentile. This result will act as a baseline for future research in predicting a dynamic gait signature from static features. The results of this thesis have potential applications in biomechanics, biometrics, forensics, and 3D animation.
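The prediction step rests on ordinary least squares regression. A minimal univariate sketch with made-up numbers (a leg-length measurement standing in for a static feature, a hip-rotation range for a dynamic one; these are not the thesis's data) illustrates the idea of predicting a dynamic feature from a static one:

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a, slope b

# Made-up example: leg length (cm) predicting hip-rotation range (deg).
leg = [80, 85, 90, 95, 100]
hip = [38, 40, 42, 44, 46]
a, b = fit_linear(leg, hip)
print(round(a + b * 88, 1))  # predicted range for an 88 cm leg: 41.2
```

The thesis extends this to many predictors at once, with the "top-x" and "mixed" methods selecting which static features enter the regression.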
207

Beyond digital, imagens, and forensics : towards a regulation of trust in multimedia communication / Além da análise forense e de imagens em busca da regulamentação de confiança em comunicação multimídia

Schetinger, Victor Chitolina January 2018 (has links)
This thesis discusses the role of Digital Image Forensics as a regulator of digital media in society. This includes a perceptual study with over 400 subjects to assess their ability to notice editing in images. The results of the experiment indicate that humans are easily fooled by digital images and cannot reliably tell edited images apart from pristine ones. The thesis then analyzes the effectiveness of the available arsenal of digital image forensics technology at detecting image editing performed by state-of-the-art image-compositing techniques. By analyzing fundamental image patterns, forensics techniques can effectively detect the occurrence of most types of image-compositing operations. In response to these two studies, the thesis presents an alternative approach to digital image forensics, based on automated plan generation. By treating the image inspection process as a plan comprising different steps, it proposes an architecture that can guide an analyst in choosing the next best step for inspecting an image. The generated plans are flexible, adapting on the fly to the observed results. The plans are based on a formal modelling of current forensics knowledge and techniques, so that they can be translated into steps to be executed. The thesis then shows that the limits of such an approach lie in the difficulty of validating results, which is a consequence of the setup of forensics problems: they are problems of distributed trust among parties with limited information. This scenario is analyzed from different perspectives in search of the practical limits of Digital Image Forensics as a whole.
The results of such an analysis suggest that the field falls short in providing practical and accessible solutions to society due to limited engagement in multidisciplinary research rather than limited technical proficiency. The thesis then discusses how paradoxes from philosophy, mathematics, and epistemology arise naturally both in real forensics scenarios and in the theoretical foundations of the field. Digital Image Forensics ultimately deals with human communication and, as such, is subject to all its complexities. Finally, it is argued that the path to providing useful solutions for society requires collective engagement from different disciplines. It is the responsibility of the forensics community to develop a common, accessible epistemological framework for this collective enterprise.
209

Nuclear Fission Weapon Yield, Type, and Neutron Spectrum Determination Using Thin Li-ion Batteries

January 2017 (has links)
With the status of nuclear proliferation around the world becoming more and more complex, nuclear forensics methods are needed to restrain the unlawful use of nuclear devices. Lithium-ion batteries are now ubiquitous in consumer electronic devices. More importantly, the materials inside these batteries have the potential to be used as neutron detectors, much like the activation foils used in reactor experiments. Therefore, in a nuclear weapon detonation incident, lithium-ion batteries can serve as spatially distributed sensors. In order to validate the feasibility of such an approach, Monte Carlo N-Particle (MCNP) models are built for various lithium-ion batteries, as well as for neutron transport from different fission weapons. To obtain precise battery compositions for the MCNP models, destructive inductively coupled plasma mass spectrometry (ICP-MS) analysis is utilized. The same battery types are irradiated in a series of reactor experiments to validate the MCNP models and the methodology. The MCNP weapon radiation transport simulations are used to mimic a nuclear detonation incident and to study the correlation between the nuclear reactions inside the batteries and the neutron spectra. Subsequently, the irradiated battery activities are used in the SNL-SAND-IV code to reconstruct the neutron spectrum for both the reactor experiments and the weapon detonation simulations. Based on this study, empirical data show that lithium-ion batteries have the potential to serve as widely distributed neutron detectors in this simulated environment to (1) calculate the nuclear device yield, (2) differentiate between gun-type and implosion fission weapons, and (3) reconstruct the neutron spectrum of the device. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2017
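The spectrum unfolding itself (SNL-SAND-IV) is beyond a sketch, but the activation relation the detector idea rests on is standard: the induced activity is A = φσN(1 − e^(−λt_irr))·e^(−λt_cool). A hedged sketch with round, purely illustrative numbers (not battery-specific data from the dissertation):

```python
import math

def activation_bq(flux, xs_barn, n_atoms, half_life_s, t_irr_s, t_cool_s=0.0):
    """Induced activity A = flux * sigma * N * (1 - exp(-lam*t_irr)) * exp(-lam*t_cool),
    assuming a thin target and a single-group (spectrum-averaged) cross section."""
    lam = math.log(2) / half_life_s          # decay constant, 1/s
    return (flux * xs_barn * 1e-24 * n_atoms  # 1 barn = 1e-24 cm^2
            * (1 - math.exp(-lam * t_irr_s))
            * math.exp(-lam * t_cool_s))

# Illustrative only: 1e20 target atoms, 1 barn cross section,
# 1e10 n/cm^2/s flux, 1 h half-life, 1 h irradiation, no cooling.
print(activation_bq(1e10, 1.0, 1e20, 3600.0, 3600.0))  # ~5.0e5 Bq
```

Measuring such activities for several reactions with different energy thresholds is what lets an unfolding code like SNL-SAND-IV reconstruct the neutron spectrum.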
210

Forensic Methods and Tools for Web Environments

January 2017 (has links)
The Web is one of the most exciting and dynamic areas of development in today's technology. However, with such activity, innovation, and ubiquity has come a set of new challenges for digital forensic examiners, making their jobs even more difficult. For examiners to become as effective with evidence from the Web as they currently are with more traditional evidence, they need (1) methods that guide them in how to approach this new type of evidence and (2) tools that accommodate web environments' unique characteristics. In this dissertation, I present my research to alleviate the difficulties forensic examiners currently face with respect to evidence originating from web environments. First, I introduce a framework for web environment forensics, which elaborates on and addresses the key challenges examiners face and outlines a method for approaching web-based evidence. Next, I describe my work to identify extensions installed on encrypted web thin clients using only a sound understanding of these systems' inner workings and the metadata of the encrypted files. Finally, I discuss my approach to reconstructing the timeline of events on encrypted web thin clients by using service provider APIs as a proxy for directly analyzing the device. In each of these research areas, I also introduce structured formats that I customized to accommodate the unique features of the evidence sources while also facilitating tool interoperability and information sharing. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2017
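The extension-identification step reasons only from metadata such as encrypted file sizes. The sketch below (the signature database, extension names, and tolerance are entirely hypothetical, not the dissertation's method or data) illustrates one way such reasoning can work: score each candidate extension by how many of its known resource-file sizes match an observed encrypted-file size.

```python
def match_extension(observed_sizes, signatures, tolerance=16):
    """Score each candidate extension by the fraction of its known
    resource-file sizes that match an observed encrypted-file size
    (within `tolerance` bytes, e.g. to allow for cipher padding)."""
    scores = {ext: sum(any(abs(o - s) <= tolerance for o in observed_sizes)
                       for s in sizes) / len(sizes)
              for ext, sizes in signatures.items()}
    return max(scores, key=scores.get), scores

# Entirely hypothetical signature database and observation.
signatures = {"ad_blocker": [1200, 5300, 880], "note_taker": [400, 96000]}
observed = [1210, 5290, 884, 70000]
best, scores = match_extension(observed, signatures)
print(best)  # ad_blocker
```

The appeal of this style of inference is that it needs no plaintext: the examiner compares only sizes (and similar metadata) of the encrypted files against profiles built from known-good installations.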
