281

Forensic Analysis of Images and Documents

Ruiting Shao (18018187) 23 February 2024 (has links)
This thesis involves three topics related to forensic analysis of media data. The first topic is the analysis of images and documents that have been created with a scanner, with the goal of detecting and identifying the scanner model from the scanned images/documents. We propose a deep learning system that automatically learns the inherent features of scanned images and produces both a scanner model identification and a reliability map for a scanned image. The proposed system has shown promising results in the forensic analysis of scanned images. The second topic concerns the forensic integrity of scientific papers. The project is divided into multiple tasks: data collection, image extraction, and manipulation detection. We have constructed a dataset of retracted scientific papers that have been verified to have integrity issues, and we design and maintain a web-based Scientific Integrity System for forensic analysis of the images within scientific publications. The third topic is media document analysis, where our goal is to identify the publication style of a media document as an aid in detecting potential document manipulation. We focus mainly on image-text consistency checking and synthetic tweet analysis. For the image-text consistency check, we describe a system that examines an image in a document together with its caption (or other text associated with the image) to check image/text consistency. For synthetic tweet analysis, we propose a system that detects and identifies the text generation models and paraphrase attack models used.
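A minimal sketch of how an image-text consistency check of this general kind might be wired together, using an off-the-shelf CLIP-style joint embedding; the library, model name, and threshold below are illustrative assumptions, not the system described in the thesis:

```python
# Hedged sketch of an image/caption consistency check via a CLIP-style
# joint embedding. Model choice and threshold are illustrative only.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # encodes images and text into one space

def consistency_score(image_path: str, caption: str) -> float:
    img_emb = model.encode(Image.open(image_path))
    txt_emb = model.encode(caption)
    return float(util.cos_sim(img_emb, txt_emb))

score = consistency_score("figure1.png", "Western blot of protein X")  # hypothetical inputs
print(f"consistency={score:.3f}", "flag for review" if score < 0.2 else "ok")
```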
282

Development of Technical Nuclear Forensics for Spent Research Reactor Fuel

Sternat, Matthew Ryan 1982- 14 March 2013 (has links)
Pre-detonation technical nuclear forensics techniques for research reactor spent fuel were developed in a collaborative project with Savannah River National Laboratory. An inverse analysis method was employed to reconstruct reactor parameters from a spent fuel sample using results from a radiochemical analysis. In the inverse analysis, a reactor physics code is used as a forward model. Verification and validation of different reactor physics codes was performed for usage in the inverse analysis. The verification and validation process consisted of two parts. The first is a variance analysis of Monte Carlo reactor physics burnup simulation results. The codes used in this work are MONTEBURNS and MCNPX/CINDER. Both utilize Monte Carlo transport calculations for reaction rate and flux results. Neither code has a variance analysis that will propagate through depletion steps, so a method to quantify and understand the variance propagation through these depletion calculations was developed. The second verification and validation process consisted of comparing reactor physics code output isotopic compositions to radiochemical analysis results. A sample from an Oak Ridge Research Reactor spent fuel assembly was acquired through a drilling process. This sample was then dissolved in nitric acid and diluted in three different quantities, creating three separate samples. A radiochemical analysis was completed and the results were compared to simulation outputs at different levels of detail. After establishing a forward model, an inverse analysis was developed to reconstruct the burnup, initial uranium isotopic compositions, and cooling time of a research reactor spent fuel sample. A convergence acceleration technique was used that consisted of an analytical calculation to predict burnup, initial 235U, and 236U enrichments. The analytic calculation results may also be used stand-alone or in a database search algorithm. In this work, a reactor physics code is used as a forward model with the analytic results as initial conditions in a numerical optimization algorithm. In the numerical analysis, the burnup and initial uranium isotopic compositions are reconstructed until the iterative spent fuel characteristics converge with the measured data. Upon convergence of the sample's burnup and initial uranium isotopic composition, the cooling time can be reconstructed. To reconstruct cooling time, the standard decay equation is inverted and solved for time. Two methods were developed. One method uses the converged burnup and initial uranium isotopic compositions in a reactor depletion simulation. The second method uses an isotopic signature that does not decay out of its mass bin and has a simple production chain; an example is 137Cs, which decays into stable 137Ba. Similar results are achieved with both methods, but extended shutdown time or time away from power results in over-prediction of the cooling time. The over-prediction of cooling time and the comparison of different burnup reconstruction isotope results are indicator signatures of extended shutdown or time away from power. Due to dynamic operation in time and function, detailed power history reconstruction for research reactors is very challenging. Frequent variations in power, repeated variable shutdown time lengths, and experimentation history affect the spectrum an individual assembly is burned with, such that full reactor parameter reconstruction is difficult.
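The cooling-time reconstruction described above reduces to inverting first-order radioactive decay, N(t) = N0 e^(-λt), giving t = ln(N0/N(t))/λ with λ = ln 2 / t½. A worked sketch for the 137Cs signature (half-life 30.08 years); the isotope amounts below are made-up illustrative numbers, not thesis data:

```python
import math

CS137_HALF_LIFE_Y = 30.08                    # 137Cs decays to stable 137Ba
DECAY_CONST = math.log(2) / CS137_HALF_LIFE_Y  # lambda = ln 2 / half-life

def cooling_time(n0: float, n_t: float) -> float:
    """Invert N(t) = N0 * exp(-lambda * t) for t, in years.
    n0:  137Cs content at shutdown (from the converged depletion model)
    n_t: 137Cs content measured in the sample today"""
    return math.log(n0 / n_t) / DECAY_CONST

# Illustrative numbers only: 37% of the shutdown inventory has decayed.
print(f"{cooling_time(1.00, 0.63):.1f} years")  # ~20.0 years
```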
The results from this technical nuclear forensic analysis may be used together with law enforcement and intelligence data, and with macroscopic and microscopic sample characteristics, in a process called attribution to suggest or exclude possible sources of origin for a sample.
283

An analysis of disc carving techniques

Mikus, Nicholas A. 03 1900 (has links)
Approved for public release, distribution is unlimited / Disc carving is an essential element of computer forensic analysis. However, the high cost of commercial solutions, coupled with the lack of open source tools to perform disc analysis, has become a hindrance to those performing analysis on UNIX computers. In addition, even expensive commercial products offer only a fairly limited ability to "carve" for various files. In this thesis, an open source tool known as Foremost is modified in such a way as to address the need for such a carving tool in a UNIX environment. An implementation of various heuristics for recognizing file formats is demonstrated, as well as the ability to provide some file-system-specific support. As a result of these implementations, a revision of Foremost is provided that will be made available as an open source tool to aid analysts in their forensic investigations. / Civilian, Federal Cyber Corps
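The header/footer heuristics at the heart of carvers like Foremost amount to scanning a raw disc image for known magic-byte sequences. A minimal sketch for JPEG; the size cap and output naming are illustrative choices, not Foremost's actual configuration:

```python
# Minimal header/footer carver for JPEG, in the spirit of Foremost's
# magic-byte heuristics. Size cap and output naming are illustrative.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"
MAX_SIZE = 20 * 1024 * 1024  # give up if no footer within 20 MiB

def carve_jpegs(image_path: str) -> int:
    data = open(image_path, "rb").read()
    count, pos = 0, 0
    while (start := data.find(JPEG_HEADER, pos)) != -1:
        end = data.find(JPEG_FOOTER, start, start + MAX_SIZE)
        if end != -1:
            with open(f"carved_{count:04d}.jpg", "wb") as out:
                out.write(data[start:end + 2])  # include the footer bytes
            count += 1
            pos = end + 2
        else:
            pos = start + 1  # false positive header; keep scanning
    return count
```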
284

Modell för lösenordsklassning : Utveckling av lösenordsklassificering / Password classification model : Development of password classification

Eriksson, Fredrik January 2017 (has links)
In modern society, computers are a natural part of everyday life. For most people the computer is a tool that helps them through work as well as daily life. There is, however, a darker side, where people use computers to commit crimes. So-called IT-related crime keeps growing: according to the Swedish National Council for Crime Prevention's (Brå) 2016 report, officially recorded crimes with IT elements increased by 949% in Sweden between 2006 and 2015 (Andersson, Hedqvist, Ring & Skarp, 2016). Arresting the perpetrators requires means of proving that a crime has been committed, and one way is to search the computer for evidence. But what happens if the suspect knows that he or she may become the target of an investigation? The suspect may try to make it as difficult as possible to get into the computer, for example by locking the hard drive with an encryption algorithm. Such encryption can be very hard to decrypt, and it may be easier to try to find the correct password instead. The purpose of this study is to develop a model for password classification. With the model, strategies users employ when creating passwords can be identified and classified, contributing to better knowledge of those strategies. As full-disk encryption is becoming a more common way of keeping unauthorized people out of a system, the hope is that the model can be used and developed into a framework that eases the work of forensic examiners in the police and other law-enforcement agencies. If the strategies that different kinds of users employ when creating passwords can be classified into categories of their own, this would ease the workload of IT forensic examiners and speed up the process of cracking passwords. The study is conducted using a qualitative method together with a validation of the model. Information gathered through qualitative interviews is analyzed and used to develop the password classification model. The development has been an iterative process with feedback from the interview participants: a draft model grounded in existing research was created, discussed with the participants, updated, and fed back to them. The model was then validated by collecting real passwords that have leaked onto the Internet and testing them against the password classification model.
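To make the classification idea concrete, a hedged sketch of what a rule-based classifier over strategy categories might look like follows; the categories and patterns are hypothetical illustrations, not the model developed in the thesis:

```python
import re

# Hypothetical strategy categories for illustration -- NOT the model
# developed in the thesis. First matching rule wins.
RULES = [
    ("keyboard-walk", re.compile(r"^(qwert|asdf|zxcv)", re.IGNORECASE)),
    ("numeric-only",  re.compile(r"^\d+$")),                       # 123456
    ("leetspeak",     re.compile(r"^(?=.*[@$!])[A-Za-z0-9@$!]+$")),  # P@ssw0rd
    ("word+digits",   re.compile(r"^[A-Za-z]+\d{1,4}$")),          # summer2017
]

def classify(password: str) -> str:
    """Return the first matching strategy label, else 'unclassified'."""
    for label, pattern in RULES:
        if pattern.match(password):
            return label
    return "unclassified"

for pw in ["summer2017", "123456", "qwerty12", "P@ssw0rd"]:
    print(pw, "->", classify(pw))
```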
285

A Study of the Impact of Junior High or Middle School Forensic Training on High School Forensic Programs in the Dallas-Fort Worth Metroplex

Ballard, Lynda Dyer 12 1900 (has links)
The purpose of this thesis is to determine the impact of intermediate school forensics on high school forensic programs in the Dallas-Fort Worth metroplex. First, the thesis records student and instructor evaluations of both the intermediate school and high school forensic programs. Second, it compares the evaluations of students with intermediate forensics training and students without it. Third, it discusses the impact of intermediate forensics on high school forensic programs. This study reveals that intermediate forensics is beneficial to high school forensics. Previously trained students teach and interest other students in high school; they are more confident, show more initiative, and win more than other students.
286

Application of Digital Forensic Science to Electronic Discovery in Civil Litigation

Roux, Brian 15 December 2012 (has links)
Following changes to the Federal Rules of Civil Procedure in 2006 dealing with the role of Electronically Stored Information, digital forensics is becoming necessary to the discovery process in civil litigation. The development of case law interpreting the rule changes since their enactment defines how digital forensics can be applied to the discovery process, the scope of discovery, and the duties imposed on parties. Herein, pertinent cases are examined to determine what trends exist and how they affect the field. These observations buttress case studies involving discovery failures in large corporate contexts, along with insights on the technical reasons those discovery failures occurred and continue to occur. The state of the art in the legal industry for handling Electronically Stored Information is slow, inefficient, and extremely expensive. These failings exacerbate discovery failures by making the discovery process more burdensome than necessary. In addressing this problem, weaknesses of existing approaches are identified, and new tools are presented which cure these defects. By drawing on open source libraries, components, and other support, the presented tools exceed the performance of existing solutions by between one and two orders of magnitude. The transparent standards embodied in the open source movement allow for clearer defensibility of discovery practice sufficiency, whereas existing approaches entail difficult-to-verify closed source solutions. Legacy industry practices of numbering documents with Bates numbers inhibit efficient parallel and distributed processing of electronic data into paginated forms. The failures inherent in legacy numbering systems are identified, and a new system is provided which eliminates these inhibitors while simultaneously better modeling the nature of electronic data, which does not lend itself to pagination; such non-paginated data includes databases and other file types which are machine readable but not human readable in format. In toto, this dissertation provides a broad treatment of digital forensics applied to electronic discovery, an analysis of current failures in the industry, and a suite of tools which address the weaknesses, problems, and failures identified.
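As a purely hypothetical illustration of the parallelism problem: a global sequential Bates counter forces workers to coordinate on every number, whereas an identifier derived from the evidence itself can be assigned independently and reproducibly by any worker. The scheme below is an invented example, not the system presented in the dissertation:

```python
import hashlib

def doc_id(custodian: str, content: bytes) -> str:
    """Coordination-free document ID derived from the evidence itself,
    so any number of workers can label documents in parallel -- unlike
    a global sequential Bates counter. Hypothetical scheme only."""
    digest = hashlib.sha256(content).hexdigest()[:16]
    return f"{custodian}-{digest}"

print(doc_id("ACME", b"contents of exhibit A"))
```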
287

API-Based Acquisition of Evidence from Cloud Storage Providers

Barreto, Andres E 11 August 2015 (has links)
Cloud computing, and cloud storage services in particular, pose a new challenge to digital forensic investigations. Currently, evidence acquisition for such services still follows the traditional approach of collecting artifacts on a client device. In this work, we show that such an approach not only requires substantial upfront investment in reverse engineering each service, but is also inherently incomplete, as it misses prior versions of the artifacts as well as cloud-only artifacts that have no standard serialized representations on the client. We introduce the concept of API-based evidence acquisition for cloud services, which addresses these concerns by utilizing the officially supported API of the service. To demonstrate the utility of this approach, we present a proof-of-concept acquisition tool, kumodd, which can acquire evidence from four major cloud storage providers: Google Drive, Microsoft OneDrive, Dropbox, and Box. The implementation provides both command-line and web user interfaces, and can be readily incorporated into established forensic processes.
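A minimal sketch of the API-based acquisition idea against Google Drive's REST API (the v3 files.list and revisions.list endpoints are real; the token handling and field selection here are simplified illustrations, and this is not kumodd itself):

```python
import requests

API = "https://www.googleapis.com/drive/v3"

def list_files_with_revisions(token: str):
    """Enumerate files and their prior revisions via the official API --
    cloud-only state that a client-side artifact search would miss."""
    hdrs = {"Authorization": f"Bearer {token}"}
    files = requests.get(f"{API}/files", headers=hdrs,
                         params={"fields": "files(id,name,md5Checksum)"},
                         timeout=30).json().get("files", [])
    for f in files:
        revs = requests.get(f"{API}/files/{f['id']}/revisions",
                            headers=hdrs, timeout=30).json().get("revisions", [])
        yield f["name"], f.get("md5Checksum"), len(revs)
```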
288

Document Forensics Through Textual Analysis

Belvisi, Nicole Mariah Sharon January 2019 (has links)
This project aims to give a brief overview of the research area called Authorship Analysis, with a main focus on Authorship Attribution and its existing methods. The second objective of this project is to test whether one of the main approaches in the field can still be applied successfully to today's new ways of communicating. The study uses multiple stylometric features to establish the authorship of a text, as well as a model based on TF-IDF.
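A compact sketch of such a TF-IDF baseline for authorship attribution; scikit-learn and character n-gram features are assumptions here, not necessarily the thesis's exact setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known = {  # toy corpora; real use needs far more text per author
    "alice": "i honestly think the match was decided early on ...",
    "bob":   "tbh the game was over b4 it even started lol ...",
}
disputed = "tbh i think it was over b4 halftime lol"

# Character n-grams capture spelling habits and abbreviations well,
# which matters for short, informal messages.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
matrix = vec.fit_transform(list(known.values()) + [disputed])
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = max(zip(known, sims), key=lambda kv: kv[1])
print(f"most similar author: {best[0]} (cosine={best[1]:.2f})")
```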
289

Hanteringen av integritetsperspektiv inom IT-forensik : En kvalitativ intervjustudie med rättsväsendets aktörer / The management of the privacy perspective in digital forensics : A qualitative interview study with the judiciary's actors

Olsson, Andreas January 2019 (has links)
Digital forensics has been around for many years and has grown within criminal investigations as our society becomes increasingly digitized. With more digitization, the amount of data in IT also increases. Much of our private lives, such as pictures and personal data, is stored on phones or computers. In recent years privacy has become more important to every individual, and respecting human rights is a must today. Digital forensics is, at its core, a privacy intrusion against the suspect, which means that the actors who perform it must be cautious and consider the privacy of those involved. The purpose of this work has been to investigate how the actors in the judicial system handle the privacy perspective during digital forensic investigations, and to get a picture of how this is applied in practice. The actors interviewed in this qualitative study were prosecutors, judges, and defense attorneys. The results show that there are many gray areas and that the actors make highly personal assessments and interpretations. The law describes in theory how certain steps are to be carried out, but applying it in practice is considerably more complicated, with the result that the actors hold differing opinions on how it should be applied. The actors also find it problematic to decide who should get access to what and how privacy should be handled.
290

A disciplina, pela legislação processual penal brasileira, da prova pericial relacionada ao crime informático praticado por meio da Internet. / The discipline, by Brazilian criminal procedure law, of the expert examination related to computer crime committed through the Internet.

Kerr, Vera Kaiser Sanches 05 July 2011 (has links)
With the advent and development of information technology, and especially of the Internet, criminal offenses have gained a new environment for their practice. The innovative aspect of these illicit acts is the digital medium, also called the electronic medium. Computer crime committed through the Internet is of the kind that leaves traces, so establishing the authorship and materiality of the criminal act requires examination of the corpus delicti, an examination performed by experts in computer forensics. Although expert evidence is governed by the Brazilian Code of Criminal Procedure, since it is a typical means of producing evidence, that regulation is extremely generic and therefore provides no specific rules for expert examination of computer media related to computer crime committed through the Internet. This work therefore analyzes expert evidence on computer media related to computer crime committed through the Internet, as a typical means of producing evidence, in light of technological progress, and discusses the viability of regulating it specifically in Brazilian criminal procedure law. The importance of having legal instruments governing the subject is justified not only for investigations at the national level but also at the international level, which would facilitate Brazil's adherence to international treaties and conventions governing joint investigations between sovereign states, given that computer crime committed through the Internet is, in most cases, transnational in nature.
