  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Methods for optimization of the signature-based radiation scanning approach for detection of nitrogen-rich explosives

Callender, Kennard January 1900
Doctor of Philosophy / Department of Mechanical and Nuclear Engineering / William L. Dunn / The signature-based radiation scanning (SBRS) technique can be used to rapidly detect nitrogen-rich explosives at standoff distances. This technique uses a template-matching procedure that produces a figure-of-merit (FOM) whose value is used to distinguish between inert and explosive materials. The present study develops a tiered-filter implementation of the signature-based radiation scanning technique, which reduces the number of templates needed. This approach starts by calculating a normalized FOM between signatures from an unknown target and an explosive template through stages or tiers (nitrogen first, then oxygen, then carbon, and finally hydrogen). If the normalized FOM is greater than a specified cut-off value for any of the tiers, the target signatures are considered not to match that specific template and the process is repeated for the next explosive template until all of the relevant templates have been considered. If a target’s signatures match all the tiers of a single template, then the target is assumed to contain an explosive. The tiered filter approach uses eight elements to construct artificial explosive-templates that have the function of representing explosives cluttered with real materials. The feasibility of the artificial template approach to systematically build a library of templates that successfully differentiates explosive targets from inert ones in the presence of clutter and under different geometric configurations was explored. In total, 10 different geometric configurations were simulated and analyzed using the MCNP5 code. For each configuration, 51 different inert materials were used as inert samples and as clutter in front of the explosive cyclonite (RDX). The geometric configurations consisted of different explosive volumes, clutter thicknesses, and distances of the clutter from the neutron source. 
Additionally, an objective function was developed to optimize the parameters that maximize the sensitivity and specificity of the method.
2

Redução do esforço do usuário na configuração da deduplicação de grandes bases de dados / Reducing the user effort to tune large scale deduplication

Dal Bianco, Guilherme January 2014
Deduplication is the task of identifying which objects (e.g., records, texts, documents, etc.) are potentially the same in a given dataset (or datasets). It usually requires user intervention in several stages of the process, mainly to ensure that pairs representing matchings and non-matchings can be determined. This information can be used to help detect other potential duplicate records. When deduplication is applied to very large datasets, the matching quality depends on expert users. The expert users are requested to define threshold values and produce a training set. This intervention requires user knowledge of the noise level of the data and a particular approach to deduplication so that it can be applied to configure the most important stages of the process (e.g. blocking and classification). The main aim of this thesis is to provide solutions to help in tuning the deduplication process in large datasets with a reduced effort from the user, who is only required to label an automatically selected subset of pairs.
To achieve this, we propose a methodology, called FS-Dedup, which incorporates state-of-the-art algorithms in its deduplication core to address high performance issues. Following this, a set of strategies is proposed to assist in setting its parameters, removing most of the detailed configuration concerns from the user. The methodology proposed can be regarded as a layer that is able to identify the specific information requested in the deduplication approach (mainly, threshold values) through pairs that are manually labeled by the user. Moreover, this thesis proposes an approach that enables the selection of an informative set of pairs to produce a reduced training set. The main challenge here is how to select a “representative” set of pairs to configure the deduplication with high matching quality. In this context, the proposed approach incorporates an active learning method with strategies that allow the deduplication to be carried out on large datasets. This approach is integrated with the FS-Dedup methodology to avoid the need for a definition of threshold values in the most important deduplication stages. Finally, exhaustive experiments using both synthetic and real datasets have been conducted to validate the ideas outlined in this thesis. In particular, we demonstrate the ability of our approach to reduce the user effort without degrading the matching quality.
5

An MCNP study of fast neutron interrogation for standoff detection of improvised explosive devices

Heider, Samuel A. January 1900
Master of Science / Department of Mechanical and Nuclear Engineering / William L. Dunn / The signature-based radiation-scanning (SBRS) technique relies on radiation detector responses, called “signatures,” and compares them to “templates” to differentiate targets containing nitrogen-rich explosives from those that do not. This investigation utilizes nine signatures due to inelastic-scatter and prompt-capture gamma rays from hydrogen, carbon, nitrogen, and oxygen (HCNO) as well as two neutron signatures, produced when a target is interrogated with a 14.1 MeV neutron source beam. One hundred and forty-three simulated experiments were conducted using MCNP5. Signatures of 42 targets containing explosive samples (21 of RDX and 21 of Urea Nitrate), and 21 containing inert samples were compared with the signatures of 80 artificial templates through figure-of-merit analysis. A density filter, comparing targets with templates of similar average density, was investigated. Both high and low-density explosives (RDX, 1.8 g cm⁻³, and Urea Nitrate, 0.69 g cm⁻³) were shown to be differentiated from inert materials through use of neutron and gamma-ray signature templates with sensitivity of 90.5% and specificity of 76.2%. Density Groups were identified, in which neutron signature templates, gamma-ray signature templates or the combination of neutron and gamma-ray signature templates were capable of improving inert-explosive differentiation. Figure-of-merit analysis, employing the best Density Group specific templates, differentiated explosive from inert targets with 90.5% sensitivity and specificity of over 85%.
6

Signature-based activity detection based on Bayesian networks acquired from expert knowledge

Fooladvandi, Farzad January 2008
The maritime industry is experiencing one of its longest and fastest periods of growth. Hence, the global maritime surveillance capacity is in a great need of growth as well. The detection of vessel activity is an important objective of the civil security domain. Detecting vessel activity may become problematic if audit data is uncertain. This thesis aims to investigate if Bayesian networks acquired from expert knowledge can detect activities with a signature-based detection approach. For this, a maritime pilot-boat scenario has been identified with a domain expert. Each of the scenario’s activities has been divided up into signatures where each signature relates to a specific Bayesian network information node. The signatures were implemented to find evidences for the Bayesian network information nodes. AIS-data with real world observations have been used for testing, which have shown that it is possible to detect the maritime pilot-boat scenario based on the taken approach.
8

Leveraging PLC Ladder Logic for Signature Based IDS Rule Generation

Richey, Drew Jackson 12 August 2016
Industrial Control Systems (ICS) play a critical part in our world’s economy, supply chain and critical infrastructure. Securing the various types of ICS is of the utmost importance and has been a focus of much research for the last several years. At the heart of many defense in depth strategies is the signature based intrusion detection system (IDS). The signatures that define an IDS determine the effectiveness of the system. Existing methods for IDS signature creation do not leverage the information contained within the PLC ladder logic file. The ladder logic file is a rich source of information about the PLC control system. This thesis describes a method for parsing PLC ladder logic to extract address register information, data types and usage that can be used to better define the normal operation of the control system which will allow for rules to be created to detect abnormal activity.
9

Abstracting and correlating heterogeneous events to detect complex scenarios

Panichprecha, Sorot January 2009
The research presented in this thesis addresses inherent problems in signature-based intrusion detection systems (IDSs) operating in heterogeneous environments. The research proposes a solution to address the difficulties associated with multi-step attack scenario specification and detection for such environments. The research has focused on two distinct problems: the representation of events derived from heterogeneous sources and multi-step attack specification and detection. The first part of the research investigates the application of an event abstraction model to event logs collected from a heterogeneous environment. The event abstraction model comprises a hierarchy of events derived from different log sources such as system audit data, application logs, captured network traffic, and intrusion detection system alerts. Unlike existing event abstraction models where low-level information may be discarded during the abstraction process, the event abstraction model presented in this work preserves all low-level information as well as providing high-level information in the form of abstract events. The event abstraction model presented in this work was designed independently of any particular IDS and thus may be used by any IDS, intrusion forensic tools, or monitoring tools. The second part of the research investigates the use of unification for multi-step attack scenario specification and detection. Multi-step attack scenarios are hard to specify and detect as they often involve the correlation of events from multiple sources which may be affected by time uncertainty. The unification algorithm provides a simple and straightforward scenario matching mechanism by using variable instantiation where variables represent events as defined in the event abstraction model. The third part of the research looks into the solution to address time uncertainty. Clock synchronisation is crucial for detecting multi-step attack scenarios which involve logs from multiple hosts.
Issues involving time uncertainty have been largely neglected by intrusion detection research. The system presented in this research introduces two techniques for addressing time uncertainty issues: clock skew compensation and clock drift modelling using linear regression. An off-line IDS prototype for detecting multi-step attacks has been implemented. The prototype comprises two modules: implementation of the abstract event system architecture (AESA) and of the scenario detection module. The scenario detection module implements our signature language developed based on the Python programming language syntax and the unification-based scenario detection engine. The prototype has been evaluated using a publicly available dataset of real attack traffic and event logs and a synthetic dataset. The distinct features of the public dataset are the fact that it contains multi-step attacks which involve multiple hosts with clock skew and clock drift. These features allow us to demonstrate the application and the advantages of the contributions of this research. All instances of multi-step attacks in the dataset have been correctly identified even though there exists a significant clock skew and drift in the dataset. Future work identified by this research would be to develop a refined unification algorithm suitable for processing streams of events to enable an on-line detection. In terms of time uncertainty, identified future work would be to develop mechanisms which allow automatic clock skew and clock drift identification and correction. The immediate application of the research presented in this thesis is the framework of an off-line IDS which processes events from heterogeneous sources using abstraction and which can detect multi-step attack scenarios which may involve time uncertainty.
10

Intrusion Detection systems : A comparison in configuration and implementation between OSSEC and Snort

Stegeby, Peter January 2023
Hackers keep getting better at gaining unauthorized access to our computers and can avoid some of the most basic intrusion detection systems and firewalls on a standard computer. The gap between attackers and defenders seems to grow as intrusions increase in number every year, costing companies millions of dollars, so the question is posed, how can we protect ourselves?
The research done in this work clearly points to there being things that can be done, which answers the question, how can we detect intrusions? The study has shown that a more advanced intrusion detection system (IDS) can be implemented on home computers (and in businesses). There are many options to choose from, but the chosen IDSs – OSSEC and Snort – can detect security issues on the host computer (or on the network) in real-time by advanced logging management and monitoring. The implementation and usage difficulties of these IDSs are challenging but satisfying, and the configurations are flexible, allowing the IDSs to be installed on a single host or in a larger network. If an easy-to-follow graphical overview of the alerts on your system is what you are looking for then that, and sending e-mails of the alert, is found in the OSSEC IDS. Snort, on the other hand, has easy configurations and flexible rule-writing and the options of sniffing packets on the network. It should be clear that implementing an IDS on your system does not make it impenetrable nor solve all the security issues, but what it will do is give you a better understanding of the threats on your system.
