281 |
Improving the precision of an Intrusion Detection System using Indicators of Compromise: a proof of concept. Lejonqvist, Gisela; Larsson, Oskar. January 2018
The goal of this research is to improve an IDS so that the share of true positives among its alerts is high, allowing an organisation to cut time and cost and use its resources more effectively. The aim was to show that the precision of an intrusion detection system (IDS), in terms of a lower rate of false positives or a higher rate of true alerts, can be improved by parsing indicators of compromise (IOC) to gather information that, combined with system-specific knowledge, provides a solid base for manual fine-tuning of IDS rules. The methodology used is Design Science Research Methodology (DSRM), which is intended for research that aims to answer an existing problem with a new or improved solution. Part of that solution is a proposed process for tuning an arbitrary intrusion detection system. This process, formalised as the Tuned Intrusion Detection System (TIDS) process, was designed during the research work and helped us present and perform validation tests in a structured and robust way. The testbed consisted of a Windows 10 operating system and Snort deployed as a network-based IDS (NIDS). The work was experimental: the IDS rules and tools were evaluated and improved over several iterations. Using recorded traffic from the public CTU-13 dataset, the difference between tuned and un-tuned rules was measured in terms of the precision of the alerts created by the IDS. Our first contribution is that the concept holds: precision can be improved by adding custom rules based on known parameters of the network and features of its traffic, and by disabling rules that are out of scope. The second contribution is the TIDS process itself, designed during the thesis work, which served us well throughout.
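The precision measure used to compare the tuned and un-tuned rule sets can be illustrated with a minimal sketch (this is not code from the thesis; the alert counts below are placeholders for illustration only):

```python
# Illustrative sketch only: computes the precision metric (true alerts / all alerts)
# used to compare a tuned and an un-tuned rule set. The counts are placeholders,
# not results from the thesis.

def alert_precision(true_positives: int, false_positives: int) -> float:
    """Precision = TP / (TP + FP); returns 0.0 if no alerts were raised."""
    total_alerts = true_positives + false_positives
    return true_positives / total_alerts if total_alerts else 0.0

# Hypothetical alert counts from replaying the same capture against two rule sets.
untuned = {"tp": 120, "fp": 880}   # default rule set: many out-of-scope alerts
tuned   = {"tp": 115, "fp": 35}    # custom rules added, out-of-scope rules disabled

print(f"un-tuned precision: {alert_precision(untuned['tp'], untuned['fp']):.2f}")
print(f"tuned precision:    {alert_precision(tuned['tp'], tuned['fp']):.2f}")
```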
|
282 |
Statistical methods to identify differentially methylated regions using Illumina methylation arrays. Zheng, Yuanchao. 08 February 2024
DNA methylation is an epigenetic mechanism that usually occurs at CpG sites in the genome. Both sequencing- and array-based techniques are available to detect methylation patterns. Whole-genome bisulfite sequencing is the most comprehensive but cost-prohibitive approach, while microarrays, such as the Illumina methylation arrays, offer an affordable alternative that assays a fixed set of genomic loci. Differentially methylated regions (DMRs) are genomic regions with specific methylation patterns across multiple CpG sites that are associated with a phenotype. Because methylation at nearby sites tends to be correlated, testing sets of sites rather than individual sites can both increase the power to detect methylation differences and reduce the multiple-testing burden. Several statistical approaches exist for identifying DMRs, and a few prior publications have compared the performance of commonly used DMR methods. However, as far as we know, no comprehensive comparison based on genome-wide simulation studies has been made.
This dissertation provides comprehensive recommendations for DMR analysis based on genome-wide evaluations of existing DMR tools and presents a novel approach that increases the power to identify DMRs with clinical value in genomic research. The second chapter presents genome-wide null simulations comparing five commonly used array-based DMR methods (Bumphunter, comb-p, DMRcate, mCSEA and coMethDMR) and identifies coMethDMR as the only approach that consistently yields appropriate Type I error control. We suggest that a genome-wide evaluation of false-positive (FP) rates is critical for DMR methods. The third chapter develops a novel Principal Component Analysis based DMR method (denoted DMRPC) and demonstrates its ability to identify DMRs across genome-wide methylation arrays while keeping the FP rate controlled at the 0.05 level. Compared to coMethDMR, DMRPC is a robust and powerful DMR tool that can examine more genomic regions and extract signals from low-correlation regions. The fourth chapter applies DMRPC to two “real-world” datasets and identifies novel DMRs associated with several inflammatory markers.
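A minimal sketch of the general idea behind a PCA-based region test, assuming a samples-by-CpGs methylation matrix for one region; this illustrates the concept only and is not the published DMRPC implementation (function and variable names are ours):

```python
# Sketch of a PCA-based region test: summarise the CpGs in a region by their
# first principal component and test that summary against a phenotype.
# Conceptual illustration only; not the DMRPC package.
import numpy as np
from sklearn.decomposition import PCA
import statsmodels.api as sm

def region_pc_test(methylation: np.ndarray, phenotype: np.ndarray) -> float:
    """methylation: samples x CpGs matrix for one region (e.g. M-values).
    Returns a p-value for association between the region's first PC
    and the phenotype."""
    # Standardise CpGs, then extract the first principal component per sample.
    standardised = (methylation - methylation.mean(axis=0)) / methylation.std(axis=0)
    pc1 = PCA(n_components=1).fit_transform(standardised).ravel()
    # Regress the PC summary on the phenotype and report its p-value.
    model = sm.OLS(pc1, sm.add_constant(phenotype)).fit()
    return model.pvalues[1]
```

In a genome-wide analysis, such a test would be applied region by region and the resulting p-values adjusted for multiple testing.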
|
283 |
Planning and the Survival Processing Effect: An Examination of the Proximate Mechanisms. Colyn, Leisha A. 09 April 2014
No description available.
|
284 |
Feeling is Believing? How emotions influence the effectiveness of political fact-checking messages. Weeks, Brian Edward. 14 November 2014
No description available.
|
285 |
UPPFATTNINGAR OM BORTTRÄNGDA MINNEN: En enkätundersökning med psykologer och psykologstudenter (Perceptions of repressed memories: a survey of psychologists and psychology students). Jutterdal, Johannes; Hagelberg, Elin. January 2024
The debate about repressed memories and how traumatic memories are encoded has been going on since the 1980s. The research on the topic is divided, and it is often the extreme views that take up space in the debate. Studies in other countries have examined how psychologists and psychology students view the subject, but research on Swedish psychologists is lacking. The purpose of this study was to examine Swedish psychologists' and psychology students' perceptions of repressed memories using a quantitative cross-sectional design with a digital survey. A convenience sample was used. 201 people participated: 38% (n=77) psychology students, 6% (n=13) psychologists in training (PTP), and 55% (n=111) licensed psychologists. The participants were between 20 and 79 years of age. The gender distribution was as follows: 79% (n=159) women, 19% (n=39) men and 1% (n=3) non-binary/other. T-tests, a one-way ANOVA and two mixed ANOVAs were used to analyse differences in responses depending on theoretical orientation, amount of experience of working with trauma, and whether the participant was a student or a psychologist. The results showed that a majority of psychologists and psychology students believed that memories of traumatic events can be inaccessible for long periods of time. This view was not influenced by theoretical orientation or level of education, but it was influenced by the amount of experience the participants had of working with trauma. Furthermore, the results showed that participants perceive memories of traumatic events differently from other memories. At the same time, the topic is complex and difficult to examine using only a survey. The study shows that opinions are divided within the Swedish psychology profession and that the debate about repressed memories continues. Conclusions from the results are discussed.
|
286 |
New Results on the False Discovery Rate. Liu, Fang. January 2010
The false discovery rate (FDR) introduced by Benjamini and Hochberg (1995) is perhaps the most standard error-controlling measure used in a wide variety of applications involving multiple hypothesis testing. There are two approaches to controlling the FDR: the fixed error rate approach of Benjamini and Hochberg (BH, 1995), in which a rejection region is determined so that the FDR stays below a fixed level, and the estimation-based approach of Storey (2002), in which the FDR is estimated for a fixed rejection region before it is controlled. In this proposal, we concentrate on both approaches and propose new, improved versions of some FDR-controlling procedures available in the literature. A number of adaptive procedures have been put forward, each attempting to improve the BH method by incorporating into it an estimate of the number of true null hypotheses. Among these, the method of Benjamini, Krieger and Yekutieli (2006), the BKY method, has received considerable attention recently. In this proposal, a variant of the BKY method is proposed that uses a different estimate of the number of true null hypotheses and often outperforms the BKY method in terms of FDR control and power. Storey's (2002) estimation-based approach to controlling the FDR was developed from a class of conservatively biased point estimates of the FDR under a mixture model for the underlying p-values and a fixed rejection threshold for each null hypothesis. An alternative class of point estimates of the FDR with uniformly smaller conservative bias is proposed under the same setup. Numerical evidence shows that the mean squared error (MSE) is also often smaller for this new class of estimates. Compared to Storey's (2002), the present class provides a more powerful estimation-based approach to controlling the FDR. / Statistics
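For reference, the two approaches discussed above can be sketched in a few lines; this is a textbook illustration of the BH step-up rule and a Storey-type FDR estimate, not the improved procedures proposed in the dissertation:

```python
# Textbook sketches of the two FDR approaches discussed above; the improved
# procedures proposed in the dissertation are not reproduced here.
import numpy as np

def bh_reject(pvalues: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Benjamini-Hochberg step-up rule: reject the k smallest p-values,
    where k is the largest i with p_(i) <= i * alpha / m."""
    m = len(pvalues)
    order = np.argsort(pvalues)
    sorted_p = pvalues[order]
    thresholds = np.arange(1, m + 1) * alpha / m
    below = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:
        reject[order[: below.max() + 1]] = True
    return reject

def storey_fdr_estimate(pvalues: np.ndarray, t: float, lam: float = 0.5) -> float:
    """Storey-type estimate of the FDR for the fixed rejection region p <= t:
    pi0 is estimated from p-values above lambda, and the expected number of
    false rejections pi0 * m * t is divided by the observed number of rejections."""
    m = len(pvalues)
    pi0 = np.mean(pvalues > lam) / (1 - lam)
    rejections = max(np.sum(pvalues <= t), 1)
    return min(pi0 * m * t / rejections, 1.0)
```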
|
287 |
ROBUST ESTIMATION OF THE PARAMETERS OF g-and-h DISTRIBUTIONS, WITH APPLICATIONS TO OUTLIER DETECTION. Xu, Yihuan. January 2014
The g-and-h distributional family is generated by a relatively simple transformation of the standard normal. By changing the skewness and elongation parameters g and h, the family can approximate a broad spectrum of commonly used distributional shapes, such as the normal, lognormal, Weibull and exponential. Consequently, it is easy to use in simulation studies and has been applied in multiple areas, including risk management, stock return analysis and missing data imputation. The currently available methods to estimate the g-and-h parameters include the letter-value based method (LV), numerical maximum likelihood estimation (NMLE), and moment methods. Although these methods work well when no outliers or contamination are present, they are not resistant to even a moderate number of contaminated observations or outliers, and NMLE is computationally expensive when the sample size is large. In this dissertation a quantile-based least squares (QLS) estimation method is proposed for fitting the parameters of the g-and-h family, and its basic properties are derived. The QLS method is then extended to a robust version (rQLS). Simulation studies compare the QLS and rQLS methods with the LV and NMLE methods for estimating the g-and-h parameters from random samples with and without outliers. In samples without outliers, the QLS and rQLS estimates are comparable to LV and NMLE in terms of bias and standard error; when a moderate number of contaminated observations or outliers is present, rQLS outperforms the non-robust methods. The flexibility of the g-and-h distribution and the robustness of rQLS make it a useful tool in various fields. The boxplot (BP) method has been used for multiple-outlier detection by controlling the some-outside rate, the probability that one or more observations in an outlier-free sample fall into the outlier region. The BP method is distribution dependent: usually the sample is assumed to be normally distributed, but this assumption may not hold in many applications. A robustly estimated g-and-h distribution provides an alternative that avoids distributional assumptions. Simulation studies indicate that the BP method based on a robustly estimated g-and-h distribution identifies a reasonable number of true outliers while controlling the number of false outliers and the some-outside rate, compared to the normal assumption when that assumption is not valid. Another application of the robust g-and-h distribution is as an empirical null distribution in the false discovery rate method (denoted the BH method hereafter). The performance of the BH method depends on the accuracy of the null distribution, and theoretical null distributions are often not valid when many thousands, or even millions, of hypothesis tests are performed simultaneously. An empirical null distribution approach, which estimates the null distribution from the data, has therefore been introduced as a substitute for the current practice of fitting a normal distribution or another member of the exponential family. As with the BP outlier detection method, the robustly estimated g-and-h distribution can serve as an empirical null distribution without any distributional assumptions. Several real microarray datasets are used as illustrations.
The QLS and rQLS methods are useful tools for estimating the g-and-h parameters, rQLS especially, because it noticeably reduces the effect of outliers on the estimates. Robustly estimated g-and-h distributions have multiple applications where distributional assumptions would otherwise be required, such as boxplot outlier detection or the BH method. / Statistics
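The transformation that generates the family, and a quantile-matching objective in the spirit of a quantile-based least-squares fit, can be sketched as follows (an illustration only, not the QLS/rQLS estimators developed in the dissertation; the location and scale parameters a and b are our notation):

```python
# Sketch of the Tukey g-and-h transform and a simple quantile-matching
# objective in the spirit of a quantile-based least-squares fit. Illustration
# of the idea only, not the QLS/rQLS estimators from the dissertation.
import numpy as np
from scipy.stats import norm

def gh_transform(z: np.ndarray, g: float, h: float) -> np.ndarray:
    """Tukey g-and-h transform of standard normal draws or quantiles z."""
    skew = z if g == 0 else (np.exp(g * z) - 1.0) / g
    return skew * np.exp(h * z ** 2 / 2.0)

def quantile_loss(params: np.ndarray, sample: np.ndarray,
                  probs: np.ndarray = np.linspace(0.05, 0.95, 19)) -> float:
    """Sum of squared differences between sample quantiles and the quantiles
    implied by location a, scale b, and shape parameters g, h."""
    a, b, g, h = params
    model_q = a + b * gh_transform(norm.ppf(probs), g, h)
    return float(np.sum((np.quantile(sample, probs) - model_q) ** 2))

# Example: draw from a g-and-h distribution and evaluate the loss at the truth.
rng = np.random.default_rng(0)
sample = 1.0 + 2.0 * gh_transform(rng.standard_normal(5000), g=0.3, h=0.1)
print(quantile_loss(np.array([1.0, 2.0, 0.3, 0.1]), sample))
```

Minimising such an objective over (a, b, g, h), for example with scipy.optimize.minimize, gives a quantile-based fit; a robust variant would down-weight or trim the extreme quantiles.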
|
288 |
Intentionally fabricated autobiographical memories. Justice, L.V.; Morrison, Catriona M.; Conway, M.A. 28 October 2016
Participants generated both autobiographical memories (AMs) that they believed to be true and intentionally fabricated autobiographical memories (IFAMs). Memories were constructed while a concurrent memory load (a random 8-digit sequence) was held in mind or while there was no concurrent load. Amount and accuracy of recall of the concurrent memory load were reliably poorer following generation of IFAMs than following generation of AMs. There was no reliable effect of load on memory generation times; however, IFAMs always took longer to construct than AMs. Finally, replicating previous findings, fewer IFAMs had a field perspective than AMs, IFAMs were less vivid than AMs, and IFAMs contained more motion words (indicative of increased cognitive load). Taken together, these findings show a pattern of systematic differences that mark out IFAMs, and they also show that IFAMs can be identified indirectly by lowered performance on concurrent tasks that increase cognitive load.
|
289 |
Modelling the executive components involved in processing false belief and mechanical/intentional sequences. Tsuji, H.; Mitchell, Peter. 04 June 2020
To understand the executive demands of the false-belief (FB) task relative to an alternative theory-of-mind (or mechanical causality) task, picture sequencing, the present study used path analyses. One hundred and sixty-six children between 3 and 6 years old completed the FB and picture-sequencing tasks, three executive function tasks (updating, inhibition, and shifting), and a receptive language test. The model with the best fit indicated that FB performance had a direct contribution from shifting of attention and inhibitory control, which was independent of the significant contribution made by picture sequencing. This model indicates that FB inference requires more executive processing than picture sequencing, which is used as an alternative task to measure theory of mind.
Statement of contribution
What is already known on this subject? The majority of researchers use the false-belief task to assess mentalizing ability in young children. Sources of information used in various mentalizing tasks require different levels of cognitive demand. Many executive functions (EFs) are involved in children's judgements of false belief.
What does this study add? A statistical model was created to compare the processing requirements of the false-belief and picture-sequencing tasks. The model supported the claim that the false-belief task involves considerably more than just mentalizing. Shifting the focus of attention was an EF found to be a key component of performance in the false-belief task. / Japanese Society for the Promotion of Science: KAKENHI Grant No. 16K04327.
|
290 |
False memory production: effects of self-consistent false information and motivated cognition. Brown, Martha. 06 June 2008
Remembrance of one's personal past and the development of false memories have recently received intense public scrutiny. Based upon self-schema (Markus, 1977) and self-verification (Swann, 1987) theories, two studies were conducted to investigate the hypothesis that a self-schema guides cognitive processing of self-relevant information and thereby influences the construction of a memory that includes false information, more so when that information is self-schema consistent than when it is inconsistent. Study 2 also investigated the hypothesis that the cognitive processing goal of understanding a negative outcome (motivated cognition) would interact with self-consistent expectations to increase the likelihood that a false memory would be created. Self-schematic Type A and Type B individuals (only self-schematic Type A individuals participated in Study 2) took part in a team problem-solving task (the to-be-remembered event) and returned a week later for a "questionnaire" session during which they read a narrative containing self-consistent or self-discrepant false information. In both studies, chi-square analyses showed that participants given self-consistent false information were more likely to report this information on a recall and a recognition test than were participants given self-discrepant false information.
Study 2 included team performance feedback (failure or neutral), presented just before participants read the narrative containing the false information. The purpose of this manipulation was to assess whether motivated cognitive processes moderated the acceptance of self-consistent false information into memory. A loglinear analysis confirmed the expected interaction. The following pattern was obtained for false recall and false self-description (description of team problem-solving behavior using the false-information trait adjectives): Consistent/failure > Consistent/neutral > Discrepant/neutral = Discrepant/failure. Unexpectedly, this pattern was not obtained for the recognition test data.
These findings expand current understanding of the processes that contribute to the production of a false memory and extend the traditional post-event false-information paradigm. The results are discussed in the context of the false memory debate, and future research directions are noted. / Ph. D.
|