1.
Comparison of Antibiotic Sensitivity Profiles, Molecular Typing Patterns, and Attribution of Salmonella enterica Serotype Newport in the U.S., 2003-2006
Patel, Nehal Jitendralal, 26 July 2007 (has links)
Salmonella causes gastrointestinal illness in humans. The purpose of this study was to determine the relative contribution of different food commodities to sporadic cases of salmonellosis (attribution analysis) caused by Salmonella Newport (SN), using Pulsed-Field Gel Electrophoresis (PFGE) patterns and antimicrobial susceptibility testing (AST) data submitted by public health laboratories and regulatory agencies from 2003 to 2006. The genetic relationship between isolates from non-human (348) and human (10,848) sources was studied with two distinct hierarchical clustering methods, UPGMA and Ward. Results show that poultry was the largest contributor to human SN infections, followed by tomatoes and beef. Beef was the largest contributing food commodity for infections with the multidrug-resistant (MDR)-AmpC pattern. Results from this pilot study show that PFGE and AST can be useful tools for performing attribution analysis at the national level, and that SN MDR-AmpC patterns are decreasing and appear to be restricted to isolates from animal sources.
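The abstract above pairs PFGE fingerprints with two hierarchical clustering criteria, UPGMA and Ward. As a rough illustration only (not the study's actual pipeline or data), UPGMA corresponds to average linkage in SciPy, and the binary band-presence profiles below are invented:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# Toy binary "PFGE fingerprints": 8 isolates x 12 band positions (invented data)
profiles = rng.integers(0, 2, size=(8, 12)).astype(float)

# Pairwise Euclidean distances between fingerprints (Ward formally assumes
# Euclidean distances; Dice is more common for real band-presence data)
dists = pdist(profiles, metric="euclidean")

# UPGMA is average linkage; Ward uses its own minimum-variance criterion
upgma_tree = linkage(dists, method="average")
ward_tree = linkage(dists, method="ward")

# Cut each dendrogram into three clusters and compare the resulting groupings
upgma_labels = fcluster(upgma_tree, t=3, criterion="maxclust")
ward_labels = fcluster(ward_tree, t=3, criterion="maxclust")
print(upgma_labels)
print(ward_labels)
```

In practice the two criteria can partition the same isolates differently, which is presumably why the study compared them.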
2.
Improving the Robustness of Neural Networks to Adversarial Patch Attacks Using Masking and Attribution Analysis
Mahalder, Atandra, 01 January 2024 (has links) (PDF)
Computer vision algorithms, including image classifiers and object detectors, play a pivotal role in various cyber-physical systems, from facial recognition to self-driving vehicles and security surveillance. However, the emergence of real-world adversarial patches, which can be as simple as stickers, poses a significant threat to the reliability of the AI models used in these systems. To address this challenge, several defense mechanisms, such as PatchGuard, Minority Report, and (De)Randomized Smoothing, have been proposed to enhance the resilience of AI models against such attacks. In this thesis, we introduce a novel framework that integrates masking with attribution analysis to harden AI models against adversarial patch attacks. Attribution analysis identifies the pixels most influential in the model's decision-making. Then, inspired by the (De)Randomized Smoothing defense strategy, we mask these influential pixels. Our experimental findings demonstrate improved robustness against adversarial attacks, at the cost of a slight degradation in clean accuracy.
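A minimal sketch of the masking idea the abstract describes, with an invented linear scorer standing in for the thesis's neural networks and occlusion sensitivity used as a stand-in attribution method:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))    # toy grayscale input (invented)
weights = rng.random((8, 8))  # stand-in for a learned model's parameters

def score(x):
    # Toy classifier "logit": not the thesis's actual model
    return float((x * weights).sum())

# Occlusion-style attribution: a pixel's importance is the score drop
# observed when that single pixel is zeroed out
base = score(image)
attribution = np.empty_like(image)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        occluded = image.copy()
        occluded[i, j] = 0.0
        attribution[i, j] = base - score(occluded)

# Mask the k most influential pixels before re-scoring, as the defense suggests
k = 10
top = np.argpartition(attribution.ravel(), -k)[-k:]
mask = np.ones(image.size, dtype=bool)
mask[top] = False
defended = image * mask.reshape(image.shape)
print(score(defended))
```

A real adversarial patch concentrates influence in a contiguous region, so masking the highest-attribution pixels can neutralize it; the trade-off is the clean-accuracy drop the abstract notes, since benign important pixels get masked too.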