Machine learning models are vulnerable to adversarial attacks that add small perturbations to the input data. Here we model and simulate power flow in a power grid test case and generate adversarial attacks on the resulting measurements in three different ways, in order to compare how the size of an attack and the attacker's level of knowledge of the model affect how often the attack is detected. In the first method the attacker has full knowledge of the model, in the second the attacker only has access to the model's measurements, and in the third the attacker has no knowledge of the model. Comparing the detection rates of these methods under a residual-based detection scheme shows that a data-driven attack, knowing only the measurements, is enough to inject an error without being detected by the detection scheme. Using a linearized version of the state estimation is shown to be insufficient for generating full-knowledge attacks, so further research is needed to compare the performance of the full-knowledge attacks against the data-driven attacks. The attacks generated without any knowledge of the system perform poorly and are easily detected.
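A minimal sketch of the kind of setup the abstract describes, not the thesis code: a linearized (DC) state estimation with a residual-based bad-data check, a full-knowledge attack of the form a = Hc that stays in the column space of the measurement matrix, and a random attack with no model knowledge. The measurement matrix H, noise level, attack vector c, and detection threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized model: z = H x + e, with m measurements and n states.
m, n = 8, 3
H = rng.normal(size=(m, n))          # stand-in for a grid's measurement Jacobian (assumed)
x_true = rng.normal(size=n)          # true state (e.g. voltage angles)
sigma = 0.01                         # measurement noise standard deviation (assumed)
z = H @ x_true + rng.normal(scale=sigma, size=m)

def residual_norm(z, H):
    """Least-squares state estimate and 2-norm of the measurement residual."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

tau = 4 * sigma * np.sqrt(m)         # illustrative residual detection threshold

# Full-knowledge attack: a = H c lies in the column space of H, so the residual
# is (up to noise) unchanged and the injected state error c goes undetected.
c = np.array([0.5, -0.2, 0.1])       # attacker-chosen state error (assumed)
a_stealthy = H @ c
a_blind = rng.normal(scale=0.5, size=m)   # "no knowledge" attack: random perturbation

for name, a in [("no attack", np.zeros(m)),
                ("stealthy a = Hc", a_stealthy),
                ("random attack", a_blind)]:
    r = residual_norm(z + a, H)
    print(f"{name:16s} residual = {r:.4f}  detected = {r > tau}")
```

Running the sketch, the stealthy attack leaves the residual at the noise level while the random attack typically pushes it well above the threshold, which mirrors the comparison the abstract draws between full-knowledge and no-knowledge attacks.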
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-479474 |
Date | January 2022 |
Creators | Larsson, Oscar |
Publisher | Uppsala universitet, Avdelningen för systemteknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf, application/zip |
Rights | info:eu-repo/semantics/openAccess |
Relation | UPTEC F, 1401-5757 ; 22048 |