381
Anomaly Detection for Control Centers / Gyamfi, Cliff Oduro 06 1900 (has links)
The control center is a critical location in the power system infrastructure. Decisions regarding the power system’s operation and control are often made from the control center. These control actions are made possible through SCADA communication. This capability, however, makes the power system vulnerable to cyber attacks. Most of the decisions taken by the control center rely on the measurement data received from substations. These measurements are used to estimate the state of the power grid. Measurement-based cyber attacks are well documented as a major threat to control center operations. Stealthy false data injection attacks are known to evade bad data detection. Because of the limitations of bad data detection at the control center, many approaches have been explored, especially in the cyber layer, to detect measurement-based attacks. Though helpful, these approaches do not look at the physical layer. This study proposes an anomaly detection system for the control center that operates on the laws of physics. The system also identifies the specific falsified measurement and proposes an estimate of its true value. / United States Department of Energy (DOE)
National Renewable Energy Laboratory (NREL) / Master of Science / Electricity is an essential need for human life. The power grid is one of the most important human inventions and fueled other technological innovations in the industrial revolution. Changing demands in usage have added to its operational complexity. Several modifications have been made to the power grid since its invention to make it robust and operationally safe. Integration of ICT has significantly improved the monitoring and operability of the power grid. Improvements through ICT have also exposed the power grid to cyber vulnerabilities. Since the power system is a critical infrastructure, there is a growing need to keep it secure and operable for the long run. The control center of the power system serves mainly as the decision-making hub of the grid. It operates through a communication link with the various dispersed devices and substations on the grid. This interconnection makes remote control and monitoring decisions possible from the control center. Data sent from the substations to the control center are also used in electricity markets and economic dispatch. The control center is, however, susceptible to cyber-attacks, particularly measurement-based attacks. When attackers launch measurement attacks, their goal is to force control actions from the control center that can make the system unstable. They make use of vulnerabilities in the cyber layer to launch these attacks, injecting falsified data packets through the communication link to replace correct ones upon arrival at the control center. This study looks at an anomaly detection system that can detect falsified measurements at the control center. It also indicates the specific falsified measurements and provides an estimated value for further analysis.
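As background for why stealthy false data injection evades conventional bad data detection, a minimal sketch of weighted least-squares state estimation with a residual test is shown below. The measurement matrix, noise levels, and attack vector are hypothetical, and this is standard textbook machinery rather than the physics-based detector proposed in the thesis.

```python
import numpy as np

# Minimal DC state estimation with a residual-based bad data check.
# H maps bus voltage angles x to measurements z (z = H x + noise).
# A stealthy false data injection adds a = H c, which leaves the
# residual unchanged, which is exactly the weakness the thesis targets.

def wls_estimate(H, z, W):
    """Weighted least-squares state estimate."""
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

def residual_norm(H, z, W):
    x_hat = wls_estimate(H, z, W)
    r = z - H @ x_hat
    return float(r @ W @ r)  # compared against a chi-square threshold

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))           # hypothetical measurement matrix
x_true = np.array([0.1, -0.05, 0.02])
W = np.eye(8) / 0.01**2               # inverse noise covariance
z = H @ x_true + rng.normal(scale=0.01, size=8)

print("clean residual:", residual_norm(H, z, W))

# A naive attack on one measurement is caught by the residual test ...
z_naive = z.copy()
z_naive[0] += 0.5
print("naive attack residual:", residual_norm(H, z_naive, W))

# ... but an attack aligned with the column space of H is not.
c = np.array([0.2, 0.0, 0.0])
z_stealthy = z + H @ c
print("stealthy attack residual:", residual_norm(H, z_stealthy, W))
```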
382
Network Anomaly Detection with Incomplete Audit Data / Patcha, Animesh 04 October 2006 (has links)
With the ever-increasing deployment and usage of gigabit networks, traditional intrusion detection systems based on network anomaly detection have not scaled accordingly. Most, if not all, deployed systems assume the availability of complete and clean data for the purpose of intrusion detection. We contend that this assumption is not valid. Factors like noise in the audit data, mobility of the nodes, and the large amount of data generated by the network make it difficult to build a normal traffic profile of the network for the purpose of anomaly detection.
From this perspective, the leitmotif of the research effort described in this dissertation is the design of a novel intrusion detection system that can detect intrusions with high accuracy even when complete audit data is not available. In this dissertation, we take a holistic approach to anomaly detection to address the threats posed by network-based denial-of-service attacks by proposing improvements in every step of the intrusion detection process. At the data collection phase, we have implemented an adaptive sampling scheme that intelligently samples incoming network data to reduce the volume of traffic sampled while maintaining the intrinsic characteristics of the network traffic. A Bloom-filter-based fast flow aggregation scheme is employed at the data pre-processing stage to further reduce the response time of the anomaly detection scheme. Lastly, this dissertation also proposes an anomaly detection scheme based on the expectation-maximization algorithm that uses the sampled audit data to detect intrusions in the incoming network traffic. / Ph. D.
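As an illustration of expectation-maximization-based anomaly detection on sampled flow data, the sketch below fits a Gaussian mixture (trained via EM) to benign flow features and flags low-likelihood flows. The feature set, mixture size, and threshold are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a Gaussian mixture (estimated with EM) to sampled per-flow
# features and flag flows with unusually low likelihood.
rng = np.random.default_rng(1)

# Hypothetical per-flow features: [packets, bytes, duration] from sampled audit data.
normal_flows = rng.normal(loc=[50, 4e4, 2.0], scale=[10, 8e3, 0.5], size=(5000, 3))
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(normal_flows)

# Threshold at a low percentile of the training log-likelihoods.
threshold = np.percentile(gmm.score_samples(normal_flows), 1)

new_flows = np.vstack([
    rng.normal(loc=[50, 4e4, 2.0], scale=[10, 8e3, 0.5], size=(3, 3)),  # benign
    [[2000, 1e6, 0.1]],                                                 # flood-like
])
scores = gmm.score_samples(new_flows)
print(scores < threshold)   # True marks a suspected anomaly
```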
383
Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data / Fraker, Shannon E. 07 December 2007 (has links)
With the recent increase in the threat of biological terrorism as well as the continual risk of other diseases, the research in public health surveillance and disease monitoring has grown tremendously. There is an abundance of data available in all sorts of forms. Hospitals, federal and local governments, and industries are all collecting data and developing new methods to be used in the detection of anomalies. Many of these methods are developed, applied to a real data set, and incorporated into software. This research, however, takes a different view of the evaluation of these methods.
We feel that there needs to be solid statistical evaluation of proposed methods, no matter the intended area of application. Proof-by-example is not a reasonable sole evaluation criterion, especially for methods that have the potential to greatly impact our lives. For this reason, this research focuses on determining the properties of some of the most common anomaly detection methods. A distinction is made between metrics used for retrospective historical monitoring and those used for prospective on-going monitoring, with the focus on the latter situation. Metrics such as the recurrence interval and time-to-signal measures are therefore the most applicable. These metrics, in conjunction with control charts such as exponentially weighted moving average (EWMA) charts and cumulative sum (CUSUM) charts, are examined. Two new time-to-signal measures, the average time between signal events and the average signal event length, are introduced to better compare the recurrence interval with the time-to-signal properties of surveillance schemes. The relationship commonly thought to exist between the recurrence interval and the average time to signal is shown not to hold once autocorrelation is present in the statistics used for monitoring. This means that closer attention needs to be paid to which of these metrics is reported.
The properties of a commonly applied scan method are also studied carefully in the strictly temporal setting. The counts of incidences are assumed to occur independently over time and to follow a Poisson distribution. Simulations are used to evaluate the method under changes in various parameters. In addition, two methods have been proposed in the literature for calculating the p-value: an adjustment based on the tests for previous time periods, and the use of the recurrence interval with no adjustment for previous tests. The difference between these two methods is also considered. Of interest are how quickly the scan method detects an increase in the incidence rate, how many false alarm events occur, and how long the method continues to signal after the increased threat has passed. These estimates from the scan method are compared to other attribute monitoring methods, mainly the Poisson CUSUM chart. It is shown that the Poisson CUSUM chart is typically faster in detecting the increased incidence rate. / Ph. D.
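For reference, a minimal sketch of a one-sided Poisson CUSUM chart of the kind used for comparison above is shown below; the in-control rate, tuned shift, and decision interval are illustrative values, not those evaluated in the dissertation.

```python
import numpy as np

# One-sided Poisson CUSUM for count surveillance data.
# lam0 is the in-control incidence rate, lam1 the shifted rate the
# chart is tuned to detect, and h is the decision interval.
def poisson_cusum(counts, lam0=2.0, lam1=4.0, h=5.0):
    k = (lam1 - lam0) / (np.log(lam1) - np.log(lam0))  # reference value
    c, signals = 0.0, []
    for t, x in enumerate(counts):
        c = max(0.0, c + x - k)
        if c > h:
            signals.append(t)  # alarm; reset and continue monitoring
            c = 0.0
    return signals

rng = np.random.default_rng(2)
counts = np.concatenate([rng.poisson(2.0, 100),   # in-control period
                         rng.poisson(4.0, 20)])   # increased incidence
print(poisson_cusum(counts))
```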
384
Precision Neutrino Oscillations: Important Considerations for Experiments / Pestes, Rebekah Faith 26 May 2021 (has links)
Currently, we are in an era of neutrino physics in which neutrino oscillation experiments are focusing on precision measurements. In this dissertation, we investigate what is important to consider when performing these precise experiments, especially in light of significant unresolved anomalies. We look at four general categories of considerations: systematic uncertainties, fundamental assumptions, parameterization dependence of interpretations, and Beyond the Standard Model (BSM) scenarios. By performing a simulation using GLoBES, we find that uncertainties in the fine structure of the reactor neutrino spectrum could be vitally important to JUNO, a reactor neutrino experiment being built in China, so a reference spectrum with energy resolution comparable to JUNO's is needed to alleviate this uncertainty. In addition, we determine that, with the fine-structure problem addressed, JUNO can test the existence of a quantum interference term in the oscillation probability. We also argue that the CP-violating phase is highly parameterization-dependent and that the Jarlskog invariant is better suited for quantifying CP violation in neutrino oscillations. Finally, we discover that CP-violating neutrino Non-Standard Interactions (NSIs) could already be affecting the outcomes of T2K and NOνA, two accelerator neutrino experiments, and may explain the tension between these two data sets. / Doctor of Philosophy / Neutrinos are very weakly interacting, fundamental particles that are extremely plentiful in the universe. There are three known types (or flavors) of neutrinos, and the fact that they change flavors (or oscillate) tells us that their mass is not zero, but no experiment has been able to put a lower bound on the smallest neutrino mass. Now that experiments measuring neutrino oscillations have become more precise and some significant anomalies remain unresolved, there are considerations that have become important to investigate. Here, we look at four of these considerations:
• Uncertainties in the finer shape of the energy spectrum of neutrinos coming from a nuclear reactor (Chapter 2): We find that these uncertainties could destroy the ability of the Jiangmen Underground Neutrino Observatory (JUNO) to meet one of its major goals, unless the spectrum is measured at a location close to the reactor with very good energy resolution (comparable to that of JUNO).
• An assumption about quantum mechanics being the foundation of particles and their interactions (Chapter 3): We determine that by heeding our warning in Chapter 2, JUNO will be able to test the existence of the term in the oscillation probability arising out of quantum interference.
• How the neutrino oscillation parameter known as the CP-violating phase depends on the parameterization scheme used for the matrix describing how the flavors mix to make neutrino oscillation possible (Chapter 4): We find that the parameterization dependence is drastic, and if we want to discuss how much CP violation (i.e., a measure of how neutrinos behave differently from their anti-matter counterparts) exists in neutrino oscillations, we should talk about a quantity called the Jarlskog invariant (a standard expression for it is sketched after this list).
• The possibility of interactions existing between neutrinos and other particles that are not part of the Standard Model of Particle Physics, i.e. neutrino Non-Standard Interactions (NSIs) (Chapter 5): We discover that NSIs that are CP-violating can actually explain a current discrepancy between two neutrino oscillation experiments: Tokai to Kamioka Nuclear Decay Experiment (T2K) and NuMI Off-axis νe Appearance (NOνA).
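For context, a standard expression for the Jarlskog invariant in the commonly used PDG parameterization of the mixing matrix is sketched below; this is textbook background rather than a result of the dissertation.

```latex
% Jarlskog invariant; s_{ij} = \sin\theta_{ij}, c_{ij} = \cos\theta_{ij},
% and \delta_{CP} is the CP-violating phase.
J = \operatorname{Im}\!\left( U_{e1}\, U_{\mu 2}\, U_{e2}^{*}\, U_{\mu 1}^{*} \right)
  = s_{12} c_{12}\, s_{23} c_{23}\, s_{13} c_{13}^{2}\, \sin\delta_{CP}
```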
385
Anomaly crowd movement detection using machine learning techniques / Longberg, Victor January 2024 (has links)
This master’s thesis investigates the application of anomaly detection techniques to analyze crowd movements using cell location data, a topic of growing interest in public safety and policymaking. This research uses machine learning algorithms, specifically Isolation Forest and DBSCAN, to identify unusual movement patterns within a large, unlabeled dataset. The study addresses the challenges inherent in processing and analyzing vast amounts of spatial and temporal data through a comprehensive methodology that includes data preprocessing, feature engineering, and optimizing algorithm parameters. The findings highlight the feasibility of employing anomaly detection in real-world scenarios, demonstrating the algorithms’ ability to detect anomalies and offering insights into crowd dynamics.
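As an illustration of the two algorithms named above applied to aggregated movement features, a minimal sketch follows. The per-cell features, parameter values, and synthetic data are assumptions for demonstration, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Score aggregated cell-location features with Isolation Forest and
# cluster them with DBSCAN, treating DBSCAN's noise label (-1) as a
# second anomaly signal.
rng = np.random.default_rng(3)

# Hypothetical per-cell, per-interval features: [device count, mean speed, flow entropy]
X = np.vstack([
    rng.normal(loc=[200, 1.2, 2.5], scale=[30, 0.2, 0.3], size=(2000, 3)),  # routine movement
    rng.normal(loc=[900, 0.3, 0.8], scale=[50, 0.1, 0.2], size=(10, 3)),    # sudden gathering
])
X_scaled = StandardScaler().fit_transform(X)

iso_scores = IsolationForest(n_estimators=200, contamination=0.01,
                             random_state=0).fit(X_scaled).decision_function(X_scaled)
db_labels = DBSCAN(eps=0.8, min_samples=15).fit_predict(X_scaled)

# Flag intervals that both score low under Isolation Forest and fall
# outside any dense DBSCAN cluster.
flagged = np.where((iso_scores < np.percentile(iso_scores, 1)) & (db_labels == -1))[0]
print(flagged)
```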
386
On the Effectiveness of Dimensionality Reduction for Unsupervised Structural Health Monitoring Anomaly Detection / Soleimani-Babakamali, Mohammad Hesam 19 April 2022 (has links)
Dimensionality reduction (DR) techniques enhance data interpretability and reduce space complexity, though at the cost of information loss. Such methods have been prevalent in the Structural Health Monitoring (SHM) anomaly detection literature. While DR is favorable in supervised anomaly detection, where possible novelties are known a priori, its efficacy is less clear in unsupervised detection. In this work, we perform a detailed assessment of the DR performance trade-offs to determine whether the information loss imposed by DR can impact SHM performance for previously unseen novelties. As a basis for our analysis, we rely on an SHM anomaly detection method operating on the input signals' fast Fourier transform (FFT). The FFT is regarded as a raw, frequency-domain feature that allows studying various DR techniques. We design extensive experiments comparing various DR techniques, including neural autoencoder models, to capture the impact exclusively on two SHM benchmark datasets. Results show the loss of information to be more detrimental than the benefits of DR, reducing novelty detection accuracy by up to 60% with autoencoder-based DR. Regularization can alleviate some of these challenges, though its effect is unpredictable. Dimensions carrying substantial vibrational information mostly survive DR; the regularization impact thus suggests that these dimensions are not reliable damage-sensitive features for unseen faults. Consequently, we argue that designing new SHM anomaly detection methods that can work with high-dimensional raw features is a necessary research direction, and we present open challenges and future directions. / M.S. / Structural health monitoring (SHM) aids the timely maintenance of infrastructure, saving human lives and natural resources. Infrastructure will undergo unseen damage in the future. Thus, data-driven SHM techniques for handling unlabeled data (i.e., unsupervised learning) are suitable for real-world usage. Lacking labels and defined data classes, data instances are categorized through similarities, i.e., distances. Still, distance metrics in high-dimensional spaces can become meaningless. As a result, methods that reduce data dimensions are commonly applied, yet at the cost of information loss. Naturally, a trade-off exists between the loss of information and the increased interpretability of low-dimensional spaces induced by dimensionality reduction procedures. This study proposes an unsupervised SHM technique that works with both low- and high-dimensional data to assess that trade-off. Results show the negative impacts of dimensionality reduction to be more severe than its benefits. Developing unsupervised SHM methods with raw data is thus encouraged for real-world applications.
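To make the raw-FFT-versus-reduced-feature comparison concrete, the sketch below runs a simple one-class detector on synthetic vibration signals with and without dimensionality reduction. The signals, the damage model, and the use of PCA with a one-class SVM are illustrative assumptions, not the methods or benchmark datasets used in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

# Compare one-class novelty detection on raw FFT-magnitude features
# versus PCA-reduced features for synthetic vibration signals.
rng = np.random.default_rng(4)
fs, n = 256, 1024

def fft_features(freqs_and_amps, n_signals):
    t = np.arange(n) / fs
    out = []
    for _ in range(n_signals):
        sig = sum(a * np.sin(2 * np.pi * f * t) for f, a in freqs_and_amps)
        sig = sig + rng.normal(scale=0.3, size=n)
        out.append(np.abs(np.fft.rfft(sig)))
    return np.array(out)

healthy = fft_features([(12, 1.0), (37, 0.6)], 300)
damaged = fft_features([(12, 1.0), (41, 0.6)], 50)   # shifted mode mimics damage

detector = OneClassSVM(nu=0.05).fit(healthy)
print("raw FFT, flagged damaged:", np.mean(detector.predict(damaged) == -1))

pca = PCA(n_components=5).fit(healthy)
detector_dr = OneClassSVM(nu=0.05).fit(pca.transform(healthy))
print("PCA-reduced, flagged damaged:",
      np.mean(detector_dr.predict(pca.transform(damaged)) == -1))
```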
387
ANOMALY DETECTION USING MACHINE LEARNING FOR INTRUSION DETECTION / Vaishnavi Rudraraju (18431880) 02 May 2024 (has links)
This thesis examines machine learning approaches for anomaly detection in network security, particularly focusing on intrusion detection using TCP and UDP protocols. It uses logistic regression models to effectively distinguish between normal and abnormal network actions, demonstrating a strong ability to detect possible security concerns. The study uses the UNSW-NB15 dataset for model validation, allowing a thorough evaluation of the models' capacity to detect anomalies in real-world network scenarios. The UNSW-NB15 dataset is a comprehensive network attack dataset frequently used in research to evaluate intrusion detection systems and anomaly detection algorithms because of its realistic attack scenarios and various network activities.
Further investigation is carried out using a Multi-Task Neural Network built for binary and multi-class classification tasks. This method allows in-depth study of network data, making it easier to identify potential threats. The model is fine-tuned during successive training epochs, focusing on validation measures to ensure its generalizability. The thesis also applies early stopping mechanisms to enhance the ML model, which helps optimize the training process, reduces the risk of overfitting, and improves the model's performance on new, unseen data.
This thesis also uses blockchain technology to track model performance indicators, a novel strategy that improves data integrity and reliability. This blockchain-based logging system keeps an immutable record of the models' performance over time, which helps to build a transparent and verifiable anomaly detection framework.
In summary, this research enhances machine learning approaches for network anomaly detection. It proposes scalable and effective approaches for early detection and mitigation of network intrusions, ultimately improving the security posture of network systems.
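As a small illustration of the binary normal-versus-attack setup described above, a hedged sketch follows. The flow features are synthetic stand-ins rather than actual UNSW-NB15 columns, and the early-stopping variant shown is only analogous to the epoch-level early stopping applied to the thesis's neural network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Binary normal/attack classification on placeholder flow features.
rng = np.random.default_rng(5)
normal = rng.normal(loc=[500, 0.2, 20], scale=[100, 0.05, 5], size=(4000, 3))
attack = rng.normal(loc=[1500, 0.8, 90], scale=[300, 0.10, 20], size=(800, 3))
X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Early stopping on a held-out validation fraction, analogous in spirit
# to epoch-level early stopping for a neural network.
sgd = SGDClassifier(loss="log_loss", early_stopping=True, validation_fraction=0.2,
                    n_iter_no_change=5, random_state=0).fit(X_tr, y_tr)
print("SGD accuracy:", sgd.score(X_te, y_te))
```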
388
Anomaly Detection Through System and Program Behavior Modeling / Xu, Kui 15 December 2014 (has links)
Various vulnerabilities in software applications become easy targets for attackers. A constant trend in the evolution of advanced modern exploits is their growing sophistication in stealthy attacks. Code-reuse attacks such as return-oriented programming allow intruders to execute mal-intended instruction sequences on a victim machine without injecting external code. Successful exploitation leads to hijacked applications or the download of malicious software (drive-by download attack), which usually happens without the notice or permission of users.
In this dissertation, we address the problem of host-based system anomaly detection, specifically by predicting expected behaviors of programs and detecting run-time deviations and anomalies. We first introduce an approach for detecting the drive-by download attack, one of the major vectors for malware infection. Our tool enforces the dependencies between user actions and system events, such as file-system access and process execution. It can be used to provide real-time protection of a personal computer, as well as to diagnose and evaluate untrusted websites for forensic purposes. We perform extensive experimental evaluation, including a user study with 21 participants, thousands of legitimate websites (for testing false alarms), 84 malicious websites in the wild, and lab-reproduced exploits. Our solution demonstrates a usable host-based framework for controlling and enforcing access to system resources.
Secondly, we present a new anomaly-based detection technique that probabilistically models and learns a program’s control flows for high-precision behavioral reasoning and monitoring. Existing solutions suffer either from incomplete behavioral modeling (dynamic models) or from overestimating the likelihood of call occurrences (static models).
We introduce a new probabilistic anomaly detection method for modeling program behaviors. Its uniqueness is the ability to quantify the static control flow in programs and to integrate the control flow information into probabilistic machine learning algorithms. The advantage of our technique is significantly improved detection accuracy: we observed 11- to 28-fold improvements in detection accuracy compared to state-of-the-art HMM-based anomaly models. We further integrate context information into our detection model, which achieves both strong flow-sensitivity and context-sensitivity. Our context-sensitive approach gives, on average, over a 10-fold improvement for system call monitoring, and three orders of magnitude for library call monitoring, over existing regular HMM methods.
Evaluated with a large number of program traces and real-world exploits, our findings confirm that probabilistic modeling of program dependences provides a significant source of behavior information for building high-precision models for real-time system monitoring. Abnormal traces (obtained by reproducing exploits and by synthesizing abnormal traces) can be well distinguished from normal traces by our model. / Ph. D.
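To ground the discussion, here is a minimal sketch of the classic sliding-window (n-gram) baseline over system call traces that detection models of this kind are typically measured against. The call names, traces, and scoring rule are illustrative assumptions, not the probabilistic control-flow model proposed in the dissertation.

```python
# A sliding-window (n-gram) baseline: learn the set of short system-call
# windows seen in normal traces, then score a new trace by the fraction
# of windows never seen in training.
def ngrams(trace, n=3):
    return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

def train(normal_traces, n=3):
    normal = set()
    for t in normal_traces:
        normal |= ngrams(t, n)
    return normal

def anomaly_score(trace, normal, n=3):
    grams = ngrams(trace, n)
    unseen = sum(1 for g in grams if g not in normal)
    return unseen / max(len(grams), 1)   # fraction of never-seen windows

normal_traces = [
    ["open", "read", "write", "close"],
    ["open", "read", "read", "write", "close"],
    ["stat", "open", "read", "close"],
]
model = train(normal_traces)

print(anomaly_score(["open", "read", "write", "close"], model))       # low
print(anomaly_score(["open", "mmap", "mprotect", "execve"], model))   # high: exploit-like
```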
389
Two Essays in Finance: Momentum Loses its Momentum, and Venture Capital Liquidity Pressure / Bhattacharya, Debarati 01 April 2014 (has links)
My dissertation consists of two papers, one in the area of investment and the second in the area of corporate finance. The first paper examines the robustness of momentum returns in the US stock market over the period 1965 to 2012. We find that momentum profits have become insignificant since the late 1990s, partially driven by a pronounced increase in the volatility of momentum profits in the last 14 years. Investigations of momentum profits in high- and low-volatility months address the concern that unprecedented levels of market volatility in this period rendered the momentum strategy unprofitable. Past returns can no longer explain the cross-sectional variation in stock returns, even following up markets. We suggest three possible explanations for the declining momentum profits: uncovering of the anomaly by investors, a decline in the risk premium on a macroeconomic factor (the growth rate of industrial production in particular), and a relative improvement in market efficiency.
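For readers unfamiliar with how such momentum profits are constructed, a minimal sketch of a standard 12-2 winner-minus-loser sort follows. The return data are random placeholders and the decile cutoffs are illustrative, so this is generic background rather than the paper's exact methodology.

```python
import numpy as np
import pandas as pd

# Rank stocks on cumulative returns over months t-12 through t-2, then
# go long the top decile and short the bottom decile in month t.
rng = np.random.default_rng(6)
rets = pd.DataFrame(rng.normal(0.01, 0.08, size=(120, 200)),
                    columns=[f"stock_{i}" for i in range(200)])

# 11-month formation return, skipping the most recent month before holding.
formation = ((1 + rets).rolling(11).apply(np.prod, raw=True) - 1).shift(2)

ranks = formation.rank(axis=1, pct=True)
winners, losers = ranks >= 0.9, ranks <= 0.1
wml = rets[winners].mean(axis=1) - rets[losers].mean(axis=1)  # winner-minus-loser return
print("mean monthly WML return:", wml.dropna().mean())
```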
We study the impact of venture capital (VC) funds' liquidity concerns on the timing and outcome of their portfolio firms' exit events. We find that VC funds approaching the end of their lifespan are more likely to exit during cold exit market conditions. Such late exits are also less likely to be via initial public offerings (IPOs). A one-standard-deviation increase in the age of a VC fund at the time of the exit event is associated with a 5-percentage-point decline in the probability of an IPO versus a trade sale, from an unconditional probability of roughly 30%. Several tests indicate that the decline in IPOs with VC fund age is not caused by lower portfolio firm quality. Focusing on the aftermath of IPOs, VC-backed firms experience significantly larger trading volume and lower stock returns around lock-up expirations if they are backed by older funds, and this lock-up effect is amplified if there are multiple VC firms approaching the end of their lifespan. Altogether, our results suggest that the exit process is strongly influenced by VCs' liquidity considerations. / Ph. D.
390
A Deep Learning Approach to Predict Accident Occurrence Based on Traffic Dynamics / Khaghani, Farnaz 05 1900 (has links)
Traffic accidents are a major concern for traffic safety; 1.25 million deaths are reported each year. Hence, it is crucial to have access to real-time data and to rapidly detect or predict accidents. Accurately predicting the occurrence of a highway car accident any significant length of time into the future is not feasible, since the vast majority of crashes occur due to unpredictable human negligence and/or error. However, rapid traffic incident detection could reduce incident-related congestion and secondary crashes, alleviate the waste of vehicles’ fuel and passengers’ time, and provide appropriate information for emergency response and field operations. While most previously proposed techniques focus on predicting the number of accidents in a certain region, the problem of predicting accident occurrence, or rapidly detecting accidents, has been little studied. To address this gap, we propose a deep learning approach and build a deep neural network model based on long short-term memory (LSTM). We apply it to forecast the expected speed values on freeway links and identify anomalies as potential accident occurrences. Several detailed features, such as weather, traffic speed, and traffic flow at upstream and downstream points, are extracted from big datasets. We assess the proposed approach on a traffic dataset from Sacramento, California. The experimental results demonstrate the potential of the proposed approach in identifying anomalies in speed values and matching them with accidents in the same area. We show that this approach can achieve a high rate of rapid accident detection and be implemented in real-time traveler information or emergency management systems. / M.S. / Rapid traffic accident detection/prediction is essential for scaling down non-recurrent congestion caused by traffic accidents, avoiding secondary accidents, and accelerating emergency system responses. In this study, we propose a framework that uses large-scale historical traffic speed and traffic flow data along with the relevant weather information to obtain robust traffic patterns. The predicted traffic patterns can be coupled with the real traffic data to detect anomalous behavior that often results in traffic incidents on the roadways. Our framework consists of two major steps. First, we estimate the speed values of traffic at each point based on the historical speed and flow values of locations before and after each point on the roadway. Second, we compare the estimated values with the actual ones and flag those that differ significantly as anomalies. The anomaly points are the potential points and times at which an accident occurs and causes a change in the normal behavior of the roadways. Our study shows the potential of the approach in detecting accidents while exhibiting promising performance in detecting the accident occurrence at a time close to the actual time of occurrence.
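The sketch below illustrates the general idea of forecasting link speed with an LSTM and flagging large forecast residuals as candidate incidents. The speed series, window length, and threshold are synthetic placeholders, not the Sacramento data or the thesis's model.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Forecast link speed one step ahead with an LSTM; a large gap between
# the forecast and the observed speed marks a candidate incident.
rng = np.random.default_rng(7)
speed = 65 + 5 * np.sin(np.arange(2000) * 2 * np.pi / 288) + rng.normal(0, 1.5, 2000)

window = 12  # one hour of history at 5-minute resolution
X = np.array([speed[i:i + window] for i in range(len(speed) - window)])[..., None]
y = speed[window:]

model = Sequential([Input(shape=(window, 1)), LSTM(32), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# An abrupt observed slowdown shows up as a large forecast residual.
recent = speed[-window:]
forecast = model.predict(recent[None, :, None], verbose=0)[0, 0]
observed = 25.0  # e.g., a crash-induced speed drop
print("forecast:", forecast, "observed:", observed,
      "anomaly:", abs(forecast - observed) > 15)
```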