11 |
Security Analysis and Improvement Model for Web-based Applications. Wang, Yong. 14 January 2010.
Today the web has become a major conduit for information. As the World Wide Web's popularity continues to increase, information security on the web has become a growing concern. Web information security concerns availability, confidentiality, and data integrity. According to reports from http://www.securityfocus.com in May 2006, operating systems account for 9% of reported vulnerabilities, web-based software systems for 61%, and other applications for 30%.
In this dissertation, I present a security analysis model based on the Markov Process Model. Risk analysis is conducted using the fuzzy logic method and information entropy theory. In a web-based application system, security risk depends chiefly on the current states of the software and hardware systems and is independent of the system's past states. Therefore, web-based applications can be approximately modeled by the Markov Process Model. A web-based application can be conceptually expressed as a Markov Chain whose state space consists of the discrete states (web_client_good; web_server_good, web_server_vulnerable, web_server_attacked, web_server_security_failed; database_server_good, database_server_vulnerable, database_server_attacked, database_server_security_failed). The vulnerable behavior and system response of web-based applications are analyzed in this dissertation. The analyses focus on functional availability-related aspects: the probability of reaching a particular security-failed state and the mean time to security failure of a system. The vulnerability risk index is classified into three levels as an indicator of the level of security (low level, high level, and failed level). An illustrative application example is provided. As the second objective of this dissertation, I propose a security improvement model for web-based applications using GeoIP services and formal methods. In the security improvement model, web access is authenticated through role-based access control, using user logins, remote IP addresses, and physical locations as subject credentials, combined with the requested objects and privilege modes. Access control algorithms are developed for subjects, objects, and access privileges. A secure implementation architecture is presented. In summary, the dissertation develops a security analysis model and a security improvement model for web-based applications. Future work will address validation of the Markov Process Model once security data become easier to collect, and the security improvement model will be evaluated with respect to performance.
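As a rough illustration of the kind of availability analysis described above, the sketch below computes the probability of absorption into a security-failed state and the mean time to security failure for a small discrete-time Markov chain over hypothetical web-server states. The states and transition probabilities are invented for illustration and are not taken from the dissertation.

```python
import numpy as np

# Hypothetical discrete-time Markov chain over web-server states
# (transition probabilities are illustrative, not from the dissertation).
states = ["good", "vulnerable", "attacked", "security_failed"]
P = np.array([
    [0.90, 0.10, 0.00, 0.00],   # good
    [0.30, 0.50, 0.20, 0.00],   # vulnerable
    [0.10, 0.20, 0.50, 0.20],   # attacked
    [0.00, 0.00, 0.00, 1.00],   # security_failed (absorbing)
])

transient = [0, 1, 2]                  # indices of transient states
absorbing = [3]                        # index of the absorbing failure state
Q = P[np.ix_(transient, transient)]    # transient-to-transient block
R = P[np.ix_(transient, absorbing)]    # transient-to-absorbing block

# Fundamental matrix N = (I - Q)^-1 of the absorbing chain.
N = np.linalg.inv(np.eye(len(transient)) - Q)

# Probability of eventually reaching the security-failed state from each
# transient state (here 1.0, since it is the only absorbing state).
absorption_prob = N @ R

# Mean number of steps until security failure from each transient state.
mean_time_to_failure = N @ np.ones(len(transient))

for i, s in enumerate(states[:3]):
    print(f"from {s}: P(fail) = {absorption_prob[i, 0]:.3f}, "
          f"mean steps to failure = {mean_time_to_failure[i]:.1f}")
```

The same fundamental-matrix calculation extends directly to a larger state space that also includes database-server states.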
|
12 |
Scalable and adaptable security modelling and analysis. Hong, Jin Bum. January 2015.
Modern networked systems are so complex that assessing their security is a difficult task. Security models, which can evaluate the complex relationships between network components, are widely used to analyse the security of these systems. Security models can be generated by identifying vulnerabilities, threats (e.g., cyber attacks), network configurations, and the reachability of network components. These components are then combined into a single model to evaluate how an attacker may penetrate the networked system. Further, countermeasures can be enforced to minimise cyber attacks based on the security analysis. However, modern networked systems are becoming large and dynamic (e.g., Cloud Computing systems). As a result, existing security models suffer from a scalability problem: it becomes infeasible to use them for modern networked systems that contain hundreds or thousands of hosts and vulnerabilities. Moreover, the dynamic nature of modern networked systems requires responsive updates to the security model to monitor how changes affect security, but existing security models lack the capabilities to manage such changes efficiently. In addition, existing security models do not provide functionalities to capture and analyse the security of unknown attacks, whose combined effects with known attacks can create unforeseen attack scenarios that may not be detected or mitigated. Therefore, the three goals of this thesis are to (i) develop security modelling and analysis methods that scale to a large number of network components and adapt to changes in the networked system; (ii) develop efficient security assessment methods to formulate countermeasures; and (iii) develop models and metrics to incorporate and assess the security of unknown attacks.
A lifecycle of security models is introduced in this thesis to concisely describe the performance and functionalities of modern security models. The five phases of the lifecycle are: (1) Preprocessing, (2) Generation, (3) Representation, (4) Evaluation, and (5) Modification.
To achieve goal (i), a hierarchical security model is developed to reduce the computational cost of security assessment while retaining all security information, with each layer capturing different security information. A comparative analysis is then presented to show the scalability and adaptability of security models. The complexity analysis shows that the hierarchical security model has better or equivalent complexity in all phases of the lifecycle compared with existing security models, while the performance analysis shows that it is in fact much more scalable in practical network scenarios.
To achieve goal (ii), security assessment methods based on importance measures are developed. Network centrality measures are used to identify important hosts in the networked system, and security metrics are used to identify important vulnerabilities in each host. New network centrality measures are also developed to address the inaccuracy of existing measures when attack scenarios involve attackers located inside the networked system. Important hosts and vulnerabilities are identified using efficient algorithms with polynomial time complexity, and experiments show that the accuracy of these algorithms is nearly equivalent to that of the naive method, which has exponential complexity.
To achieve goal (iii), unknown attacks are incorporated into the hierarchical security model and the combined effects of both known and unknown attacks are analysed. Algorithms that take into account all possible attack scenarios associated with unknown attacks are used to identify significant hosts and vulnerabilities. Approximation algorithms based on dynamic programming and greedy algorithms are also developed to improve performance. Mitigation strategies to minimise the effects of unknown attacks are formulated on the basis of the significant hosts and vulnerabilities identified in the analysis. Results show that such mitigation strategies can significantly reduce system risk compared with randomly applied mitigations.
In summary, the contributions of this thesis are: (1) the development and evaluation of the hierarchical security model to enhance the scalability and adaptability of security modelling and analysis; (2) a comparative analysis of security models taking scalability and adaptability into account; (3) the development of security assessment methods based on importance measures that identify important hosts and vulnerabilities in the networked system, together with an evaluation of their efficiency in terms of accuracy and performance; and (4) the development of security analysis that takes unknown attacks into account, evaluating the combined effects of both known and unknown attacks.
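As a loose illustration of one assessment idea summarised above (ranking important hosts with network centrality measures), the sketch below builds a small hypothetical reachability graph with networkx and ranks hosts by betweenness centrality. The topology and the use of plain betweenness centrality are assumptions; they are not the thesis's hierarchical model or its new centrality measures.

```python
import networkx as nx

# Hypothetical host-level reachability graph (a directed edge means an
# attacker can pivot from one host to another); topology is invented.
G = nx.DiGraph()
G.add_edges_from([
    ("internet", "web01"), ("internet", "web02"),
    ("web01", "app01"), ("web02", "app01"),
    ("app01", "db01"), ("app01", "db02"),
    ("web01", "admin01"), ("admin01", "db01"),
])

# Rank hosts by betweenness centrality: hosts that sit on many attack paths
# between the entry point and the targets are treated as important.
centrality = nx.betweenness_centrality(G)
ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)

for host, score in ranked:
    print(f"{host:10s} {score:.3f}")

# A simple patch-priority heuristic: harden the most central hosts first.
top_hosts = [host for host, _ in ranked[:2] if host != "internet"]
print("patch first:", top_hosts)
```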
|
13 |
Privacy preservation in data mining through noise addition. Islam, Md Zahidul. January 2008.
Research Doctorate - Doctor of Philosophy (PhD) / Due to advances in information processing technology and storage capacity, huge amounts of data are now collected for various data analyses. Data mining techniques, such as classification, are often applied to these data to extract hidden information. During the data mining process the data are exposed to several parties, and such exposure potentially leads to breaches of individual privacy. This thesis presents a comprehensive noise addition technique for protecting individual privacy in a data set used for classification, while maintaining data quality. We add noise to all attributes, both numerical and categorical, and to both class and non-class attributes, in such a way that the original patterns are preserved in the perturbed data set. Our technique is also capable of incorporating previously proposed noise addition techniques that maintain the statistical parameters of the data set, including correlations among attributes. Thus the perturbed data set may be used not only for classification but also for statistical analysis. Our proposal has two main advantages. Firstly, as suggested by our experimental results, the perturbed data set maintains the same or very similar patterns as the original data set, as well as the correlations among attributes. While there are some noise addition techniques that maintain the statistical parameters of the data set, to the best of our knowledge this is the first comprehensive technique that preserves the patterns and thus removes the so-called Data Mining Bias from the perturbed data set. Secondly, re-identification of the original records depends directly on the amount of noise added and, in general, can be made arbitrarily hard while still preserving the original patterns in the data set. The only exception is the case where an intruder knows enough about a record to learn the confidential class value by applying the classifier; however, this is always possible, even when the original record has not been used in the training data set. In other words, provided that enough noise is added, our technique makes the records from the training set as safe as any other previously unseen records of the same kind. In addition to the above contribution, this thesis also explores the suitability of prediction accuracy as a sole indicator of data quality, and proposes a technique for clustering both categorical values and records containing such values.
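A highly simplified sketch of the general idea (not the thesis's own comprehensive technique): add zero-mean Gaussian noise to numerical attributes and randomly flip a small fraction of categorical and class values, then check crudely that the perturbed data still show similar patterns. The toy data set, noise levels, and checks are all assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy data set: two numerical attributes, one categorical attribute, one class.
n = 1000
df = pd.DataFrame({
    "income": rng.normal(50_000, 12_000, n),
    "age": rng.normal(40, 10, n),
    "region": rng.choice(["north", "south", "east", "west"], n),
})
df["class"] = (df["income"] + 800 * (df["region"] == "north") > 52_000).astype(int)

def perturb(data, num_noise=0.10, cat_flip=0.05, class_flip=0.02):
    """Return a noise-added copy: Gaussian noise on numerical attributes,
    random flips on categorical and class attributes (illustrative only)."""
    out = data.copy()
    for col in ["income", "age"]:
        sd = out[col].std()
        out[col] = out[col] + rng.normal(0, num_noise * sd, len(out))
    flip = rng.random(len(out)) < cat_flip
    out.loc[flip, "region"] = rng.choice(["north", "south", "east", "west"], flip.sum())
    flip_c = rng.random(len(out)) < class_flip
    out.loc[flip_c, "class"] = 1 - out.loc[flip_c, "class"]
    return out

perturbed = perturb(df)

# Crude pattern-preservation check: compare class balance and a key correlation.
print("class balance:", df["class"].mean(), "vs", perturbed["class"].mean())
print("corr(income, class):",
      df["income"].corr(df["class"]).round(3),
      "vs", perturbed["income"].corr(perturbed["class"]).round(3))
```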
|
14 |
A Method for Analyzing Security of SOA-based Systems. Lu, Qifei; Wang, Zhishun. January 2010.
SOA-based systems offer a high degree of flexibility and interoperability. However, securing SOA-based applications is still a challenge. Although some related techniques have been proposed and presented in academia and industry, it is still difficult to check the security aspect of SOA quality from an architectural view. In this thesis project, a method for security analysis in SOA is introduced and investigated. The method is intended for analyzing the security of SOA-based systems at the architecture level. To demonstrate the method, a prototype supporting it is introduced and implemented, and both the method and the prototype are evaluated based on the Technology Acceptance Model. The evaluation results show that the prototype supporting the method is a promising inspection tool for detecting software vulnerabilities.
|
15 |
Design and Analysis of Self-protection: Adaptive Security for Software-Intensive Systems. Skandylas, Charilaos. January 2020.
Today's software landscape features a high degree of complexity, frequent changes in requirements and stakeholder goals, and uncertainty. Uncertainty and high complexity imply a threat landscape where cybersecurity attacks are a common occurrence and their consequences are often severe. Self-adaptive systems have been proposed to mitigate this complexity and frequent degree of change by adapting at run-time to deal with situations not known at design time. They are, however, not immune to attacks, as they themselves suffer from high degrees of complexity and uncertainty. Therefore, systems that can dynamically defend themselves from adversaries are required. Such systems are called self-protecting systems and aim to identify, analyse, and mitigate threats autonomously. This thesis contributes two approaches towards the goal of providing systems with self-protection capabilities. The first approach aims to enhance the security of architecture-based self-adaptive systems and equip them with (proactive) self-protection capabilities that reduce the exposed attack surface. We target systems where information about the system's components and adaptation decisions is available, and control over its adaptation is also possible. We formally model the security of the system and provide two methods to analyze it that allow us to rank adaptations by their security level: a method based on quantitative risk assessment and a method based on probabilistic verification. The results indicate an improvement in system security when either solution is employed; however, only the second method can provide self-protecting capabilities. We have identified a direct relationship between security and performance overhead, i.e., higher security guarantees impose correspondingly higher performance overhead. The second approach targets open decentralized systems, where we have limited information about, and control over, the system entities. We therefore employ decentralized information flow control mechanisms to enforce security by controlling interactions among the system elements. We extend a classical decentralized information flow control model by incorporating trust and adding adaptation capabilities that allow the system to identify security threats and self-organize to maximize the average trust between system entities. We arrange the entities of the system in trust hierarchies that enforce security policies among their elements and can mitigate security issues raised by openness and by uncertainty in the context and environment, without the need for a trusted central controller. The experimental results show that a reasonable level of trust can be achieved while confidentiality and integrity are enforced with a low impact on the throughput and latency of messages exchanged in the system.
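To give a flavour of the decentralized information flow control idea in the second approach, the sketch below implements a minimal label model in which a message may flow to a receiver only if the receiver holds all of the message's secrecy tags and the sender's trust score meets the receiver's threshold. The tag model, trust scores, and thresholds are simplifications invented here, not the thesis's actual trust-hierarchy mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    tags: set = field(default_factory=set)   # secrecy tags the entity may read
    trust: float = 0.5                        # trust score in [0, 1]
    min_peer_trust: float = 0.3               # minimum trust required of senders

@dataclass
class Message:
    sender: Entity
    secrecy: set                              # tags protecting the message

def may_flow(msg: Message, receiver: Entity) -> bool:
    """Allow the flow only if the receiver is cleared for every secrecy tag
    and trusts the sender enough (simplified DIFC-plus-trust check)."""
    cleared = msg.secrecy <= receiver.tags
    trusted = msg.sender.trust >= receiver.min_peer_trust
    return cleared and trusted

alice = Entity("alice", tags={"medical", "billing"}, trust=0.8)
bob = Entity("bob", tags={"billing"}, trust=0.4)
mallory = Entity("mallory", tags={"medical", "billing"}, trust=0.1)

record = Message(sender=alice, secrecy={"medical"})
invoice = Message(sender=mallory, secrecy={"billing"})

print(may_flow(record, bob))      # False: bob lacks the "medical" tag
print(may_flow(record, alice))    # True: cleared and sender trusted
print(may_flow(invoice, bob))     # False: mallory's trust is below bob's threshold
```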
|
16 |
Security Price Forecasting. Nebbia, Ralph J. 01 August 1972.
The purpose of this paper is to develop a new and practical technique for improving the art of forecasting security price movements. The desire to forecast stock market fluctuations has led many analysts to employ different predictive tools.
|
17 |
Security Properties of Virtual Remotes and Spooking their Violations. Joshua David Oetting Majors. 18 April 2024.
<p dir="ltr">As Smart TV devices become more prevalent in our lives, it becomes increasingly important to evaluate the security of these devices. In addition to a smart and connected ecosystem through apps, Smart TV devices expose a WiFi remote protocol, that provides a virtual remote capability and allows a WiFi enabled device (e.g. a Smartphone) to control the Smart TV. The WiFi remote protocol might pose certain security risks that are not present in traditional TVs. In this paper, we assess the security of WiFi remote protocols by first identifying the desired security properties so that we achieve the same level of security as in traditional TVs. Our analysis of four popular Smart TV platforms, Android TV, Amazon FireOS, Roku OS, and WebOS (for LG TVs), revealed that <i>all these platforms violate one or more of the identified security properties</i>. To demonstrate the impact of these flaws, we develop Spook, which uses one of the commonly violated properties of a secure WiFi remote protocol to pair an Android mobile as a software remote to an Android TV. Subsequently, we hijack the Android TV device through the device debugger, enabling complete remote control of the device. All our findings have been communicated to the corresponding vendors. Google <i>acknowledged our findings</i> as a security vulnerability, assigned it a CVE, and released patches to the Android TV OS to partially mitigate the attack. We argue that these patches provide a stopgap solution without ensuring that WiFi remote protocol has all the desired security properties. We design and implement a WiFi remote protocol in the Android ecosystem using ARM TrustZone. Our evaluation shows that the proposed defense satisfies all the security properties and ensures that we have the flexibility of virtual remote without compromising security.</p>
|
18 |
Neural Networks: Building a Better Index Fund. Sacks, Maxwell. 01 January 2017.
Big data has become a rapidly growing field among firms in the financial sector, and many companies and researchers have begun applying machine learning methods to sift through large volumes of data. From these data, investment management firms have attempted to automate investment strategies, some successful and some unsuccessful. This paper investigates an investment strategy that uses a deep neural network to determine whether the stocks picked by the network will outperform or underperform the Russell 2000.
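A minimal sketch of the general setup (the actual features, architecture, and data in the paper are not specified here): train a small feed-forward network on per-stock features to classify whether a stock outperforms a benchmark such as the Russell 2000, using synthetic data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic per-stock features (e.g. momentum, value, size proxies) and a label
# indicating whether the stock outperformed the benchmark; purely illustrative.
n_stocks = 2000
X = rng.normal(size=(n_stocks, 5))
signal = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=1.0, size=n_stocks)
y = (signal > 0).astype(int)          # 1 = outperformed the benchmark

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Small feed-forward network; two hidden layers is an arbitrary choice here.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("out-of-sample accuracy:", round(model.score(X_test, y_test), 3))
```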
|
19 |
Momentum, Nonlinear Price Discovery and Asymmetric Spillover: Sovereign Credit Risk and Equity Markets of Emerging Countries and. Ngene, Geoffrey M. 18 May 2012.
In Chapter 1, I hypothesize that agents respond differently to changes in sovereign credit or default risk in quiet markets (low default risk) and turbulent markets (high default risk). These market conditions create two different states of the market (world), or regimes. Investors and policy makers respond differently in the two regimes, but the response in the turbulent market condition is amplified as policy makers attempt to smooth the fluctuations and uncertainty while investors rebalance their portfolios to hedge against the downside risk of wealth loss. In the two regimes, the short-run and long-run dynamic relationships between any two cointegrated assets may change. To capture this phenomenon, this study tests for nonlinearities that may characterize the regimes, and examines how cointegration relationships, short-term dynamic interactions, and price discovery (the speed of adjustment to new information between two assets) may change across regimes. To this end, I employ threshold cointegration, threshold vector error correction models (TVECM), and asymmetric return spillover modeling for the sovereign credit default swap (CDS), bond, and equity markets of seventeen emerging markets from four geographical regions. I find nonlinear cointegration and momentum in the long-run adjustment process in 43 of the 51 spreads analyzed. All countries analyzed exhibit at least two of the six possible regime-specific asymmetric price discovery processes. The study also finds evidence in support of the asset substitution hypothesis and the news-based hypothesis of financial contagion in sovereign CDS, bond, and equity markets. The findings have important implications for investors' asset allocation and portfolio rebalancing decisions, policy intervention in financial markets, risk management, regime-specific short- and/or long-term dynamic interactions among assets held in a portfolio, and the nonlinear speed of adjustment to new information.
In Chapter 2, I hypothesize that financial intermediaries can be categorized into bank-based institutions (BBIs) and market-based institutions (MBIs), which fall under different regulatory agencies. Traditionally, only BBIs, regulated by the Fed, are used as conduits for transmitting liquidity and monetary policy into the real economy and financial markets, yet MBIs also play an important role in providing liquidity and stability in financial markets. I use two tools of monetary policy (the Federal funds rate and a monetary aggregate) under two monetary policy regimes to investigate the impact of monetary policy under each regime on the liquidity of MBIs and BBIs. I investigate whether MBIs should be used as conduits for transmitting monetary policy and liquidity in the market and, if so, under what economic and financial conditions (regimes) they should be used. Moreover, which monetary policy tool is more effective for MBIs relative to BBIs under different regimes? Using threshold vector autoregressions and regime-specific impulse response functions, I find that the liquidity of BBIs and MBIs responds differently to different monetary policy tools under different regimes. Moreover, monetary policies are uncertain and vary over time, and the Fed cannot continue to ignore MBIs in formulating and implementing monetary policy. Monetary aggregate policy is more effective when used on MBIs during contractionary monetary policy intervention (economic downturns), while the Federal funds rate is more effective when used on BBIs under expansionary monetary policy.
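To sketch the mechanics behind threshold cointegration and regime-specific adjustment (in a much-simplified two-step Engle-Granger form rather than the full TVECM used in the dissertation), the example below simulates two cointegrated series, splits observations into quiet and turbulent regimes by the size of the lagged error-correction term, and estimates a separate speed of adjustment in each regime. The simulated data and the 75th-percentile threshold are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulate two cointegrated series standing in for, e.g., a CDS spread and a bond spread.
T = 1000
common = np.cumsum(rng.normal(size=T))            # shared stochastic trend
y2 = common + rng.normal(scale=0.5, size=T)
y1 = 2.0 + 1.5 * common + rng.normal(scale=0.5, size=T)

# Step 1: Engle-Granger cointegrating regression y1_t = a + b*y2_t + ect_t.
coint = sm.OLS(y1, sm.add_constant(y2)).fit()
ect = coint.resid

# Step 2: a threshold on the lagged error-correction term splits the sample into
# a "quiet" regime (small disequilibrium) and a "turbulent" regime (large one).
dy1 = np.diff(y1)
ect_lag = ect[:-1]
threshold = np.quantile(np.abs(ect_lag), 0.75)    # arbitrary illustrative threshold
turbulent = np.abs(ect_lag) > threshold

# Step 3: regime-specific error-correction regressions dy1_t = c + rho * ect_{t-1}.
for name, mask in [("quiet", ~turbulent), ("turbulent", turbulent)]:
    res = sm.OLS(dy1[mask], sm.add_constant(ect_lag[mask])).fit()
    print(f"{name:9s} regime: speed of adjustment = {res.params[1]:+.3f} "
          f"(t = {res.tvalues[1]:.2f}, n = {mask.sum()})")
```

A regime-specific adjustment coefficient that is larger in magnitude in the turbulent regime is the kind of momentum-in-adjustment pattern the abstract refers to.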
|
20 |
European Stock Market Contagion during Sovereign Debt Crisis and the Effects of Macroeconomic Announcements on the Correlations of Gold, Dollar and Stock Returns. Li, Ziyu. 17 May 2013.
The first part of this dissertation examines the presence of financial contagion across European stock markets with respect to the Greek sovereign debt crisis by estimating the time-varying conditional correlations of stock returns between Greece and other European countries from 2001 to 2012. We find that the correlations vary over time and reach peaks in late 2008, during the U.S. subprime crisis, and in early 2010, at the height of the European debt crisis. Further, the correlations between the stock index returns of Greece and those of Spain, France, Ireland, and the Netherlands are significantly increased by Greek sovereign credit rating downgrade announcements.
The second part of this dissertation examines the correlations of gold, dollar, and U.S. stock returns from 2001 to 2012 using an ADCC-GARCH model. The conditional correlations of gold-dollar returns are negative during all sub-sample periods and increase significantly in magnitude during both the subprime crisis and the sovereign debt crisis. The conditional correlations of gold-stock returns are positive on average over time; however, the gold-stock correlation falls below zero during the subprime crisis and the sovereign debt crisis. The gold-stock correlation is significantly negatively affected by positive CPI announcements, and the gold-dollar correlation is significantly negatively affected by negative GDP announcements and positive unemployment announcements. The effects of macroeconomic announcements are stronger during economic recessions.
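As a rough sketch of how time-varying conditional correlations of this kind can be computed (using a plain DCC recursion with fixed parameters rather than the ADCC model estimated in the dissertation), the example below fits univariate GARCH(1,1) models with the arch package to simulated return series and then runs the DCC(1,1) updating equation on the standardized residuals. The simulated data and the DCC parameters a and b are assumptions.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)

# Simulated daily return series standing in for, e.g., gold and dollar returns.
T = 1500
z = rng.multivariate_normal([0, 0], [[1.0, -0.3], [-0.3, 1.0]], size=T)
returns = 0.8 * z                      # purely illustrative data

# Step 1: fit a univariate GARCH(1,1) to each series and standardize residuals.
std_resid = np.empty_like(returns)
for j in range(2):
    res = arch_model(returns[:, j], vol="GARCH", p=1, q=1, mean="Constant").fit(disp="off")
    std_resid[:, j] = res.std_resid

# Step 2: DCC(1,1) recursion with fixed, assumed parameters (normally estimated by MLE):
# Q_t = (1 - a - b) * Qbar + a * z_{t-1} z_{t-1}' + b * Q_{t-1}
a, b = 0.05, 0.90
Qbar = np.corrcoef(std_resid, rowvar=False)
Q = Qbar.copy()
dyn_corr = np.empty(T)
for t in range(T):
    if t > 0:
        zt = std_resid[t - 1][:, None]
        Q = (1 - a - b) * Qbar + a * (zt @ zt.T) + b * Q
    d = np.sqrt(np.diag(Q))
    R = Q / np.outer(d, d)             # rescale Q_t into a correlation matrix
    dyn_corr[t] = R[0, 1]

print("average dynamic correlation:", dyn_corr.mean().round(3))
print("last 5 values:", np.round(dyn_corr[-5:], 3))
```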
|