31 |
Enforcing Authorization and Attribution of Internet Traffic at the Router via Transient Addressing. Johnson, Eamon B. 30 January 2012 (has links)
No description available.
|
32 |
Intelligent quality performance assessment for e-banking security using fuzzy logic. Aburrous, Maher R., Hossain, M. Alamgir, Thabatah, F., Dahal, Keshav P. January 2008 (has links)
Yes / Security has been widely recognized as one of the main obstacles to the adoption of Internet banking, and it is considered an important aspect in the debate over the challenges facing Internet banking. The performance evaluation of e-banking websites requires a model that enables us to analyze the various imperative factors and criteria related to the quality and performance of e-banking websites. E-banking site evaluation is a complex and dynamic problem involving many factors, and because of the subjective considerations and ambiguities involved in the assessment, a Fuzzy Logic (FL) model can be an effective tool for assessing and evaluating e-banking security performance and quality. In this paper, we propose an intelligent performance assessment model for evaluating the security of e-banking websites. The proposed model is based on FL operators and produces four measures of security risk attack dimensions: direct internal attack, communication tampering attack, code programming attack and denial-of-service attack, organized in a hierarchical ring layer structure. Our experimental results show that direct internal attack risk has a large impact on e-banking security performance. The results also confirm that the risk of direct internal attack for dynamic e-banking websites is double that of all other attacks.
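The abstract does not give the model's actual membership functions or rule base, but the flavour of an FL-based risk assessment can be sketched as follows. The membership functions, weights, and centroids below are invented for illustration, with the internal-attack dimension weighted most heavily to mirror the reported finding.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(score):
    """Map a raw 0-10 indicator score to low/medium/high memberships."""
    return {
        "low": tri(score, -1, 0, 5),
        "medium": tri(score, 2, 5, 8),
        "high": tri(score, 5, 10, 11),
    }

def risk_level(internal, tampering, code, dos):
    """Aggregate four attack-dimension scores by weighted centroid
    defuzzification; the internal-attack dimension carries the largest
    weight, echoing the abstract's finding that it dominates risk."""
    weights = {"internal": 0.4, "tampering": 0.2, "code": 0.2, "dos": 0.2}
    centroids = {"low": 2.0, "medium": 5.0, "high": 8.0}
    scores = {"internal": internal, "tampering": tampering,
              "code": code, "dos": dos}
    num = den = 0.0
    for dim, s in scores.items():
        for label, mu in fuzzify(s).items():
            w = weights[dim] * mu
            num += w * centroids[label]
            den += w
    return num / den if den else 0.0
```

Raising the internal-attack score moves the overall risk up faster than raising any other dimension, which is the qualitative behaviour the abstract reports.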
|
33 |
Innovative location based scheme for Internet Security Protocol : a proposed location based scheme N-Kerberos Security Protocol using intelligent logic of believes, particularly by modified BAN logic. Abdelmajid, Nabih T. January 2010 (has links)
The importance of data authentication has given rise to the science of data protection. Interest in this field has been growing due to increasing concern for the privacy of the user's identity, especially after the widespread adoption of online transactions. Many security techniques are available to protect the user's identity, including passwords, smart cards or tokens, and face or fingerprint recognition. Unfortunately, it is still possible to duplicate a user's identity. Recently, specialists have used the user's physical location as an additional factor to strengthen the verification of the user's identity. This thesis focuses on location-based authentication, built on the idea of using the Global Positioning System (GPS) to verify the user's identity. An improvement to the Kerberos protocol using the GPS signal is proposed in order to eliminate the effect of replay attacks. This proposal does not demand high performance from the user during the implementation of the security system. Moreover, to give users more confidence in a security protocol, it has to be evaluated before being accepted. Thus, a measurement tool used to validate protocols, called BAN logic, is described. In this thesis, a new form of BAN logic is proposed which aims to make the process of checking a protocol's protection strength more efficient when the GPS signal is used. The proposed form of the Kerberos protocol has been analysed using the new form of BAN logic. The new scheme has been tested and compared with existing techniques to demonstrate its merits and capabilities.
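The thesis's actual protocol messages are not reproduced in the abstract; the following is a hypothetical sketch of the core idea of binding a Kerberos-style authenticator to a GPS position and a timestamp, so that a replayed authenticator fails either the freshness check or the location check. The key, thresholds, and message format are all assumptions for illustration.

```python
import hmac
import hashlib
import math

SHARED_KEY = b"client-server-session-key"   # hypothetical session key

def make_authenticator(lat, lon, ts):
    """Client side: MAC the claimed GPS position and a timestamp."""
    msg = f"{lat:.5f},{lon:.5f},{ts:.0f}".encode()
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag

def verify(msg, tag, observed_lat, observed_lon, now, max_age=5.0, max_km=1.0):
    """Server side: check integrity, freshness, and location agreement."""
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                          # forged or altered message
    lat, lon, ts = (float(v) for v in msg.decode().split(","))
    if now - ts > max_age:
        return False                          # stale: the classic replay window
    # Rough equirectangular distance between claimed and observed position.
    km = 111.0 * math.hypot(lat - observed_lat,
                            (lon - observed_lon) * math.cos(math.radians(lat)))
    return km <= max_km                       # a replay from elsewhere fails here
```

A message captured and replayed later, or replayed from a different location, is rejected even though its MAC is valid, which is the effect the proposed GPS-augmented Kerberos aims for.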
|
34 |
Performance Evaluation of Data Integrity Mechanisms for Mobile Agents. Gunupudi, Vandana 12 1900 (has links)
With the growing popularity of e-commerce applications that use software agents, the protection of mobile agent data has become imperative. To that end, the performance of four methods that protect the data integrity of mobile agents is evaluated. The methods investigated include existing approaches known as the Partial Result Authentication Codes, Hash Chaining, and Set Authentication Code methods, and a technique of our own design, called the Modified Set Authentication Code method, which addresses the limitations of the Set Authentication Code method. The experiments were run using the DADS agent system (developed at the Network Research Laboratory at UNT), for which a Data Integrity Module was designed. The experimental results show that our Modified Set Authentication Code technique performed comparably to the Set Authentication Code method.
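Of the evaluated mechanisms, hash chaining is the simplest to illustrate: each host folds its partial result into a running digest, so altering any earlier result changes every later digest. A minimal sketch (not the DADS implementation) might look like:

```python
import hashlib

def extend_chain(prev_digest, partial_result):
    """Host-side step: fold the new partial result into the running digest."""
    h = hashlib.sha256()
    h.update(prev_digest)
    h.update(partial_result.encode())
    return h.digest()

def verify_chain(seed, results, final_digest):
    """Originator-side check: recompute the chain over the collected results."""
    d = seed
    for r in results:
        d = extend_chain(d, r)
    return d == final_digest

seed = b"\x00" * 32                       # agreed starting value
results = ["host-A:price=10", "host-B:price=12"]
final = seed
for r in results:
    final = extend_chain(final, r)        # each visited host extends the chain
```

Tampering with `host-A`'s result after the fact makes `verify_chain` fail, because the forged chain no longer reproduces the final digest.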
|
35 |
Real-time risk analysis : a modern perspective on network security with a prototype. 16 August 2012 (has links)
M.Sc. / The present study was undertaken, within the realm of the existing Internet working environment, to meet the need for a more secure network-security process in terms of which possible risks incurred by Internet users could be identified and controlled by means of appropriate countermeasures in real time. On launching the study, however, no formal risk-analysis model had yet been developed specifically to effect risk analysis in real time. This gave rise to the development of a prototype specifically aimed at the identification of risks that could pose a threat to Internet users' private data — the so-called "Real-time Risk Analysis" (RtRA) prototype. In so doing, the principal aim of the study, namely to implement the RtRA prototype, was realised. What follows is an overview of the research method employed to realise the objectives of the study. Firstly, background information on and the preamble to the issues and problems to be addressed were provided, as well as a well-founded motivation for the study. The latter included theoretical studies on current network security and the Transmission Control Protocol/Internet Protocol (TCP/IP). Secondly, the study of existing TCP/IP packet-intercepting tools available on the Internet brought deeper insight into how TCP/IP packets are intercepted and handled. In the third instance, the most recent development in network security — firewalls — came under discussion. The latter technology represents a highly developed TCP/IP packet-intercepting tool that implements the best-known security measures. In addition, the entire study was based on firewall technology, and the model that was developed related directly to firewalls. Fourthly, a prototype, consisting of three main modules, was implemented in a bid to prove that RtRA is indeed tenable and practicable.
In so doing, the second module of the prototype, namely the real-time risk-identification and countermeasure-execution module, was given special emphasis. The modus operandi of the said prototype was then illustrated by means of a case study undertaken in a simulated Internet working environment. The study culminated in a summation of the results of and the conclusions reached on the strength of the research. Further problem areas, which could become the focal points of future research projects, were also touched upon.
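As an illustration only, the kind of rule-based check a real-time risk-identification module might apply to an intercepted packet can be sketched as follows; the packet fields, rules, and countermeasures are invented for the example and are not the prototype's actual rule set.

```python
RISK_RULES = [
    # (predicate over a packet dict, risk label, countermeasure)
    (lambda p: p["dst_port"] == 23, "cleartext-telnet", "block"),
    (lambda p: p["payload_len"] > 1400 and p["dst_port"] == 53,
     "oversized-dns", "log"),
]

def assess(packet):
    """Return (risk, countermeasure) for the first matching rule, else None."""
    for pred, risk, action in RISK_RULES:
        if pred(packet):
            return risk, action
    return None
```

Each intercepted packet is evaluated against the rule list as it arrives, which is what makes the analysis "real time" rather than an after-the-fact audit.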
|
36 |
Analysis of cybercrime activity: perceptions from a South African financial bank. Obeng-Adjei, Akwasi January 2017 (has links)
Research report submitted to the School of Economic and Business Sciences, University of the Witwatersrand in partial fulfilment of the requirements for the degree of Master of Commerce (Information Systems) by coursework and research. Johannesburg, 28 February 2017. / This study is motivated by the scarcity of empirical research in the field of cybercrime, specifically in the context of South African banks. The study bridges this gap in knowledge by analyzing the cybercrime phenomenon from the perspective of a South African bank, and it provides a sound basis for conducting future studies using a different perspective. To achieve this, an interpretive research approach was adopted using a case study in one of the biggest banks in South Africa, where cybercrime is currently a topical issue receiving attention from senior management. Cohen and Felson's (1979) Routine Activity Theory was used as a theoretical lens to formulate a conceptual framework which informed the data collection, analysis and synthesis of cybercrime in the selected bank. Primary data was obtained via semi-structured interviews; secondary data was also obtained, which allowed for data triangulation. From the perspective of a South African bank, the study concluded that weak security and access controls, poor awareness and user education, prevalent use of the internet, low conviction rates and perceived material gain are the major factors that lead to cybercriminal activity. In order to curb the ever-increasing rate of cybercrime, South African banking institutions should consider implementing stronger security and access controls to safeguard customer information, increasing user awareness and education, implementing effective systems and processes, and actively participating in industry-wide focus groups. 
The transnational nature of cybercrime places an onus on all banks, in South Africa and in other countries, to collaborate and define a joint effort to combat the increasing exposure to cybercriminal activity. The use of Routine Activity Theory provided an avenue to study the cybercrime phenomenon through a different theoretical lens and aided a holistic understanding of the trends and behavioural attributes contributing to cybercriminal activity, which can help South African banks model practical solutions to proactively combat the surge of cybercrime.
Keywords: Cybercrime, internet, crime, computer networks, Routine Activity Theory, South African banks. / GR2018
|
37 |
An approach to protecting online personal information in Macau government. Sou, Sok Fong January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
|
38 |
Data mining heuristic-based malware detection for android applications. Unknown Date (has links)
The Google Android mobile phone platform is one of the dominant smartphone operating systems on the market. The open-source Android platform allows developers to take full advantage of the mobile operating system, but it also raises significant issues related to malicious applications (Apps). The popularity of the Android platform attracts many developers, but it also attracts cybercriminals, who develop different kinds of malware to be inserted into the Google Android Market or other third-party markets disguised as safe applications. In this thesis, we propose to combine permissions, API (Application Program Interface) calls and function calls to build a heuristic-based framework for the detection of malicious Android Apps. In our design, permissions are extracted from each App's profile information, and APIs are extracted from the packed App file by using packages and classes to represent API calls. By using permissions, API calls and function calls as features to characterize each App, we can develop a classifier using data mining techniques to identify whether an App is potentially malicious. An inherent advantage of our method is that it does not need to involve any dynamic tracking of system calls; it uses only simple static analysis to find system functions in each App. In addition, our method can be generalized to all mobile applications, since APIs and function calls are always present in mobile Apps. Experiments on real-world Apps with more than 1200 malware samples and 1200 benign samples validate the algorithm's performance.
Research paper published based on the work reported in this thesis:
Naser Peiravian and Xingquan Zhu, "Machine Learning for Android Malware Detection Using Permission and API Calls," in Proc. of the 25th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Washington, D.C., November 4-6, 2013. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2013.
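As a toy illustration of the feature encoding described in the abstract, the sketch below represents each App as a binary permission/API-call vector and classifies it with a simple nearest-centroid rule. The feature names, sample Apps, and classifier choice are assumptions for the example, not the thesis's actual data mining pipeline.

```python
FEATURES = ["SEND_SMS", "READ_CONTACTS", "INTERNET",
            "SmsManager.sendTextMessage", "HttpURLConnection.connect"]

def vectorize(app_features):
    """Binary feature vector: 1 if the App requests/uses the feature."""
    return [1 if f in app_features else 0 for f in FEATURES]

def centroid(vectors):
    """Per-feature mean over a class's training vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def classify(app_features, mal_centroid, ben_centroid):
    """Assign the label of the nearer centroid (squared Euclidean distance)."""
    v = vectorize(app_features)
    d_mal = sum((a - b) ** 2 for a, b in zip(v, mal_centroid))
    d_ben = sum((a - b) ** 2 for a, b in zip(v, ben_centroid))
    return "malicious" if d_mal < d_ben else "benign"

# Invented training samples: SMS-sending Apps as malware, plain HTTP as benign.
malware = [{"SEND_SMS", "SmsManager.sendTextMessage", "INTERNET"},
           {"SEND_SMS", "READ_CONTACTS", "SmsManager.sendTextMessage"}]
benign = [{"INTERNET", "HttpURLConnection.connect"},
          {"INTERNET"}]

mal_c = centroid([vectorize(a) for a in malware])
ben_c = centroid([vectorize(a) for a in benign])
```

Because the features come from static artifacts (the manifest's permissions and the packaged code's API calls), no dynamic system-call tracing is needed, which is the advantage the abstract emphasizes.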
|
39 |
Understanding Flaws in the Deployment and Implementation of Web Encryption. Sivakorn, Suphannee January 2018 (has links)
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this has resulted in the increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that enough infrastructure is in place and the protocols have been "tested" (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task, due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption and force only critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations.
In this dissertation, in an effort to improve users' privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user's HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS, namely, that service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user's saved address and visited websites from Google, while Bing and Yahoo allow attackers to extract the contact list and send emails from the user's account. To estimate the extent of the threat, we run measurements on a university public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks.
Next, we explore and study security mechanisms proposed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues and argue that, as long as servers continue to not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage.
Finally, as the security guarantees of TLS (and in turn HTTPS) depend critically on the correct validation of X.509 server certificates, we study hostname verification, a critical component of the certificate validation process. We develop HVLearn, a novel testing framework for verifying the correctness of hostname verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. In doing so, we found 8 unique violations of the RFC specifications. Several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
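To see why hostname verification is subtle, consider a simplified matcher for the wildcard rules that certificate validators are expected to follow, in the spirit of RFC 6125; real implementations must also handle SAN lists, IP addresses, and internationalized names, all of which this sketch omits.

```python
def hostname_matches(pattern, hostname):
    """Check a certificate name pattern against a hostname, allowing only a
    single whole-label wildcard in the left-most position."""
    pattern, hostname = pattern.lower(), hostname.lower()
    p_labels, h_labels = pattern.split("."), hostname.split(".")
    if len(p_labels) != len(h_labels):
        return False                     # "*" never spans multiple labels
    for p, h in zip(p_labels, h_labels):
        if p == "*":
            continue                     # wildcard covers exactly one label
        if "*" in p:
            return False                 # reject partial wildcards (f*o.com)
        if p != h:
            return False
    # Wildcard allowed only left-most, and never so broad it covers a TLD.
    if "*" in pattern and (not pattern.startswith("*.") or len(p_labels) < 3):
        return False
    return True
```

Implementations that accept `*.com`, let a wildcard span multiple labels, or mis-handle embedded wildcards exhibit exactly the class of RFC violations that a tool like HVLearn is built to surface.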
|
40 |
A statistical process control approach for network intrusion detection. Park, Yongro 13 January 2005 (has links)
Intrusion detection systems (IDS) play a vital role in protecting computer networks and information systems. In this thesis we applied a statistical process control (SPC) monitoring concept to a certain type of traffic data in order to detect network intrusions.
We developed a general SPC intrusion detection approach and described it, along with the source and preparation of the data used in this thesis. We extracted sample data sets that represent various situations, calculated event intensities for each situation, and stored these sample data sets in a data repository for use in future research.
A regular batch mean chart was used to remove the sample data's inherent 60-second cycles. However, this proved too slow in detecting a signal, because the regular batch mean chart only monitors the statistic at the end of the batch. To obtain faster results, a modified batch mean (MBM) chart was developed. Subsequently, we developed the Modified Batch Mean Shewhart chart, the Modified Batch Mean CUSUM chart, and the Modified Batch Mean EWMA chart, and analyzed the performance of each on simulated data. The simulation studies showed that the MBM charts perform especially well with large signals, the type of signal typically associated with a denial-of-service (DoS) intrusion.
The MBM charts can be applied in two ways: by using actual control limits or by using robust control limits. The actual control limits must be determined by simulation, but the robust control limits require nothing more than the use of the recommended limits. The robust MBM Shewhart chart was developed by choosing appropriate values based on the batch size. The robust MBM CUSUM chart and robust MBM EWMA chart were developed by choosing appropriate values of the charting parameters.
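The MBM construction itself is not detailed in the abstract, but an ordinary batch-mean Shewhart chart, the baseline the thesis improves on, can be sketched as follows; the traffic values, in-control parameters, and control-limit constant are invented for illustration.

```python
import statistics

def batch_means(stream, batch_size):
    """Collapse the raw event-intensity stream into non-overlapping batch means."""
    return [statistics.mean(stream[i:i + batch_size])
            for i in range(0, len(stream) - batch_size + 1, batch_size)]

def shewhart_signals(means, mu0, sigma, batch_size, L=3.0):
    """Indices of batches whose mean falls outside mu0 +/- L*sigma/sqrt(b)."""
    limit = L * sigma / batch_size ** 0.5
    return [i for i, m in enumerate(means) if abs(m - mu0) > limit]

# In-control traffic intensity around 100, then a DoS-like surge.
traffic = [100, 102, 98, 101, 99, 100, 103, 97] + [160, 158, 162, 161]
means = batch_means(traffic, 4)
signals = shewhart_signals(means, mu0=100, sigma=2.0, batch_size=4)
```

The chart flags only the surge batch, but note that it cannot signal until the batch is complete, which is precisely the detection delay the modified batch mean charts were designed to reduce.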
|