  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Security for e-commerce with specific reference to SAP

Wentzel, Jan Johannes 06 December 2011 (has links)
M.Comm. / Poorly controlled e-commerce vulnerabilities expose organisations to fraud that can result in major financial losses and embarrassment, and such fraud can be committed while the perpetrator remains anonymous. It is therefore important that auditors understand the security of SAP's e-commerce solutions. This short dissertation focuses on the security features relating to e-commerce, with specific reference to SAP. The results of the investigation are used to develop a model that auditors can use to identify and evaluate the security controls in a typical e-commerce environment as well as those present in a SAP R/3 environment.
22

An investigation into tools and protocols for commercial audio web-site creation

Ndinga, S'busiso Simon January 2000 (has links)
This thesis presents a feasibility study of a Web-based digital music library and purchasing system. It investigates the current status of the enabling technologies for developing such a system, presenting an analysis of Internet audio codecs, streaming-audio protocols, Internet credit-card payment security methods, and ways of accessing remote Web databases. The objective of the analysis is to determine the viability and the economic benefits of using these technologies when developing systems that facilitate music distribution over the Internet. A prototype of a distributed digital music library and purchasing system named WAPS (Web-based Audio Purchasing System) was developed and implemented in the Java programming language. The thesis explores both the physical and the logical components of WAPS in depth, to provide insight into the inherent problems of creating such a system as well as the benefits derived from it.
23

Real-time risk analysis : a modern perspective on network security with a prototype

16 August 2012 (has links)
M.Sc. / The present study was undertaken, within the existing Internet working environment, to meet the need for a more secure network-security process in which possible risks to Internet users can be identified and controlled by appropriate countermeasures in real time. At the outset of the study, however, no formal risk-analysis model had yet been developed specifically for performing risk analysis in real time. This gave rise to the development of a prototype aimed at identifying risks that could threaten Internet users' private data: the "Real-time Risk Analysis" (RtRA) prototype. In so doing, the principal aim of the study, namely to implement the RtRA prototype, was realised. The research method was as follows. Firstly, background information on the issues and problems to be addressed was provided, together with a motivation for the study, including theoretical studies of current network security and of the Transmission Control Protocol/Internet Protocol (TCP/IP). Secondly, a study of existing TCP/IP packet-intercepting tools available on the Internet gave deeper insight into how TCP/IP packets are intercepted and handled. Thirdly, firewalls, a more recent development in network security, came under discussion: a firewall is in effect a highly developed TCP/IP packet-intercepting tool that implements well-known security measures, and the entire study was based on firewall technology, with the model developed relating directly to firewalls. Fourthly, a prototype consisting of three main modules was implemented to demonstrate that RtRA is indeed tenable and practicable.
Special emphasis was given to the second module of the prototype, the real-time risk-identification and countermeasure-execution module. The modus operandi of the prototype was then illustrated by means of a case study undertaken in a simulated Internet working environment. The study concludes with a summary of the results and the conclusions reached on the strength of the research, and touches on further problem areas that could become the focus of future research projects.
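The risk-identification module described above inspects intercepted packets and matches them against known risks before executing countermeasures. The following is a minimal illustrative sketch of that idea, not the thesis's actual RtRA prototype; the packet fields and risk rules are invented for illustration.

```python
# Hypothetical rule-based risk identification over intercepted TCP/IP
# packets, in the spirit of the RtRA prototype's second module.

RISKY_PORTS = {23: "telnet (cleartext login)", 21: "ftp (cleartext credentials)"}

def assess_packet(pkt: dict) -> list[str]:
    """Return a list of risk findings for one intercepted packet."""
    findings = []
    if pkt.get("dst_port") in RISKY_PORTS:
        findings.append(f"risky service: {RISKY_PORTS[pkt['dst_port']]}")
    payload = pkt.get("payload")
    if payload and b"password=" in payload.lower():
        findings.append("credential transmitted in cleartext")
    return findings

packet = {"src": "10.0.0.5", "dst": "198.51.100.7", "dst_port": 23,
          "payload": b"USER alice\r\nPASSWORD=secret\r\n"}
print(assess_packet(packet))
```

A real firewall-based implementation would intercept packets at the network layer and trigger countermeasures (e.g. dropping the connection) rather than merely reporting findings.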
24

Analysis of cybercrime activity: perceptions from a South African financial bank

Obeng-Adjei, Akwasi January 2017 (has links)
Research report submitted to the School of Economic and Business Sciences, University of the Witwatersrand, in partial fulfilment of the requirements for the degree of Master of Commerce (Information Systems) by coursework and research. Johannesburg, 28 February 2017. / This study is motivated by the scarcity of empirical research on cybercrime, specifically in the context of South African banks. It bridges this gap in knowledge by analysing the cybercrime phenomenon from the perspective of a South African bank, and provides a sound basis for conducting future studies from different perspectives. An interpretive research approach was adopted, using a case study in one of the biggest banks in South Africa, where cybercrime is currently a topical issue receiving attention from senior management. Cohen and Felson's (1979) Routine Activity Theory was used as a theoretical lens to formulate a conceptual framework that informed the data collection, analysis and synthesis of cybercrime in the selected bank. Primary data was obtained via semi-structured interviews; secondary data was also obtained, allowing for data triangulation. From the perspective of a South African bank, the study concluded that weak security and access controls, poor awareness and user education, prevalent use of the internet, low conviction rates and perceived material gain are the major factors that lead to cybercriminal activity. To curb the ever-increasing rate of cybercrime, South African banking institutions should consider implementing stronger security and access controls to safeguard customer information, increasing user awareness and education, implementing effective systems and processes, and actively participating in industry-wide focus groups.
The transnational nature of cybercrime places an onus on banks in South Africa and other countries to collaborate and define a joint effort to combat the increasing exposure to cybercriminal activity. The use of Routine Activity Theory provided an avenue to study the cybercrime phenomenon through a different theoretical lens, and aided a holistic understanding of the trends and behavioural attributes contributing to cybercriminal activity, which can help South African banks model practical solutions to proactively combat the surge of cybercrime. Keywords: Cybercrime, internet, crime, computer networks, Routine Activity Theory, South African banks. / GR2018
25

An approach to protecting online personal information in Macau government

Sou, Sok Fong January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
26

Data mining heuristic-based malware detection for Android applications

Unknown Date (has links)
The Google Android mobile phone platform is one of the dominant smartphone operating systems on the market. The open-source Android platform allows developers to take full advantage of the mobile operating system, but it also raises significant issues related to malicious applications (apps). The popularity of the Android platform attracts many developers, but it also attracts cybercriminals, who develop malware to be inserted into the Google Android Market or other third-party markets disguised as safe applications. In this thesis, we propose to combine permissions, API (Application Program Interface) calls and function calls to build a heuristic-based framework for the detection of malicious Android apps. In our design, the permissions are extracted from each app's profile information, and the APIs are extracted from the packed app file, using packages and classes to represent API calls. By using permissions, API calls and function calls as features to characterize each app, we can develop a classifier with data mining techniques to identify whether an app is potentially malicious. An inherent advantage of our method is that it does not need any dynamic tracking of system calls, relying only on simple static analysis to find system functions in each app. In addition, our method generalizes to all mobile applications, because APIs and function calls are always present in mobile apps. Experiments on real-world apps with more than 1,200 malware and 1,200 benign samples validate the algorithm's performance. A research paper was published based on the work reported in this thesis: Naser Peiravian, Xingquan Zhu, "Machine Learning for Android Malware Detection Using Permission and API Calls," in Proc. of the 25th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Washington D.C., November 4-6, 2013. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2013.
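The feature construction described above (representing each app as a vector of permissions and API calls, then classifying with a data-mining technique) can be sketched roughly as follows. The feature names, the tiny training set and the nearest-neighbour rule are all illustrative assumptions; the thesis itself trains classifiers over real app corpora.

```python
# Hypothetical binary feature vectors over permissions and API calls,
# classified with a simple nearest-neighbour rule (Hamming distance).

FEATURES = ["perm:SEND_SMS", "perm:INTERNET", "api:SmsManager.sendTextMessage",
            "api:HttpURLConnection.connect", "api:TelephonyManager.getDeviceId"]

def vectorize(app: set[str]) -> list[int]:
    """1/0 vector: which known features does this app exhibit?"""
    return [1 if f in app else 0 for f in FEATURES]

def hamming(a: list[int], b: list[int]) -> int:
    return sum(x != y for x, y in zip(a, b))

TRAIN = [  # (feature set, label) -- invented examples, not real malware data
    ({"perm:SEND_SMS", "api:SmsManager.sendTextMessage",
      "api:TelephonyManager.getDeviceId"}, "malicious"),
    ({"perm:INTERNET", "api:HttpURLConnection.connect"}, "benign"),
]

def classify(app: set[str]) -> str:
    vec = vectorize(app)
    return min(TRAIN, key=lambda t: hamming(vec, vectorize(t[0])))[1]

print(classify({"perm:SEND_SMS", "api:SmsManager.sendTextMessage"}))  # prints "malicious"
```

In practice the thesis's approach would extract permissions from the app manifest and API calls from the packaged app file, and fit a proper classifier over thousands of features rather than this toy distance rule.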
27

Understanding Flaws in the Deployment and Implementation of Web Encryption

Sivakorn, Suphannee January 2018 (has links)
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this resulted in increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage, without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that enough infrastructure is in place and the protocols have been "tested" (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task, due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption, forcing only critical functionality and sensitive data access over encrypted connections while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations. In this dissertation, in an effort to improve users' privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user's HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS: service personalization inadvertently results in the exposure of private information.
The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie-hijacking study reveals a number of severe flaws: for example, attackers can obtain the user's saved address and visited websites from Google, while Bing and Yahoo allow attackers to extract the contact list and send emails from the user's account. To estimate the extent of the threat, we ran measurements on a university public wireless network for a period of 30 days and detected over 282K accounts exposing the cookies required for our hijacking attacks. Next, we explore security mechanisms intended to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere, and evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues, and argue that, as long as servers do not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage. Finally, as the security guarantees of TLS (and in turn HTTPS) depend critically on the correct validation of X.509 server certificates, we study hostname verification, a critical component of the certificate validation process. We develop HVLearn, a novel testing framework for verifying the correctness of hostname-verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. We found 8 unique violations of the RFC specifications; several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
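The hostname verification studied above is easy to get wrong, particularly around wildcard certificates. The following is a simplified sketch of the kind of matching logic that HVLearn-style testing targets, loosely following the common rule that a single wildcard may appear only in the left-most label; it is an illustration, not HVLearn itself, and real TLS libraries must additionally handle IDNA, IP addresses and certificate parsing.

```python
# Simplified hostname matching against a certificate identity pattern.
# Accepts "*" only as the entire left-most label, and rejects over-broad
# wildcards such as "*.com".

def hostname_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # a wildcard matches exactly one label, never several
    for i, (p, h) in enumerate(zip(p_labels, h_labels)):
        if p == "*" and i == 0 and len(p_labels) >= 3:
            continue  # wildcard permitted only in the left-most label
        if p != h:
            return False
    return True

print(hostname_matches("*.example.com", "www.example.com"))   # True
print(hostname_matches("*.example.com", "a.b.example.com"))   # False
print(hostname_matches("*.com", "example.com"))               # False
```

Implementations that deviate from rules like these (e.g. letting a wildcard span multiple labels) are exactly the RFC violations that can enable man-in-the-middle attacks.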
28

A statistical process control approach for network intrusion detection

Park, Yongro 13 January 2005 (has links)
Intrusion detection systems (IDS) have a vital role in protecting computer networks and information systems. In this thesis we applied a statistical process control (SPC) monitoring concept to a certain type of traffic data in order to detect a network intrusion. We developed a general SPC intrusion detection approach and described it along with the source and preparation of the data used in this thesis. We extracted sample data sets representing various situations, calculated event intensities for each situation, and stored these sample data sets in a data repository for use in future research. A regular batch mean chart was used to remove the sample data's inherent 60-second cycles. However, this proved too slow in detecting a signal, because the regular batch mean chart only monitors the statistic at the end of each batch. To obtain faster results, a modified batch mean (MBM) chart was developed that met this goal. Subsequently, we developed the Modified Batch Mean Shewhart chart, the Modified Batch Mean CUSUM chart, and the Modified Batch Mean EWMA chart, and analyzed the performance of each on simulated data. The simulation studies showed that the MBM charts perform especially well with large signals, the type of signal typically associated with a DoS intrusion. The MBM charts can be applied in two ways: using actual control limits or using robust control limits. The actual control limits must be determined by simulation, whereas the robust control limits require nothing more than the use of the recommended limits. The robust MBM Shewhart chart was developed by choosing appropriate values based on batch size; the robust MBM CUSUM and EWMA charts were developed by choosing appropriate values of the charting parameters.
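The Shewhart-style monitoring of batch means described above can be sketched as follows. The batch size, in-control parameters and the 3-sigma limit here are illustrative assumptions; the thesis's MBM charts update more frequently than once per batch and derive their limits by simulation or from robust recommendations.

```python
# Batch means plus Shewhart-style 3-sigma signalling, in the spirit of
# the regular batch mean chart (the slower baseline the MBM charts improve on).
import statistics

def batch_means(xs: list[float], batch: int) -> list[float]:
    """Mean of each consecutive, non-overlapping batch of observations."""
    return [statistics.fmean(xs[i:i + batch])
            for i in range(0, len(xs) - batch + 1, batch)]

def shewhart_signals(means: list[float], mu: float,
                     sigma_mean: float, k: float = 3.0) -> list[int]:
    """Indices of batch means falling outside mu +/- k * sigma_mean."""
    return [i for i, m in enumerate(means) if abs(m - mu) > k * sigma_mean]

# 60 in-control observations, then a large shift (a DoS-like burst).
data = [10.0] * 60 + [30.0] * 12
means = batch_means(data, batch=12)
print(shewhart_signals(means, mu=10.0, sigma_mean=1.0))  # prints [5]
```

Because a signal is only checkable once a batch completes, detection can lag by up to a full batch, which is precisely the delay the modified batch mean charts were designed to reduce.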
29

Investigation of a router-based approach to defense against Distributed Denial-of-Service (DDoS) attack

Chan, Yik-Kwan, Eric., 陳奕鈞. January 2004 (has links)
published_or_final_version / abstract / toc / Computer Science and Information Systems / Master / Master of Philosophy
30

Secure object spaces for global information retrieval (SOSGIR)

Cheung, Yee-him., 張貽謙. January 2000 (has links)
published_or_final_version / abstract / toc / Electrical and Electronic Engineering / Master / Master of Philosophy
