111

Securing media streams in an Asterisk-based environment and evaluating the resulting performance cost

Clayton, Bradley 08 January 2007
When adding Confidentiality, Integrity and Availability (CIA) to a multi-user VoIP (Voice over IP) system, performance and quality are at risk. The aim of this study is twofold. Firstly, it describes current methods suitable for securing voice streams within a VoIP system and makes them available in an Asterisk-based VoIP environment. (Asterisk is a well-established, open-source, TDM/VoIP PBX.) Secondly, it evaluates the performance cost incurred by each security method within the Asterisk-based system, using a testbed suite named DRAPA, which was developed expressly for this study. The three security methods implemented and studied were IPSec (Internet Protocol Security), SRTP (Secure Real-time Transport Protocol), and SIAX2 (Secure Inter-Asterisk eXchange 2 protocol). The experiments showed that bandwidth and CPU usage were significantly affected by the addition of CIA. Ranking the three security methods by these two resources, SRTP incurs the least bandwidth overhead, followed by SIAX2 and then IPSec; for CPU utilisation, SIAX2 incurs the least overhead, followed by IPSec, and then SRTP.
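The bandwidth ranking reported in this abstract can be illustrated with a rough per-packet calculation. All byte counts below are illustrative assumptions (G.711 at 20 ms framing, SRTP's default 10-byte authentication tag, ESP tunnel mode with AES-CBC), not figures from the study:

```python
# Illustrative per-packet sizes (bytes) for one 20 ms G.711 voice frame.
# These figures are assumptions for illustration, not measurements from the thesis.
PAYLOAD = 160                      # G.711 @ 64 kbit/s, 20 ms frame
IP, UDP, RTP = 20, 8, 12           # plain RTP over UDP/IPv4

rtp = IP + UDP + RTP + PAYLOAD     # unencrypted baseline
srtp = rtp + 10                    # SRTP: payload encrypted in place, +10 B auth tag
# ESP tunnel mode: new IP header + ESP header + 16 B IV + trailer + 12 B ICV
esp_overhead = 20 + 8 + 16 + 2 + 12
ipsec = ((rtp + esp_overhead + 15) // 16) * 16  # crude round-up for cipher-block padding

for name, size in [("RTP", rtp), ("SRTP", srtp), ("IPSec ESP", ipsec)]:
    kbps = size * 8 * 50 / 1000    # 50 packets/s at 20 ms framing
    print(f"{name:10s} {size:4d} B/pkt  ~{kbps:.1f} kbit/s per direction")
```

Even this crude sketch reproduces the ordering reported above: SRTP adds only an authentication tag per packet, while ESP tunnel mode adds a second IP header plus cipher overhead to every packet.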
112

A formalised ontology for network attack classification

Van Heerden, Renier Pelser January 2014
One of the most popular attack vectors against computers is their network connections. Attacks on computers through their networks are commonplace and vary in complexity. This research describes network-based computer attacks in the form of a story, formally, and within an ontology. The ontology categorises network attacks with attack scenarios as the focal class. This class consists of: Denial-of-Service, Industrial Espionage, Web Defacement, Unauthorised Data Access, Financial Theft, Industrial Sabotage, Cyber-Warfare, Resource Theft, System Compromise, and Runaway Malware. The ontology was developed by building a taxonomy and a temporal network attack model. Network attack instances (also known as individuals) are classified according to their respective attack scenarios using an automated reasoner within the ontology. The automated reasoner's deductions are verified formally, and via the automated reasoner a relaxed set of scenarios is determined, which is relevant in a near real-time environment. A prototype system (called Aeneas) was developed to classify network-based attacks. Aeneas integrates the sensors into a detection system that can classify network attacks in near real time. To verify the ontology and the Aeneas prototype, a virtual test bed was developed in which network-based attacks were generated. Aeneas was able to detect incoming attacks and classify them according to their scenario. The novel part of this research is the description of attack scenarios in the form of a story, as well as formally and in an ontology, and the novel use of the ontology to determine to which class attack instances belong and how the network attack ontology is affected in a near real-time environment.
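The classification step described here — an automated reasoner assigning attack individuals to every scenario class whose defining conditions they satisfy — can be sketched with a toy rule table. The scenario names come from the abstract; the property names ("effect", "motivation") and values are hypothetical illustrations, not taken from the actual ontology:

```python
# Toy stand-in for the ontology's automated reasoner: each attack scenario is
# defined by necessary property values, and an attack instance (individual) is
# classified into every scenario whose conditions it satisfies.
SCENARIOS = {
    "Denial-of-Service":        {"effect": "service_unavailable"},
    "Web Defacement":           {"effect": "content_altered"},
    "Unauthorised Data Access": {"effect": "data_read", "motivation": "espionage"},
    "Financial Theft":          {"effect": "funds_moved", "motivation": "money"},
}

def classify(individual: dict) -> list:
    """Return all scenario classes whose defining properties the individual matches."""
    return [name for name, props in SCENARIOS.items()
            if all(individual.get(k) == v for k, v in props.items())]

attack = {"effect": "service_unavailable", "motivation": "money"}
print(classify(attack))   # the reasoner infers membership of Denial-of-Service
```

A real OWL reasoner performs the same subsumption check over formally defined class restrictions, which is what allows the deductions to be verified and a relaxed scenario set to be derived for near real-time use.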
113

Log analysis aided by latent semantic mapping

Buys, Stephanus 14 April 2013
In an age of zero-day exploits and increased online attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response and forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the sheer volumes at which events are generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in their formats or contents. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events before security analysis and automation can proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be effective at, among other things, word clustering, document clustering, topic clustering and semantic inference. This research investigates the practical application of LSM in Security Event Analysis, showing the value of using LSM to help practitioners identify types of events, classify events as belonging to certain sources or technologies, and disambiguate different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that improved the efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language or machine-learning techniques.
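The core idea — mapping raw log events into a low-dimensional latent space where events from the same source cluster together — can be sketched with a truncated SVD over a term-event matrix. This is a minimal illustration of the general LSM/LSA technique, not the thesis's adapted pipeline; the log lines are invented examples:

```python
import numpy as np

# Minimal latent-semantic-mapping sketch: build a term-event matrix from raw
# log lines, factor it with a truncated SVD, and compare events in latent space.
events = [
    "sshd failed password for root from 10.0.0.1",
    "sshd failed password for admin from 10.0.0.9",
    "kernel firewall dropped packet from 10.0.0.9",
    "kernel firewall dropped packet from 10.0.0.7",
]
vocab = sorted({w for e in events for w in e.split()})
M = np.array([[e.split().count(w) for w in vocab] for e in events], float)

U, s, Vt = np.linalg.svd(M, full_matrices=False)
X = U[:, :2] * s[:2]          # each event as a 2-d latent vector

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Events generated by the same source sit closer together in latent space,
# even though they are not token-for-token identical.
print(cos(X[0], X[1]), cos(X[0], X[2]))
```

In practice the matrix is weighted (e.g. TF-IDF or entropy weighting) before factoring, and the latent dimensionality is tuned; the two sshd events still end up far more similar to each other than to the firewall events.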
114

Distributed authentication for resource control

Burdis, Keith Robert January 2000
This thesis examines distributed authentication in the process of controlling computing resources. We investigate user sign-on and two of the main authentication technologies that can be used to control a resource through authentication and to provide additional security services. The problems with the existing sign-on scenario are that users have too much credential information to manage and are prompted for this information too often. Single Sign-On (SSO) is a viable solution to this problem if physical procedures are introduced to minimise the risks associated with its use. The Generic Security Services API (GSS-API) provides security services in a manner independent of the environment in which these security services are used, encapsulating security functionality and insulating users from changes in security technology. The underlying security functionality is provided by GSS-API mechanisms. We developed the Secure Remote Password GSS-API Mechanism (SRPGM) to provide a mechanism that has low infrastructure requirements, is password-based and does not require the use of long-term asymmetric keys. We provide implementations of the Java GSS-API bindings and the LIPKEY and SRPGM GSS-API mechanisms. The Simple Authentication and Security Layer (SASL) provides security to connection-based Internet protocols. After finding deficiencies in existing SASL mechanisms we developed the Secure Remote Password SASL mechanism (SRP-SASL), which provides strong password-based authentication and countermeasures against known attacks, while still being simple and easy to implement. We provide implementations of the Java SASL binding and several SASL mechanisms, including SRP-SASL.
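The Secure Remote Password protocol underlying SRPGM and SRP-SASL lets both sides derive a shared secret without the password or the stored verifier ever crossing the wire. The following toy SRP-6a-style exchange shows the algebra only; the small prime and hash-to-integer helper are illustrative simplifications, not the parameters or encoding of the thesis's Java mechanisms or of any standardised SRP group:

```python
import hashlib, secrets

# Toy sketch of SRP-6a key agreement. Real deployments use standardised
# groups and careful byte-level encodings; this shows only the core algebra.
N = 2**255 - 19          # toy prime modulus (NOT a standard SRP group)
g = 2

def H(*parts) -> int:
    """Illustrative hash-to-integer helper (not a standards-conformant encoding)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return int.from_bytes(h.digest(), "big")

# Registration: the server stores (salt, verifier), never the password itself.
password, salt = "correct horse", secrets.token_hex(8)
x = H(salt, password)
v = pow(g, x, N)

# One authentication exchange. k and u are public hash-derived scrambling values.
k = H(N, g)
a, b = secrets.randbelow(N), secrets.randbelow(N)
A = pow(g, a, N)                      # client -> server
B = (k * v + pow(g, b, N)) % N        # server -> client
u = H(A, B)

S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
print(S_client == S_server)           # both sides hold the same secret
```

This property — password-based authentication with no long-term asymmetric keys and no password-equivalent data on the wire — is exactly what makes SRP attractive as a low-infrastructure GSS-API/SASL mechanism.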
115

Evaluation of virtual private network impact on network performance

Nawej, Mukatshung Claude 09 1900
The aim of the study is to investigate what impact the use of a VPN has on network performance. An empirical investigation using quantitative research methods was carried out. Two scenarios were involved in the study: one without a VPN and one with a VPN. In both scenarios, three applications were used in turn: HTTP, FTP, and CBR. FTP was configured using window size and packet size, while CBR used connection rate and packet size; the number of connections was the only parameter used for HTTP. These applications were injected into a 100 Mbps fixed link in an NS2 simulation environment. Average throughput and delay were measured for the two scenarios and the values compared using Student's t-test. While the TCP and HTTP throughputs were found to decrease, the UDP throughput was not affected by the presence of the VPN. Concerning delay, the TCP, UDP and HTTP delays were all found to increase. / Electrical Engineering / M. Tech. (Electrical Engineering (Computer Systems))
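The comparison method described above — testing whether the measured averages differ significantly between the two scenarios — can be sketched with a two-sample t-test. The throughput samples below are invented for illustration; they are not the study's measurements:

```python
from math import sqrt
from statistics import mean, stdev

# Sketch of the study's comparison step: a two-sample (Welch's) t-test on
# throughput measured with and without the VPN. Sample values are invented.
no_vpn = [94.1, 93.8, 94.5, 94.0, 93.9]   # Mbit/s, plain link
vpn    = [88.2, 87.9, 88.6, 88.1, 88.4]   # Mbit/s, through the VPN tunnel

def welch_t(x, y):
    """Welch's t statistic; large |t| means the means are unlikely to be equal."""
    vx, vy = stdev(x) ** 2 / len(x), stdev(y) ** 2 / len(y)
    return (mean(x) - mean(y)) / sqrt(vx + vy)

t = welch_t(no_vpn, vpn)
print(f"t = {t:.2f}")
```

The t statistic is then compared against the critical value for the chosen significance level (or converted to a p-value) to decide whether the VPN's throughput penalty is statistically significant rather than measurement noise.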
116

Wireless network security: a study on the development of datasets for IDS comparison

Vilela, Douglas Willer Ferrari Luz [UNESP] 05 December 2014
The rapid growth of wireless network technology has been very significant in recent years, with use across many sectors of society. The IEEE 802.11 standard stands out in this scenario. However, the protection mechanisms employed by this wireless network standard have not proved effective against denial-of-service attacks. Intrusion detection systems are seen as an effective way to minimise these threats. This research proposes the construction of three datasets that meaningfully represent wireless network traffic, intended to assist in the evaluation of intrusion detection algorithms for wireless networks. To build the datasets, three wireless network scenarios were implemented, all in real, operational environments. A different security mechanism was enabled in each scenario: WEP in scenario 1, IEEE 802.11i in scenario 2, and IEEE 802.11i combined with the IEEE 802.11w amendment in scenario 3. The choice of different scenarios, and the division of the datasets according to these environments, makes it possible to analyse the evolution of the security mechanisms and to categorise each environment. After the wireless network environments were built, normal and anomalous traffic was injected and data collection began. The collected data were pre-processed, retaining only the IEEE 802.11 Media Access Control (MAC) header fields, chosen because this header carries characteristics specific to wireless networks. To validate the datasets, classification and pattern recognition algorithms were employed: Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Bayes Net. The results obtained from the evaluation of the generated datasets indicate that the proposed approach is promising.
117

Analysis of a South African cyber-security awareness campaign for schools using interdisciplinary communications frameworks

Leppan, Claudette January 2017
To provide structure to cyber awareness and educational initiatives in South Africa, Kortjan and Von Solms (2014) developed a five-layer cyber-security awareness and education framework. The purpose of the dissertation is to determine how the framework layers can be refined through the integration of communication theory, with the intention to contribute towards the practical implications of the framework. The study is approached qualitatively and uses a case study for argumentation to illustrate how the existing framework can be further developed. Drawing on several comprehensive campaign planning models, the dissertation illustrates that not all important campaign planning elements are currently included in the existing framework. Proposed changes in the preparation layer include incorporating a situational and target audience analysis, determining resources allocated for the campaign, and formulating a communication strategy. Proposed changes in the delivery layer of the framework are concerned with the implementation, monitoring and adjustment, as well as reporting of campaign successes and challenges. The dissertation builds on, and adds to, the growing literature on the development of campaigns for cyber-security awareness and education aimed at children.
118

User-centred design to engender trust in e-commerce

Obioha, Chinonye Leuna January 2016
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016. / Consumer trust is a core element of any e-commerce website. This study aimed to explore attributes of business-to-consumer (B2C) e-commerce websites that can communicate and engender trust from the users' perspective, using user-centred design. E-commerce websites are known to have features such as security certificates and encryption to ensure trust, but understanding these requires technical know-how. The technologies used to develop websites have improved considerably, yet this has had little effect on improving users' trust in e-commerce, particularly in developing countries (Africa in particular). E-commerce users do not realise that these features have been put in place to make websites trustworthy, which contributes to their reluctance to conduct business transactions online and thus reduces their buying intentions. There is a need to design e-commerce websites that communicate and convey trust from the users' perspective. The study explored various sources of data to obtain insight into and understanding of the research problem: a user-centred design (UCD) group activity with users, interviews with developers, and prior literature. Using UCD as the main methodology, an intensive UCD workshop was carried out with a group of eight e-commerce users. Furthermore, to obtain the view of experts (developers) on what is currently done to engender trust in B2C e-commerce websites, interviews with four respondents were also carried out. These interviews were intended to reduce prejudice or bias and to obtain a clearer understanding of the phenomenon being studied. The findings revealed six main attributes that engender trust, namely aesthetic design, security and information privacy, functionality design, trustworthiness based on content, development process, and vendor attributes. Guidelines for each of the attributes were proposed.
The findings from the users showed that those acquainted with e-commerce technologies were those with computer- and technology-related backgrounds. Most users focused on aesthetic design, functionality, and the security of their privacy and personal details; less emphasis was placed on the technology behind the e-commerce websites. Users rely on aesthetic and cognitive cues when judging trust. The findings were further validated using the Domestication of Technology Theory (DTT), resulting in the development of a user-centred e-commerce trust model.
119

A framework for software patch management in a multi-vendor environment

Hughes, Grant Douglas January 2016
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016. / Software often requires patches to be installed post-implementation for a variety of reasons. Organisations and individuals, however, do not always promptly install these patches as and when they are released. This study investigated the reasons for the delay or hesitation, identified the challenges, and proposed a model that could assist organisations in overcoming the identified challenges. The research investigated the extent to which the integration of software patch management and enterprise data security is an important management responsibility, by reviewing relevant documents and interviewing key role players currently involved in the patch management process. The current challenges and complexities involved in patch management at an enterprise level could place organisations at risk by compromising their enterprise-data security. This research primarily sought to identify the challenges causing the management of software patches to be complex, and further attempted to establish how organisations currently implement patch management. The aim of the study was to explore the complexities of software patch management in order to enhance enterprise data security within organisations. A single case study was used, and data were obtained from primary sources and literature. The study considered both technological and human factors, and found that both factors play an equally important role with regard to the successful implementation of a patch management program within an organisation.
120

Managing infrastructure risks in information communication technology outsourced projects : a case study at Transnet, South Africa

Basson, Delton Jade January 2017
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2017. / The balance between dependency on Information and Communications Technology (ICT) and reducing costs has led to an increase in ICT outsourcing in many organisations. ICT outsourcing has benefits, but organisations have limited knowledge of information security and risks when outsourcing these functions. A lack of information security knowledge or a poor organisational risk culture carries the risk of project failure and security breaches. It is unclear how to manage information risks through infrastructure risk management when outsourcing ICT projects, and this exposes organisations to ICT security risks. The aim of the study is to explore how a selected transport organisation can manage information risks through infrastructure risk management when outsourcing ICT projects. Two primary research questions are posed, namely "what information risks does the ICT department manage when outsourcing ICT projects?" and "how can the ICT department protect its information through infrastructure risk management against ICT security threats when outsourcing ICT?" To answer these questions, a study was conducted at a transport organisation in South Africa. A subjective ontological and interpretivist epistemological stance was adopted and an inductive research approach was followed. The research strategy was a case study. Data for this study were gathered through interviews (17 in total) using semi-structured questionnaires. The collected data were transcribed, summarised, and categorised to provide a clear understanding of the data. Forty findings and eight themes were identified. The themes are ICT outsourcing, information risks, costs, ICT vendor dependency, vendor access and management, risk management, user awareness, and frameworks. Guidelines comprising six primary components are proposed.
The results point to gaps that need to be addressed to ensure that information is protected when outsourcing ICT projects. Measures need to be put in place and communication has to be improved among operating divisions. The findings lead to questions such as "how does business create an ICT security culture to ensure that information is protected at all times?" and "does vendor access management really get the attention it requires?" Further studies on human behaviour towards ICT security are needed to ensure the protection of organisations against security risks.
