161

The ISO/IEC 27002 and ISO/IEC 27799 information security management standards : a comparative analysis from a healthcare perspective

Ngqondi, Tembisa Grace January 2009 (has links)
Technological change has become significant in the health sector and is an area of concern with regard to securing health information assets. Health information systems hosting personal health information expose these information assets to ever-evolving threats. This information includes aspects of an extremely sensitive nature; for example, a particular patient may have a history of drug abuse, which would be reflected in the patient’s medical record. The private nature of patient information places a higher demand on the need to ensure privacy. Ensuring that the security and privacy of health information remain intact is therefore vital in the healthcare environment. In order to protect information appropriately and effectively, good information security management practices should be followed. To this end, the International Organization for Standardization (ISO) published a code of practice for information security management, namely the ISO 27002 (2005). This standard is widely used in industry, but it is a generic standard aimed at all industries and therefore does not consider the unique security needs of a particular environment. Because of the unique nature of personal health information and its security and privacy requirements, the need to introduce a healthcare sector-specific standard for information security management was identified. The ISO 27799 was therefore published as an industry-specific variant of the ISO 27002, geared towards addressing security requirements in health informatics. It serves as an implementation guide for the ISO 27002 when implemented in the health sector. The publication of the ISO 27799 is considered a positive development in the quest to improve health information security. However, the question arises whether the ISO 27799 addresses the security needs of the healthcare domain sufficiently. The extensive use of the ISO 27002 implies that many proponents of this standard in healthcare now have to ensure that they meet the (assumed) increased requirements of the ISO 27799. The purpose of this research is therefore to conduct a comprehensive comparison of the ISO 27002 and ISO 27799 standards to determine whether the ISO 27799 serves the specific needs of the health sector from an information security management point of view.
162

A model for legal compliance in the South African banking sector : an information security perspective

Maphakela, Madidimalo Rabbie January 2008 (has links)
In the past, many organisations kept their information on paper, which resulted in the loss of important information. In today’s knowledge era, the information super-highway facilitates highly connected electronic environments in which business applications can communicate on an intra- as well as an inter-organisational level. As business expanded further into the cyber-world, so did the need to protect the information organisations hold. Technological advances did not only bring benefits; they also increased the vulnerability of companies’ information. Information, the lifeblood of an organisation, must be protected from threats such as hackers and fraud, amongst others. In the highly regulated financial sector, the protection of information is not only a best practice but a legal obligation carrying penalties for non-compliance. On the positive side, organisations can, with the aid of legal sources, identify security controls that can help them to secure their information. However, organisations find themselves burdened by a burgeoning number of legal sources and requirements, which require vast resources and often become unmanageable. This research focuses on finding a solution that enables South African banks to comply with multiple legal sources, as seen from an information security perspective.
163

A national strategy towards cultivating a cybersecurity culture in South Africa

Gcaza, Noluxolo January 2017 (has links)
In modern society, cyberspace is interwoven into the daily lives of many. Cyberspace is increasingly redefining how people communicate as well as gain access to and share information. Technology has transformed the way the business world operates by introducing new ways of trading goods and services whilst bolstering traditional business methods. It has also altered the way nations govern. Thus, individuals, organisations and nations are relying on this technology to perform significant functions. Alongside the positive innovations afforded by cyberspace, however, those who use it are exposed to a variety of risks. Cyberspace is beset by criminal activities such as cybercrime, fraud and identity theft, to name but a few. Nonetheless, the negative impact of these cyber threats does not outweigh the advantages of cyberspace. In light of such threats, there is a call for all entities that reap the benefits of online services to institute cybersecurity. As such, cybersecurity is a necessity for individuals, organisations and nations alike. In practice, cybersecurity focuses on preventing and mitigating certain security risks that might compromise the security of relevant assets. For a long time, technology-centred measures have been deemed the most significant solution for mitigating such risks. However, after a legacy of unsuccessful technological efforts, it became clear that such solutions in isolation are insufficient to mitigate all cyber-related risks. This is mainly due to the role that humans play in the security process, that is, the human factor. In isolation, technology-centred measures tend to fail to counter the human factor because of the perception among many users that security measures are an obstacle and consequently a waste of time. This user perception can be credited to the perceived difficulty of the security measure, as well as apparent mistrust and misinterpretation of the measure. Hence, cybersecurity necessitates the development of a solution that encourages acceptable user behaviour in the reality of cyberspace. The cultivation of a cybersecurity culture is thus regarded as the best approach for addressing the human factors that weaken the cybersecurity chain. While the role of culture in pursuing cybersecurity is well appreciated, research focusing on defining and measuring cybersecurity culture is still in its infancy. Furthermore, studies have shown that there are no widely accepted key concepts that delimit a cybersecurity culture. However, the notion that such a culture is not well delineated has not prevented national governments from pursuing a culture in which all citizens behave in a way that promotes cybersecurity. As a result, many countries now run national cybersecurity campaigns to foster a culture of cybersecurity at a national level. South Africa is among the nations that have identified cultivating a culture of cybersecurity as a strategic priority. However, there is an apparent lack of a practical plan for cultivating such a cybersecurity culture in South Africa. Thus, this study sought firstly to confirm from the existing body of knowledge that cybersecurity culture is indeed ill-defined and, secondly, to delineate what constitutes a national cybersecurity culture. Finally, and primarily, it sought to devise a national strategy that would assist South Africa in fulfilling its objective of cultivating a culture of cybersecurity on a national level.
164

The computer incident response framework (CIRF)

Pieterse, Theron Anton 10 October 2014 (has links)
M.Com. (Informatics) / A company’s valuable information assets face many risks from internal and external sources. When these risks are exploited and reports on information assets are made public, it is usually easy to determine which companies had a contingency plan to deal with the various aspects of these “computer incidents”. This study incorporates the important factors of computer incidents into a framework that will assist a company in effectively dealing with and managing computer incidents when they occur.
165

An incremental approach to a secure e-commerce environment

Mapeka, Kgabo Elizabeth 07 October 2014 (has links)
M.Sc. (Computer Science) / The terms "Electronic Commerce" and "Internet Commerce" are often used interchangeably to mean similar processes. By definition, electronic commerce (e-commerce) means any exchange of information that occurs electronically. There are various types of electronic commerce transactions, to name a few: electronic data interchange (EDI), fax, electronic funds transfer, interorganisational systems, technical data and document exchange, customer credit approval systems, and interaction with customers and vendors ([151], p. 27). The term Internet commerce evolved with the era of the Internet. It became evident that both businesses and consumers are gradually conducting business via the Internet. For the purpose of this dissertation the term e-commerce will be used to refer to both electronic commerce and Internet commerce. The aim of this dissertation is to give guidance to organisations or individuals wishing to build a secure electronic commerce environment. This will be achieved by presenting an incremental, phase-by-phase reference model. The model gives guidance on how to establish a network (a local area network) with the intention of expanding it through various phases into a complete, secure electronic commerce environment. The dissertation is structured in ten chapters, which are discussed in detail in chapter 1. Chapter 1 sets out the problem addressed in this dissertation, the main objective of the dissertation and its structure. Chapter 2 introduces the framework of the reference model and presents the different phases of the e-commerce reference model. Chapters 3 to 8 outline the phases of the e-commerce reference model in detail.
166

Modeling personally identifiable information leakage that occurs through the use of online social networks

Louw, Candice 30 June 2015 (has links)
M.Sc. (Computer Science) / With the phenomenal growth of the Online Social Network (OSN) industry in the past few years, users have resorted to storing vast amounts of personal information on these sites. The information stored on these sites is often readily accessible from anywhere in the world and not always protected by adequate security settings. As a result, user information can make its way, unintentionally, into the hands of not only other online users, but also online abusers. Online abusers, better known as cyber criminals, exploit user information to commit acts of identity theft, Advanced Persistent Threats (APTs) and password recovery, to mention only a few. As OSN users are incapable of visualising the process of access to their OSN information, they may choose never to adjust their security settings, which can ultimately amount to setting themselves up to become victims of cyber crime. In this dissertation we aim to address this problem by proposing a prototype system, the Information Deduction Model (IDM), that can visualise and simulate the process of accessing information on an OSN profile. By visually explaining concepts such as information access, deduction and leakage, we aim to provide users with a tool that will enable them to make more informed choices about the security settings on their OSN profiles, thereby setting themselves up for a pleasant online experience.
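As a toy illustration of the kind of deduction the proposed model is meant to visualise, the following Python sketch combines attributes that a hypothetical profile exposes publicly to infer an attribute the user believed was hidden. The attribute names and the single deduction rule are invented for this example and are not part of the IDM prototype.

```python
# Toy illustration of information deduction from public OSN profile fields.
# Attribute names and the deduction rule below are invented for this sketch.

profile = {
    "public": {"employer": "Acme Corp", "check_ins": ["Acme Corp HQ, Johannesburg"]},
    "private": {"home_city": "Johannesburg", "email": "hidden"},
}

def deduce(public: dict) -> dict:
    """Derive likely hidden attributes from publicly visible ones."""
    deduced = {}
    # Rule: repeated check-ins in one city suggest the user's home city.
    for place in public.get("check_ins", []):
        city = place.split(",")[-1].strip()
        deduced["likely_home_city"] = city
    return deduced

leaked = deduce(profile["public"])
for attribute, value in leaked.items():
    # Leakage occurs when a deduced value matches information kept private.
    if value in profile["private"].values():
        print(f"deduced private attribute: {attribute} = {value}")
```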
167

MOSS : a model for open system security

Van Zyl, Pieter Willem Jordaan 12 September 2012 (has links)
Ph.D. / This thesis looks at current security problems within open system environments, that is, security problems within heterogeneous computer system environments that are interconnected via computer networks. Thereafter two security models, Kerberos and the Path Context Model, are considered, together with their respective abilities to address these security problems. Using concepts of the Path Context Model, a new security model, called MOSS (Model for Open System Security), is developed, and it is shown how MOSS can address all the security problems identified. Two possible implementations of MOSS are then considered: the one is based on the concept of Static Security Agents (SSAs) for contemporary open system environments, and the other is based on the concept of Roaming Security Agents (RSAs) for object-oriented open system environments. The research is concluded with a summary of possible future research considerations.
168

Flexible Digital Authentication Techniques

Ge, He 05 1900 (has links)
This dissertation investigates authentication techniques in some emerging areas. Specifically, authentication schemes have been proposed that are well suited for embedded systems and for privacy-respecting pay Web sites. With embedded systems, a person could own several devices which are capable of communication and interaction, but these devices use embedded processors whose computational capabilities are limited compared to desktop computers. Examples of this scenario include entertainment devices or appliances owned by a consumer, multiple control and sensor systems in an automobile or airplane, and environmental controls in a building. An efficient public key cryptosystem has been devised which provides a complete solution for an embedded system, including protocols for authentication, authenticated key exchange, encryption, and revocation. The new construction is especially suitable for devices with constrained computing capabilities and resources. Compared with other available authentication schemes, such as X.509 and identity-based encryption, the new construction provides unique features such as simplicity, efficiency, forward secrecy, and an efficient re-keying mechanism. In the application scenario of a pay Web site, users may be sensitive about their privacy and may not wish their behaviors to be tracked by Web sites. Thus, an anonymous authentication scheme is desirable in this case: a user can prove his or her authenticity without revealing his or her identity. On the other hand, the Web site owner would like to prevent multiple users from sharing a single subscription while hiding behind user anonymity. The Web site should be able to detect these possible malicious behaviors and exclude corrupted users from future service. This dissertation extensively discusses anonymous authentication techniques, such as group signatures, direct anonymous attestation, and traceable signatures. Three anonymous authentication schemes have been proposed: a group signature scheme with signature claiming and variable linkability, a scheme for direct anonymous attestation in trusted computing platforms with sign and verify protocols nearly seven times more efficient than the current solution, and a state-of-the-art traceable signature scheme with support for variable anonymity. These three schemes greatly advance research in the area of anonymous authentication. The authentication techniques presented in this dissertation are based on common mathematical and cryptographic foundations and share similar security assumptions; we call them flexible digital authentication schemes.
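As a minimal sketch of the interface that an anonymous authentication (group signature) scheme exposes, the Python stub below names the usual operations: a manager sets up the group and admits members, members sign anonymously, anyone verifies against the group public key, and the manager can open a signature to trace abuse. The class and method names are illustrative, and the cryptographic constructions from the dissertation are deliberately omitted.

```python
from abc import ABC, abstractmethod

class GroupSignatureScheme(ABC):
    """Operations an anonymous-authentication (group signature) scheme exposes.

    Members sign on behalf of the group without revealing which member signed;
    a designated authority can later open a signature to trace a misbehaving
    member and revoke them. Concrete cryptographic constructions are out of
    scope for this interface sketch.
    """

    @abstractmethod
    def setup(self):
        """Create the group public key and the group manager's secret key."""

    @abstractmethod
    def join(self, member_id):
        """Issue a membership credential (signing key) to a new member."""

    @abstractmethod
    def sign(self, member_key, message: bytes) -> bytes:
        """Produce a signature that verifies against the group public key
        without identifying the individual signer."""

    @abstractmethod
    def verify(self, group_public_key, message: bytes, signature: bytes) -> bool:
        """Check that some legitimate, non-revoked group member signed."""

    @abstractmethod
    def open(self, manager_key, signature: bytes):
        """Trace a signature back to its signer, e.g. after detected abuse."""
```

In a pay-Web-site setting, `verify` is what the site runs on each login, while `open` is the escape hatch that lets the manager identify subscribers who share credentials while hiding behind anonymity.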
169

Computação autonômica aplicada ao diagnóstico e solução de anomalias de redes de computadores / Autonomic computing applied to the diagnosis and solution of network anomalies

Amaral, Alexandre de Aguiar, 1986- 27 August 2018 (has links)
Advisors: Leonardo de Souza Mendes, Mario Lemes Proença Junior / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: The challenges inherent in network administration increase daily. Among these challenges is the problem of diagnosing and repairing network anomalies. Current solutions have not been sufficient to meet the requirements of large-scale networks, mainly owing to their lack of autonomicity and scalability. In this thesis, autonomic and distributed computing concepts are exploited to diagnose and treat network anomalies in real time. In this proposal, autonomic entities are hierarchically distributed and are responsible for detecting and repairing the anomalies in their domain regions with minimal human intervention. This provides scalability, enabling the system to be deployed in large-scale networks. The autonomicity of the autonomic entities reduces manual intervention and the likelihood of errors in the analysis and decision-making process, minimizing the complexity perceived by network management in the anomaly detection procedure. Experiments were performed on two different networks: the Federal University of Technology - Paraná (UTFPR), Toledo Campus, and the Federal Institute of Santa Catarina, GW Campus. The results demonstrated the efficacy and autonomicity of the solution in detecting and repairing various anomalies in real time, with minimal human intervention. / Doctorate / Telecommunications and Telematics / Doctor in Electrical Engineering
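As a rough illustration of the kind of closed control loop such an autonomic entity could run, the Python sketch below monitors traffic for one network segment, flags intervals that deviate strongly from a learned baseline, and applies a corrective action without human intervention. The 3-sigma rule, the segment name, and the repair action are assumptions made for this example, not the detection and repair mechanisms used in the thesis.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class AutonomicEntity:
    """One entity in the hierarchy, responsible for its own network segment."""
    segment: str
    baseline: list  # recent per-interval traffic volumes considered normal

    def is_anomalous(self, observed: float) -> bool:
        # Analyze: flag the interval if it deviates strongly from the baseline
        # (a simple 3-sigma rule stands in for a real detector).
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        return abs(observed - mu) > 3 * sigma

    def repair(self, observed: float) -> str:
        # Plan/execute: choose a corrective action for the detected anomaly.
        # A real entity would apply it (rate-limit, block, reroute) itself.
        return f"rate-limit applied on segment {self.segment} (volume={observed})"

    def step(self, observed: float) -> None:
        if self.is_anomalous(observed):
            print(self.repair(observed))                     # act autonomously
        else:
            self.baseline = self.baseline[1:] + [observed]   # update the baseline

entity = AutonomicEntity(segment="campus-toledo", baseline=[100, 110, 95, 105, 98])
for volume in [102, 99, 950, 101]:   # 950 simulates an anomalous traffic spike
    entity.step(volume)
```

Because each entity only watches its own segment and acts locally, many such loops can run in parallel across a hierarchy, which is what gives this style of design its scalability.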
170

Structure and Feedback in Cloud Service API Fuzzing

Atlidakis, Evangelos January 2021 (has links)
Over the last decade, we have witnessed an explosion in cloud services for hosting software applications (Software-as-a-Service), for building distributed services (Platform-as-a-Service), and for providing general computing infrastructure (Infrastructure-as-a-Service). Today, most cloud services are programmatically accessed through Application Programming Interfaces (APIs) that follow the REpresentational State Transfer (REST) software architectural style and cloud service developers use interface-description languages to describe and document their services. My thesis is that we can leverage the structured usage of cloud services through REST APIs and feedback obtained during interaction with such services in order to build systems that test cloud services in an automatic, efficient, and learning-based way through their APIs. In this dissertation, I introduce stateful REST API fuzzing and describe its implementation in RESTler: the first stateful REST API fuzzing system. Stateful means that RESTler attempts to explore latent service states that are reachable only with sequences of multiple interdependent API requests. I then describe how stateful REST API fuzzing can be extended with active property checkers that test for violations of desirable REST API security properties. Finally, I introduce Pythia, a new fuzzing system that augments stateful REST API fuzzing with coverage-guided feedback and learning-based mutations.
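To make the idea of interdependent request sequences concrete, the following Python sketch chains two requests so that the second can only be exercised once the first has put the service into the required state. The endpoint paths, the `id` field in the response, and the use of the `requests` library are illustrative assumptions, not RESTler's actual design.

```python
import random
import requests

BASE = "http://localhost:8080"  # hypothetical service under test

def create_item(session):
    # First request in the sequence: creates a resource and returns its id.
    resp = session.post(f"{BASE}/api/items", json={"name": random.choice(["a", "b", "c"])})
    resp.raise_for_status()
    return resp.json()["id"]  # assumed response schema

def fuzz_sequence(session):
    # The second request depends on state produced by the first one:
    # this endpoint is only reachable with a valid, previously created id.
    item_id = create_item(session)
    payload = {"quantity": random.choice([-1, 0, 2**31, "NaN"])}  # fuzzed values
    resp = session.put(f"{BASE}/api/items/{item_id}", json=payload)
    # Feedback: unexpected 5xx responses hint at a bug in a latent service state.
    if resp.status_code >= 500:
        print("possible bug:", item_id, payload, resp.status_code)

if __name__ == "__main__":
    with requests.Session() as s:
        for _ in range(100):
            fuzz_sequence(s)
```

Such producer/consumer dependencies between requests are what allow a stateful fuzzer to reach service states that a single, independent request could never trigger.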
