101

Personal information security : legislation, awareness and attitude.

Parbanath, Steven. 01 October 2013 (has links)
Ecommerce refers to the buying and selling of products and services electronically via the Internet and other computer networks (Electronic Commerce 2011). The critical components of ecommerce are a well-designed website and a merchant account for payment by the customer (Ecommerce critical components 2008). Merchants that sell their products and services via the Internet have a competitive edge over those that do not. It is therefore becoming common practice for both small and large businesses to transact electronically. With these vast opportunities, new risks and vulnerabilities are introduced. Consumers are reluctant to transact electronically because of the fear of unauthorized access and interception of confidential information (Online Banking Concerns 2011). Other fears include the changing of data with malicious intent, denial of use, hacking, deliberate disclosure of confidential information and e-mail associated risks (Safeena, Abdulla & Date 2010). The use of technology such as encryption and decryption has not adequately addressed these problems because fraudsters have found new and sophisticated methods of obtaining consumer information illegally. Phishing is one such method. Phishing results in identity theft and financial fraud when the fraudster tricks online users into giving up confidential information such as passwords, identity numbers and credit card numbers, and personal information such as birthdates and maiden names. The fraudster then uses this information to impersonate the victim, transferring funds from the victim's account or using the victim's information to make purchases (Srivastava 2007). Since 2002, many laws passed in South Africa have attempted to allay these fears so that consumers can conduct business electronically with confidence. The following legislation aims to protect consumers:
- The Electronic Communications and Transactions Act (Republic of South Africa 2002).
- The Consumer Protection Act (Republic of South Africa 2008).
- The Protection of Personal Information Bill, which is expected to be passed in 2011 (Republic of South Africa 2009).
This research aims to examine the extent to which this legislation can address the security concerns of consumers. The researcher is also interested in ascertaining how knowledgeable consumers are about this legislation and what their attitudes are towards the security of their personal information. / Thesis (M.Com.)-University of KwaZulu-Natal, Westville, 2011.
102

Architectural support for autonomic protection against stealth by rootkit exploits

Vasisht, Vikas R. 19 November 2008 (has links)
Operating system security has become a growing concern. As the complexity of software layers increases, so do the vulnerabilities that can be exploited by adversaries. Rootkits are gaining much attention in cyber-security. Rootkits are installed by an adversary after he or she gains elevated access to the computer system. They are used to maintain a consistent, undetectable presence in the computer system and act as a toolkit for hiding all malware activity from the system administrator and anti-malware tools. The current defense mechanism used to prevent such activities is to strengthen the OS kernel and fix known vulnerabilities. Software tools are developed at the OS or virtual machine monitor (VMM) level to monitor the integrity of the kernel and to catch any suspicious activity after infection. Recognizing the failure of software techniques and attempting to solve the endless war between the anti-rootkit and rootkit camps, in this thesis we propose an autonomic architecture called SHARK, or Secure Hardware support Against RootKits. This new hardware architecture provides system-level security against the stealth activities of rootkits without trusting the entire software stack. It enhances the relationship between the OS and the hardware and rules out the possibility of any hidden activity even when the OS is completely compromised. SHARK proposes a novel hardware manager that provides a secure association with every software context making use of hardware resources. It helps system administrators obtain feedback directly from the hardware to reveal all running processes. This direct feedback makes it impossible for rootkits to conceal running software contexts from the system administrator. We emulated the proposed SHARK architecture using the Bochs hardware simulator and a Linux kernel, version 2.6.16.33, modified for the proposed architectural extension. In our emulated environment, we installed several real rootkits that compromise the kernel and conceal malware processes. SHARK is shown to be very effective in defending against a variety of rootkits employing different software schemes. We also performed a performance analysis using SIMICS simulations; the results show negligible overhead, making the proposed solution very practical.
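Purely as an illustration of the cross-view idea that SHARK's hardware feedback enables (not the architecture's actual interface), the sketch below compares a set of software contexts reported directly by hardware with the process list reported by a possibly compromised OS; every function name and identifier here is hypothetical.

```python
# Hypothetical cross-view check: contexts the hardware observed but the OS did
# not report are candidate hidden (rootkit-concealed) processes.

def find_hidden_contexts(hardware_contexts, os_process_ids):
    """Return context IDs seen by hardware but absent from the OS process list."""
    return set(hardware_contexts) - set(os_process_ids)

# Made-up identifiers: PID 4242 is running (hardware sees its context) but a
# rootkit has unlinked it from the OS task list.
hw_view = [1, 512, 879, 4242]
os_view = [1, 512, 879]
print(find_hidden_contexts(hw_view, os_view))  # {4242}
```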
103

Acquisition and diffusion of technology innovation

Ransbotham, Samuel B., III 31 March 2008 (has links)
In the first essay, I examine value created through external acquisition of nascent technology innovation. External acquisition of new technology is a growing trend in the innovation process, particularly in high technology industries, as firms complement internal efforts with aggressive acquisition programs. Yet, despite its importance, there is little empirical research on the timing of acquisition decisions in high technology environments. I examine the impact of target age on value created for the buyer. Applying an event study methodology to technology acquisitions in the telecommunications industry from 1995 to 2001, empirical evidence supports acquiring early in the face of uncertainty. The equity markets reward the acquisition of younger companies. In sharp contrast to the first essay, the second essay examines the diffusion of negative innovations. While destruction can be creative, certainly not all destruction is creative. Some is just destruction. I examine two fundamentally different paths to information security compromise: an opportunistic path and a deliberate path. Through a grounded approach using interviews, observations, and secondary data, I advance a model of the information security compromise process. Using one year of alert data from intrusion detection devices, empirical analysis provides evidence that these paths follow two distinct, but interrelated, diffusion patterns. Although distinct, I find empirical evidence that these paths both converge and escalate. Beyond the specific findings in the Internet security context, the study leads to a richer understanding of the diffusion of negative technological innovation. In the third essay, I build on the second essay by examining the effectiveness of reward-based mechanisms in restricting the diffusion of negative innovations. Concerns have been raised that reward-based private infomediaries introduce information leakage which decreases social welfare. Using two years of alert data, I find evidence of their effectiveness despite any leakage which may be occurring. While reward-based disclosures are just as likely to be exploited as non-reward-based disclosures, exploits from reward-based disclosures are less likely to occur in the first week after disclosure. Further, the overall volume of alerts is reduced. This research helps determine the effectiveness of reward mechanisms and provides guidance for security policy makers.
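As a rough, assumed reading of the event study methodology the first essay mentions (the thesis's own estimation windows and model specification may differ), the following sketch computes market-model abnormal returns around an announcement date; all numbers are synthetic.

```python
import numpy as np

def abnormal_returns(stock_ret, market_ret, est_window, event_window):
    """Fit a market model (alpha, beta) over the estimation window, then return
    abnormal returns (actual minus predicted) over the event window."""
    beta, alpha = np.polyfit(market_ret[est_window], stock_ret[est_window], 1)
    predicted = alpha + beta * market_ret[event_window]
    return stock_ret[event_window] - predicted

# Made-up daily returns around a hypothetical acquisition announcement.
rng = np.random.default_rng(0)
mkt = rng.normal(0.0005, 0.01, 260)
stk = 0.0003 + 1.2 * mkt + rng.normal(0, 0.015, 260)
ar = abnormal_returns(stk, mkt, est_window=slice(0, 250), event_window=slice(250, 256))
print("Cumulative abnormal return over event window:", ar.sum())
```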
104

Framework for botnet emulation and analysis

Lee, Christopher Patrick 12 March 2009 (has links)
Criminals use the anonymity and pervasiveness of the Internet to commit fraud, extortion, and theft. Botnets are used as the primary tool for this criminal activity. Botnets allow criminals to accumulate and covertly control multiple Internet-connected computers. They use this network of controlled computers to flood networks with traffic from multiple sources, send spam, spread infection, spy on users, commit click fraud, run adware, and host phishing sites. This presents serious privacy risks and financial burdens to businesses and individuals. Furthermore, all indicators show that the problem is worsening because the research and development cycle of the criminal industry is faster than that of security research. To enable researchers to study botnet communication models and counter-measures, a flexible, rapidly augmentable framework for creating test botnets is provided. This botnet framework, Rubot, written in the Ruby language, enables researchers to run a botnet on a closed network and to rapidly implement new communication, spreading, control, and attack mechanisms for study. This is a significant improvement over augmenting the C++ code-bases of the most popular botnets, Agobot and SDBot. Rubot allows researchers to implement new threats and their corresponding defenses before the criminal industry can. The Rubot experiment framework includes models for some of the latest trends in botnet operation, such as peer-to-peer based control, fast-flux DNS, and periodic updates. Our approach implements the key network features from existing botnets and provides the required infrastructure to run the botnet in a closed environment.
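Rubot itself is a Ruby framework; the sketch below is only a language-neutral illustration (in Python) of the pluggable-module idea the abstract describes, where researchers register new communication or control behaviours for an emulated bot on a closed test network. None of these class or method names come from Rubot's actual API.

```python
# Illustrative plug-in registry for an emulated bot in a closed testbed.
# Behaviours are registered and dispatched by name, so new communication,
# control, or update mechanisms can be swapped in for study.

class EmulatedBot:
    def __init__(self):
        self.handlers = {}

    def register(self, command, handler):
        """Plug in a new behaviour (e.g. a control or update command) for study."""
        self.handlers[command] = handler

    def dispatch(self, command, *args):
        handler = self.handlers.get(command)
        return handler(*args) if handler else None

# Hypothetical usage: log a simulated "update" instruction instead of acting on it.
bot = EmulatedBot()
bot.register("update", lambda url: f"would fetch new module from {url} (simulated)")
print(bot.dispatch("update", "http://testbed.local/module"))
```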
105

Knowledge based anomaly detection

Prayote, Akara, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Traffic anomaly detection is a standard task for network administrators, who with experience can generally differentiate anomalous traffic from normal traffic. Many approaches have been proposed to automate this task. Most attempt to develop a sufficiently sophisticated model to represent the full range of normal traffic behaviour. There are significant disadvantages to this approach. Firstly, a large amount of training data for all acceptable traffic patterns is required to train the model. For example, it can be perfectly obvious to an administrator how traffic changes on public holidays, but very difficult, if not impossible, for a general model to learn to cover such irregular or ad-hoc situations. In contrast, in the proposed method a number of models are gradually created, while in use, to cover the variety of patterns seen. Each model covers a specific region of the problem space, so any novel or ad-hoc pattern can be covered easily. The underlying technique is a knowledge acquisition approach named Ripple Down Rules (RDR). In essence, we use Ripple Down Rules to partition the domain, adding new partitions as new situations are identified. Within each supposedly homogeneous partition we use fairly simple statistical techniques to identify anomalous data. The special feature of these statistics is that they remain reasonably robust with small amounts of data, which is the critical situation whenever a new partition is added. We developed a two-knowledge-base approach: one knowledge base partitions the domain, and within each partition statistics are accumulated on a number of different parameters; the resultant data are passed to a second knowledge base, which decides whether enough parameters are anomalous to raise an alarm. We evaluated the approach on real network data. The results compare favourably with other techniques, with the advantage that the RDR approach allows new patterns of use to be added to the model rapidly. We also used the approach to extend previous work on prudent expert systems - expert systems that warn when a case is outside their range of experience. Of particular significance, we were able to reduce the false positive rate to about 5%.
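A minimal sketch of the two-step idea, assuming a toy partitioning rule and a robust per-partition statistic: the rule below stands in for the Ripple Down Rules knowledge base, and the median-absolute-deviation test stands in for the thesis's per-partition statistics. The threshold and data are illustrative only.

```python
import statistics

def partition(sample):
    # Toy stand-in for the RDR partitioning knowledge base: separate
    # public-holiday traffic from ordinary weekday traffic.
    return "holiday" if sample["is_holiday"] else "weekday"

def is_anomalous(value, history, k=5.0):
    """Flag a value as anomalous within its partition using a median/MAD test,
    which stays reasonably robust when a new partition has little data."""
    if len(history) < 3:          # too little data yet: accept and keep learning
        return False
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1e-9
    return abs(value - med) > k * mad

history = {"weekday": [120, 130, 125, 128, 119], "holiday": [40, 38, 45]}
sample = {"is_holiday": False, "packets_per_sec": 480}
part = partition(sample)
print(part, is_anomalous(sample["packets_per_sec"], history[part]))  # weekday True
```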
106

Architectural support for autonomic protection against stealth by rootkit exploits

Vasisht, Vikas R.. January 2008 (has links)
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Lee, Hsien-Hsin; Committee Member: Blough, Douglas; Committee Member: Copeland, John. Part of the SMARTech Electronic Thesis and Dissertation Collection.
107

Automatic identification and removal of low quality online information

Webb, Steve. January 2008 (has links)
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2009. / Committee Chair: Pu, Calton; Committee Member: Ahamad, Mustaque; Committee Member: Feamster, Nick; Committee Member: Liu, Ling; Committee Member: Wu, Shyhtsun Felix. Part of the SMARTech Electronic Thesis and Dissertation Collection.
108

Securing media streams in an Asterisk-based environment and evaluating the resulting performance cost

Clayton, Bradley 08 January 2007 (has links)
When adding Confidentiality, Integrity and Availability (CIA) to a multi-user VoIP (Voice over IP) system, performance and quality are at risk. The aim of this study is twofold. Firstly, it describes current methods suitable to secure voice streams within a VoIP system and make them available in an Asterisk-based VoIP environment. (Asterisk is a well established, open-source, TDM/VoIP PBX.) Secondly, this study evaluates the performance cost incurred after implementing each security method within the Asterisk-based system, using a special testbed suite, named DRAPA, which was developed expressly for this study. The three security methods implemented and studied were IPSec (Internet Protocol Security), SRTP (Secure Real-time Transport Protocol), and SIAX2 (Secure Inter-Asterisk eXchange 2 protocol). From the experiments, it was found that bandwidth and CPU usage were significantly affected by the addition of CIA. In ranking the three security methods in terms of these two resources, it was found that SRTP incurs the least bandwidth overhead, followed by SIAX2 and then IPSec. Where CPU utilisation is concerned, it was found that SIAX2 incurs the least overhead, followed by IPSec, and then SRTP.
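Purely to illustrate how the relative cost of each method could be expressed once baseline and secured measurements are in hand, here is a small sketch; the figures are placeholders, not measurements from the DRAPA testbed or the thesis.

```python
# Placeholder overhead comparison: percentage increase of each secured stream
# over a hypothetical unsecured RTP baseline. Values are invented for illustration.

def overhead_pct(secured, baseline):
    return 100.0 * (secured - baseline) / baseline

baseline_kbps = 80.0  # hypothetical unsecured voice stream
for method, kbps in {"SRTP": 84.0, "SIAX2": 86.0, "IPSec": 95.0}.items():
    print(f"{method}: {overhead_pct(kbps, baseline_kbps):.1f}% bandwidth overhead")
```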
109

A formalised ontology for network attack classification

Van Heerden, Renier Pelser January 2014 (has links)
One of the most popular attack vectors against computers is their network connections. Attacks on computers through their networks are commonplace and have varying levels of complexity. This research describes network-based computer attacks in the form of a story, formally, and within an ontology. The ontology categorises network attacks with attack scenarios as the focal class. This class consists of: Denial-of-Service, Industrial Espionage, Web Defacement, Unauthorised Data Access, Financial Theft, Industrial Sabotage, Cyber-Warfare, Resource Theft, System Compromise, and Runaway Malware. The ontology was developed by building a taxonomy and a temporal network attack model. Network attack instances (also known as individuals) are classified according to their respective attack scenarios with the use of an automated reasoner within the ontology. The automated reasoner's deductions are verified formally; and via the automated reasoner, a relaxed set of scenarios is determined, which is relevant in a near real-time environment. A prototype system (called Aeneas) was developed to classify network-based attacks. Aeneas integrates the sensors into a detection system that can classify network attacks in a near real-time environment. To verify the ontology and the Aeneas prototype, a virtual test bed was developed in which network-based attacks were generated to exercise the detection system. Aeneas was able to detect incoming attacks and classify them according to their scenario. The novel part of this research is the attack scenarios, which are described in the form of a story as well as formally and in an ontology. The ontology is used in a novel way to determine the class to which attack instances belong and how the network attack ontology is affected in a near real-time environment.
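The real classification is performed by an automated reasoner over a formal ontology; as a toy stand-in only, the sketch below assigns an attack instance (individual) to a scenario class from a few observed properties. The property names and rules are invented for illustration and do not come from the thesis's ontology or from Aeneas.

```python
# Toy rule-based stand-in for ontology-driven scenario classification.
# Each attack instance is a dict of observed properties (all names invented).

def classify_scenario(instance):
    if instance.get("floods_service") and not instance.get("data_exfiltrated"):
        return "Denial-of-Service"
    if instance.get("data_exfiltrated") and instance.get("target_is_competitor"):
        return "Industrial Espionage"
    if instance.get("website_altered"):
        return "Web Defacement"
    return "System Compromise"

attack = {"floods_service": True, "data_exfiltrated": False}
print(classify_scenario(attack))  # Denial-of-Service
```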
110

Log analysis aided by latent semantic mapping

Buys, Stephanus 14 April 2013 (has links)
In an age of zero-day exploits and increased on-line attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response, forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the increased volumes at which events are often generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in the formats or contents of the events. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events in order for security analysis and automation to proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be an effective method of, among other things, enabling word clustering, document clustering, topic clustering and semantic inference. This research is an investigation into the practical application of LSM in the discipline of Security Event Analysis, showing the value of using LSM to assist practitioners in identifying types of events, classifying events as belonging to certain sources or technologies and disambiguating different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that resulted in improved efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption and use of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language or computer-learning processing techniques.
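To make the latent-semantic treatment of log events concrete, here is a minimal sketch assuming a scikit-learn pipeline (TF-IDF vectors reduced by truncated SVD, then clustered); this is an assumed illustration of the general technique, not the tooling or corpus used in the thesis, and the sample log lines are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

# Log events become TF-IDF vectors, truncated SVD maps them into a small
# "semantic" space, and clustering groups events of the same type/source.
events = [
    "sshd[912]: Failed password for root from 10.0.0.5 port 4711",
    "sshd[913]: Failed password for admin from 10.0.0.9 port 4712",
    "kernel: eth0 link up, 1000Mbps full duplex",
    "kernel: eth1 link down",
]

lsa = make_pipeline(TfidfVectorizer(token_pattern=r"[A-Za-z]+"),
                    TruncatedSVD(n_components=2, random_state=0))
vectors = lsa.fit_transform(events)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 0 1 1]: authentication failures vs. kernel link events
```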
