41

Changing Privacy Concerns in the Internet Era.

Demir, Irfan 08 1900
Privacy has been a respected value in every society throughout history, regardless of national borders, cultural differences, and time. This study focuses on the unprecedented changes in traditional forms of privacy, and the consequent concerns about invasion of privacy, that have accompanied the recent emergence and wide use of the Internet. Government intrusion into private domains through the Internet is examined as a major concern. Privacy invasions by Web marketers, hacker threats against privacy, and employer invasion of employee privacy in the workplace are then discussed in turn. A set of possible solutions to address the current problems and alleviate the concerns in this field is then offered. Legal remedies to be enacted by the government are presented as the initial solution. Encryption is then introduced as a strong technical method that may be helpful. Finally, a set of individual measures is emphasized as a complementary practical necessity. Nevertheless, this study indicates that technology will continue to change the forms of privacy and the concerns surrounding it, possibly outdating these findings in the near future; privacy itself, however, will remain a cherished social value, as it has always been.
42

Using Machine Learning to improve Internet Privacy

Zimmeck, Sebastian January 2017
Internet privacy lacks transparency, choice, quantifiability, and accountability, especially as the deployment of machine learning technologies becomes mainstream. However, these technologies can be both privacy-invasive and privacy-protective. This dissertation advances the thesis that machine learning can be used to improve Internet privacy. Starting with a case study that shows how to estimate a social network's potential to learn the ethnicity and gender of its users from geotags, various strands of machine learning technologies that further privacy are explored. While the quantification of privacy is the subject of well-known privacy metrics, such as k-anonymity and differential privacy, I discuss how some of those metrics can be leveraged in tandem with machine learning algorithms to quantify the privacy-invasiveness of data collection practices. Further, I demonstrate how the current notice-and-choice paradigm can be realized through automatic machine learning analysis of privacy policies. The implemented system notifies users efficiently and accurately of applicable data practices. Furthermore, by analyzing software data flows, users can compare actual data practices to those described, and regulators can enforce them at scale. The emerging cross-device tracking practices of ad networks, analytics companies, and others can likewise be addressed with machine learning technologies, notifying users of privacy practices across devices and giving them the choice to which they are entitled by law. Ultimately, cross-device tracking is a harbinger of the emerging Internet of Things, for which I envision intelligent personal assistants that help users navigate the increasing complexity of privacy notices and choices.
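The dissertation above refers to automatic machine learning analysis of privacy policies. The following minimal Python sketch is an illustration of that general idea only, not the system implemented in the dissertation; the example policy segments, the data-practice labels, and the choice of a TF-IDF plus logistic regression model are all assumptions made for demonstration.

```python
# Minimal sketch of machine-learning-based privacy policy analysis.
# NOT the dissertation's system: it only illustrates classifying policy
# text segments into data-practice categories. All examples are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: policy segments annotated with a data practice.
segments = [
    "We share your location data with advertising partners.",
    "You may opt out of marketing emails at any time.",
    "We collect your device identifiers to personalise content.",
    "Users can request deletion of their account data.",
]
labels = [
    "third_party_sharing",
    "user_choice",
    "first_party_collection",
    "user_choice",
]

# TF-IDF features feeding a simple linear classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(segments, labels)

# Classify an unseen policy segment.
print(classifier.predict(["We disclose usage data to analytics providers."]))
```

A real system of this kind would be trained on a large annotated corpus of policy segments and evaluated by per-category precision and recall rather than spot predictions.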
43

End-to-End Security of Information Flow in Web-based Applications

Singaravelu, Lenin 25 June 2007
Web-based applications and services are increasingly being used for security-sensitive tasks. Current security protocols rely on two crucial assumptions to protect the confidentiality and integrity of information: first, that the end-point software used to handle security-sensitive information is free from vulnerabilities; second, that communication between a client and a service provider is point-to-point. However, these assumptions do not hold for large, complex, and vulnerable end-point software such as the Internet browser or web services middleware, or in web service compositions, where multiple value-adding service providers may be interposed between a client and the original service provider. To address the problem of large and complex end-point software, we present the AppCore approach, which uses manual analysis of information flow, as opposed to purely automated approaches, to split existing software into two parts: a simplified trusted part that handles security-sensitive information, and a legacy, untrusted part that handles non-sensitive information without access to sensitive information. Not only does this approach avoid many common and well-known vulnerabilities in the legacy software that compromised sensitive information, it also greatly reduces the size and complexity of the trusted code, thereby making exhaustive testing or formal analysis more feasible. We demonstrate the feasibility of the AppCore approach by constructing AppCores for two real-world applications: a client-side AppCore for https-based applications and an AppCore for web service platforms. Our evaluation shows that security improvements and complexity reductions (over a factor of five) can be attained with minimal modifications to existing software (a few tens of lines of code, plus the proxy settings of a browser) and an acceptable performance overhead (a few percent). To protect the communication of sensitive information between clients and service providers in web service compositions, we present an end-to-end security framework called WS-FESec that provides end-to-end security properties even in the presence of misbehaving intermediate services. We show that WS-FESec is flexible enough to support the lattice model of secure information flow, and that it guarantees precise security properties for each component service at a modest cost of a few milliseconds per signature or encrypted field.
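The abstract notes that WS-FESec supports the lattice model of secure information flow. As background, the sketch below illustrates the core rule of that model, namely that information may only flow to labels at least as restrictive as its source; it is not the WS-FESec implementation, and the three-level linear lattice is an assumed example.

```python
# Illustrative sketch of a lattice-based information-flow check, in the spirit
# of the lattice model mentioned above. NOT the WS-FESec implementation; the
# simple linear confidentiality lattice is an assumption for illustration only.

# A linear lattice: public < internal < confidential.
LATTICE_ORDER = {"public": 0, "internal": 1, "confidential": 2}

def may_flow(source_label: str, destination_label: str) -> bool:
    """Allow a flow only if the destination label is at least as restrictive
    as the source label, i.e. flows may only go up the lattice."""
    return LATTICE_ORDER[source_label] <= LATTICE_ORDER[destination_label]

# A confidential field must not be passed to a party cleared only for public data.
assert may_flow("public", "confidential")
assert not may_flow("confidential", "public")
print("flow checks passed")
```

In a web service composition, such a check would be applied per field, so that a confidential field is never forwarded to an intermediate service whose label does not dominate it.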
44

Modeling and Simulations of Worms and Mitigation Techniques

Abdelhafez, Mohamed 14 November 2007
Internet worm attacks have become increasingly frequent and have had a major impact on the economy, making the detection and prevention of these attacks a top security concern. Several countermeasures have been proposed and evaluated in recent literature, but the effect of these proposed defensive mechanisms on legitimate competing traffic has not been analyzed. The first contribution of this thesis is a comparative analysis of the effectiveness of several of these proposed mechanisms, including a measure of their effect on normal web browsing activities. In addition, we introduce a new defensive approach that can easily be implemented on existing hosts and that significantly reduces the rate of spread of worms that use TCP connections to perform the infiltration, with no measurable effect on legitimate traffic. The second contribution is a variant of the flash worm that we term Compact Flash, or CFlash, which is capable of spreading even faster than its predecessor. We perform a comparative study between the flash worm and the CFlash worm using a full-detail packet-level simulator, and the results show the increase in propagation rate of the new worm given the same set of parameters. The third contribution is a study of the behavior of TCP-based worms in MANETs. We develop an analytical model for the spread of TCP worms in the MANET environment that accounts for payload size, bandwidth sharing, radio range, nodal density, and several other parameters specific to MANET topologies. We also present numerical solutions for the model and verify the results using packet-level simulations. The results show that the analytical model developed here matches the results of the packet-level simulation in most cases.
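The abstract refers to an analytical model of worm spread without reproducing its equations. As general background only, the sketch below numerically integrates the classic random-scanning epidemic model, dI/dt = beta * I * (N - I) / N, which MANET-specific models typically extend with parameters such as radio range and nodal density; it is not the model developed in the thesis, and all parameter values are illustrative assumptions.

```python
# Background sketch: the classic random-scanning worm epidemic model, which
# analytical worm-spread models (including MANET-specific ones) typically extend.
# NOT the thesis's MANET model; all parameters below are illustrative only.

def simulate_epidemic(total_hosts=100_000, initially_infected=10,
                      contact_rate=2.0, time_step=0.01, duration=20.0):
    """Euler integration of dI/dt = beta * I * (N - I) / N (logistic worm spread)."""
    infected = float(initially_infected)
    steps = int(duration / time_step)
    history = []
    for step in range(steps + 1):
        history.append((step * time_step, infected))
        growth = contact_rate * infected * (total_hosts - infected) / total_hosts
        infected = min(total_hosts, infected + growth * time_step)
    return history

# Print the infected population every 2 simulated time units.
for t, infected in simulate_epidemic()[::200]:
    print(f"t={t:5.1f}  infected={infected:10.1f}")
```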
45

A study of South African computer users' password usage habits and attitude towards password security

Friedman, Brandon January 2014
The challenge of having to create and remember a secure password for each user account has become a problem for many computer users and can lead to bad password-management practices. Simpler and less secure passwords are often selected and are regularly reused across multiple user accounts. Computer users within corporations and institutions are subject to password policies that require them to create passwords of a specified length and composition and to change them regularly; these policies often also prevent users from reusing previously selected passwords. Security vendors and professionals have sought to improve or even replace password authentication, developing technologies such as multi-factor authentication and single sign-on to complement or replace it. The objective of the study was to investigate the password habits of South African computer and Internet users. The aim was to assess their attitudes toward password security, to determine whether password policies affect the manner in which they manage their passwords, and to investigate their exposure to alternative authentication technologies. The results of the online survey showed that the password practices of the participants, across both their professional and personal contexts, were generally insecure. Participants often used shorter, simpler, and ultimately less secure passwords, and would try to memorise all of their passwords or reuse the same password on most of their accounts. Many participants had not received any security awareness training, and additional security technologies (such as multi-factor authentication or password managers) were seldom used or provided to them. The password policies encountered by the participants in their organisations did little to encourage more secure password practices. Participants lacked knowledge and understanding of password security, as they had received little or no training pertaining to it.
46

A model for security incident response in the South African National Research and Education network

Mooi, Roderick David January 2014
This dissertation addresses the lack of a formal incident response capability in the South African National Research and Education Network (SA NREN). While investigating alternatives, it was found that no clear method exists to solve this problem; a second problem is therefore identified: the lack of a definitive method for establishing a Computer Security Incident Response Team (CSIRT) or Computer Emergency Response Team (CERT) in general. Solving the second problem is important, as it provides a means of knowing how to start when building a CSIRT. This sets the basis for addressing the initial problem, resulting in a prepared, improved and coordinated response to IT security incidents affecting the SA NREN. To begin, the requirements for establishing a CSIRT are identified via a comprehensive literature review. These requirements are categorised into five areas: the basic business requirements, followed by the four Ps of the IT Infrastructure Library (ITIL), that is, People, Processes, Product and Partners, adapted to suit the CSIRT context. Through argumentation, the relationships between these areas are uncovered and explored. Thereafter, a Design Science Research-based process is used to develop a generic model for establishing a CSIRT. The model is based on the interactions uncovered between the business requirements and the adapted four Ps, summarised through two views, strategic and tactical, which together form a holistic model for establishing a CSIRT. The model highlights the decisions required for the business requirements, services, team model and staff, policies and processes, tools and technologies, and partners of a CSIRT respectively. Finally, to address the primary objective, the generic model is applied to the SA NREN environment. The second artefact is thus an instantiation, a specific model, which can be implemented to create a CSIRT for the SA NREN. Producing the specific model required insight into the nature of the SA NREN environment; the status quo was revealed through a survey and an argumentative analysis of its results. The specific decisions in each area required to establish an SA NREN CSIRT are explored throughout the development of the model. The result is a comprehensive framework for implementing a CSIRT in the SA NREN, detailing the decisions required in each of the areas; this model additionally demonstrates the utility of the generic model. The implications of this research are twofold. First, the generic model is useful as a basis for anyone wanting to establish a CSIRT: it helps to ensure that all factors are considered and that no important decisions are neglected, thereby enabling a holistic view. Second, the specific model for the SA NREN CSIRT serves as a foundation for implementing the CSIRT going forward, accelerating the process by addressing the important considerations and highlighting the concerns that must be addressed while establishing the CSIRT.
47

Digital forensic model for computer networks

Sanyamahwe, Tendai January 2011
The Internet has become important since information is now stored in digital form and is transported in large amounts, both within and between organisations, through computer networks. Nevertheless, there are individuals and groups who use the Internet to harm businesses because they can remain relatively anonymous. To prosecute such criminals, forensic practitioners have to follow a well-defined procedure so that the responsible cyber-criminals can be convicted in a court of law. Log files provide significant digital evidence in computer networks when tracing cyber-criminals. Network log mining is an evolution of typical digital forensics that uses evidence from network devices such as firewalls, switches and routers. Network log mining is a process supported by prevailing South African laws such as the Computer Evidence Act, 57 of 1983; the Electronic Communications and Transactions (ECT) Act, 25 of 2002; and the Electronic Communications Act, 36 of 2005. International laws and regulations supporting network log mining include the Sarbanes-Oxley Act and the Foreign Corrupt Practices Act (FCPA) of the USA, and the Bribery Act of the United Kingdom. A digital forensic model for computer networks focusing on network log mining has been developed based on the literature reviewed and critical thought, following the Design Science methodology. However, this research project argues that some important aspects are not fully addressed by the South African legislation supporting digital forensic investigations. With that in mind, this research project proposes a set of Forensic Investigation Precautions, developed as part of the proposed model. The Diffusion of Innovations (DOI) Theory is the framework underpinning the development of the model and its assimilation into the community. The model was sent to IT experts for validation, which provided the qualitative element and the primary data of this research project. From these experts, this study found that the proposed model is unique and comprehensive and adds new knowledge to the field of Information Technology. A paper was also produced from this research project.
48

The role of risk perception in Internet purchasing behaviour and intention

De Villiers, R. R. (Raoul Reenen) 12 1900
Thesis (MComm.)--Stellenbosch University, 2001.

ENGLISH ABSTRACT: In recent years the importance and number of users of electronic commerce and its medium, the Internet, have grown substantially. Despite this, the business-to-consumer sector has shown slow expansion and limited growth, with the majority of consumers slow to adopt the Internet as a medium for purchase. A probable factor affecting the purchasing behaviour of individuals is the perceived risk of a breach of (credit card) security and/or a violation of privacy. The research discussed here indicates that two closely related constructs, namely perceived privacy risk and perceived security risk, exert an influence on the Internet purchasing behaviour of Internet users and, more importantly, on the intention to purchase. In addition, the role of social pressures regarding the provision of personal and credit card information is shown to be of considerable importance.

AFRIKAANSE OPSOMMING (English translation): In recent years the importance and use of electronic commerce and the Internet have increased considerably. Despite this growth, the sector concerned with trade between businesses and consumers has shown limited growth. A probable reason for this trend in Internet purchasing behaviour is the perception that there is a risk of credit card misuse as well as violation of privacy. The study discussed here shows that two closely related constructs, namely perceived security risk and perceived privacy risk, play a role in determining Internet purchasing behaviour as well as the intention to purchase. Furthermore, the role of social pressure regarding the provision of personal and credit card information is highlighted as a factor of utmost importance.
49

Internet payment system: mechanism, applications & experimentation.

Ka-Lung Chong, January 2000
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. Includes bibliographical references (leaves 80-83). Abstracts in English and Chinese. No abstract text is included in this record; the table of contents is as follows.
Abstract (p.i); Acknowledgments (p.iii).
Chapter 1, Introduction & Motivation (p.1): Introduction; Internet Commerce; Motivation; Related Work (Cryptographic Techniques; Internet Payment Systems); Contribution; Outline of the Thesis.
Chapter 2, A New Payment Model (p.19): Model Description; Characteristics of Our Model; Model Architecture; Comparison; System Implementation (Acquirer, Issuer, Merchant, Payment Gateway and Payment Cancellation Interfaces).
Chapter 3, A E-Commerce Application - TravelNet (p.35): System Architecture; System Features; System Snapshots.
Chapter 4, Simulation (p.44): Objective; Simulation Flow; Assumptions; Simulation of Payment Systems.
Chapter 5, Discussion of Security Concerns (p.54): Threats to Internet Payment (Eavesdropping; Masquerading; Message Tampering; Replaying); Aspects of A Secure Internet Payment System (Authentication; Confidentiality; Integrity; Non-Repudiation); Our System Security; TravelNet Application Security.
Chapter 6, Discussion of Performance Evaluation (p.64): Performance Concerns; Experiments Conducted (Description; Analysis on the Results); Simulation Analysis.
Chapter 7, Conclusion & Future Work (p.72).
Appendix A, Experiment Specification (p.74): Configuration; Experiment Results.
Appendix B, Simulation Specification (p.77): Parameter Listing; Simulation Results.
Bibliography (p.80).
50

DNS traffic based classifiers for the automatic classification of botnet domains

Stalmans, Etienne Raymond January 2014
Networks of maliciously compromised computers, known as botnets, consisting of thousands of hosts, have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks and mount Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communication channels between the C2 nodes and endpoints employ numerous detection-avoidance mechanisms to prevent the shutdown of the C2 servers. Two detection-avoidance techniques prevalent in current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures that in turn can be used to detect DNS domains being used for C2 operation. This report details research into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers; traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers employs frequency analysis to detect the algorithmically generated domain names used by botnets; these were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers, and it is shown that, using these characteristics, Fast-Flux domains can be accurately identified and differentiated from legitimate domains (such as Content Distribution Networks, which exhibit similar behaviour). The final set of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve; it is shown that botnet C2 servers can be detected solely on the basis of their geographic location, and that this technique clearly distinguishes between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of implementing classification of DNS traffic is examined and shown to be at an acceptable level.
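The first set of classifiers described above relies on frequency analysis of domain names. As an illustration of that general idea only, and not the thesis's implementation, the sketch below scores the second-level label of each domain by character-level Shannon entropy and flags high-entropy names as potentially algorithmically generated; the threshold and sample domains are assumptions, and a practical classifier would compare character n-gram frequencies against a model of legitimate domains rather than rely on entropy alone.

```python
# Minimal illustration of frequency analysis for detecting algorithmically
# generated domain names. NOT the thesis's classifiers; the threshold and the
# tiny sample of domains are assumptions for demonstration only.
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Character-level Shannon entropy of a domain label (higher = more random-looking)."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose second-level label has unusually high character entropy."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

for d in ["google.com", "wikipedia.org", "xj4k9q2zvb7w1t.com", "qpowiehrlzkcnv.net"]:
    score = shannon_entropy(d.split(".")[0])
    print(f"{d:25s} entropy={score:.2f} flagged={looks_generated(d)}")
```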
