371

Challenges, collaborative interactions, and diagnosis performed by IT security practitioners : an empirical study

Werlinger, Rodrigo
This thesis investigates four different aspects of information security management: challenges faced by security practitioners, interactive collaborations among security practitioners and other stakeholders, diagnostic work performed by security practitioners during incident response, and factors that impact the adoption of an intrusion detection system in one organization. Our approach is based on qualitative analyses of empirical data from semi-structured interviews and participatory observation. For each theme under study, the contributions of the qualitative analysis are twofold. First, we provide a richer understanding of the main factors that affect security within organizations. Second, equipped with this richer understanding, we provide recommendations on how to improve security tools, along with opportunities for future research. Our findings contribute to the understanding of the human, organizational, and technological factors that affect security in organizations and the effectiveness of security tools. Our work also highlights the need to further refine our understanding of how these factors interplay by obtaining richer data (e.g., contextual inquiry), and the need to generalize and validate these findings through other sources of information (e.g., surveys).
372

Securing Distributed Context Exchange Networks in Mobile Environments

Kafle, Sijan January 2013
The use of the internet has exploded, with many more context-aware applications providing different services and simplifying access for human needs. An increasing number of devices, from computers to sensors and actuators, are now connected to the internet. MediaSense [1] is a growing framework that provides a platform for applications to connect these devices (smart-phones, sensors, actuators, etc.) and provide services over the internet. Applications based on the MediaSense framework access globally available sensors to provide contextual information about a situation and thereby offer better services. However, information flowing between the devices or sensors is exposed to the internet without any security. Thus, the focus of this thesis is the development of a security mechanism for the MediaSense framework, which is a fully distributed network. The task involves analyzing security measures with and without a centralized authority, weighing the advantages and disadvantages of both scenarios for the MediaSense framework, and proposing appropriate solutions to achieve the maximum possible security for the framework. The first challenge of this thesis is to identify the properties required for a security mechanism capable of secure key distribution and secure peer-to-peer communication among MediaSense instances without any centralized authority. The thesis therefore proposes a resilient solution, namely a security architecture for MediaSense that can operate in a distributed environment, performs key distribution and management, and secures communication using different encryption techniques. The next challenge for the security architecture is to store the keys securely and prevent any unauthorized access by a third party. The thesis proposes the use of the built-in Java Application Programming Interface (API) "KeyStore" to store valuable keys locally. By addressing these challenges and related issues, this thesis forms a security architecture that is adapted to a distributed system and combines encryption algorithms, key distribution, and secure storage of keys. To support these proposals, a proof-of-concept application and prototypes were developed to verify the approach. In addition, the security features of the architecture were implemented as an extension to the MediaSense framework. In conclusion, the proposed security architecture for MediaSense provides the necessary security, together with the required key distribution mechanism, to the framework without any centralized authority.
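As a minimal sketch of the kind of local key storage the abstract refers to (not the thesis's actual implementation), the following Java snippet generates an AES key and saves it in a password-protected KeyStore, then reloads it; the store type, alias, file name, and password below are hypothetical.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.security.KeyStore;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class LocalKeyStoreDemo {
        public static void main(String[] args) throws Exception {
            char[] storePassword = "change-me".toCharArray();  // hypothetical password

            // Generate a symmetric key to protect (e.g., for encrypting peer traffic).
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();

            // Create an empty keystore of type JCEKS, which can hold secret keys.
            KeyStore ks = KeyStore.getInstance("JCEKS");
            ks.load(null, storePassword);

            // Store the key under an alias, protected by a password.
            ks.setEntry("peer-session-key",
                    new KeyStore.SecretKeyEntry(key),
                    new KeyStore.PasswordProtection(storePassword));

            // Persist the keystore to local disk.
            try (FileOutputStream out = new FileOutputStream("mediasense.keystore")) {
                ks.store(out, storePassword);
            }

            // Later: reload the keystore and retrieve the key.
            KeyStore reloaded = KeyStore.getInstance("JCEKS");
            try (FileInputStream in = new FileInputStream("mediasense.keystore")) {
                reloaded.load(in, storePassword);
            }
            SecretKey recovered = (SecretKey) reloaded.getKey("peer-session-key", storePassword);
            System.out.println("Recovered key algorithm: " + recovered.getAlgorithm());
        }
    }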
373

A defense framework for flooding-based DDoS attacks

You, Yonghua 29 August 2007
Distributed denial of service (DDoS) attacks are widely regarded as a major threat to the Internet. A flooding-based DDoS attack is a very common way to attack a victim machine by sending a large volume of malicious traffic. In this thesis, we propose a distance-based distributed DDoS defense framework which defends against attacks by coordinating the distance-based DDoS defense systems at the source ends and the victim end. The proposed defense system has three major components: detection, traceback, and response. In the detection component, two distance-based detection techniques are employed. First, a distance-based technique is used to detect attacks based on a distance statistical model. Second, a statistical traffic rate forecasting technique is applied to identify attack traffic within traffic flows that are separated according to their distance from the victim-end network. For the traceback component, the existing Fast Internet Traceback (FIT) technique is employed to find the remote edge routers which forward attack traffic to the victim. In the response component, a distance-based rate limit mechanism quickly lowers attack traffic by setting up rate limits on these routers. We evaluate the distance-based DDoS defense system on the NS2 network simulation platform. The results demonstrate that both detection techniques are capable of detecting flooding-based DDoS attacks, and that the defense system can effectively control attack traffic to sustain quality of service for legitimate users. Moreover, the system shows better performance in defeating flooding-based DDoS attacks than the pushback technique, which uses a local aggregate congestion control mechanism. / Thesis (Master, Computing) -- Queen's University, 2007-08-22 23:01:20.581
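As a loose illustration of the response component's idea (not the thesis's NS2 implementation), the sketch below caps forwarded traffic per distance class once its observed rate exceeds a forecast; the hop-distance keys, forecast values, and default limit are hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: cap forwarded bytes per "distance class" (e.g., hop count
    // to the victim network) once a class exceeds its forecast rate for the interval.
    public class DistanceRateLimiter {
        private final Map<Integer, Long> forecastBytesPerSec = new HashMap<>(); // per-distance forecast
        private final Map<Integer, Long> observedBytes = new HashMap<>();       // bytes seen this interval

        public DistanceRateLimiter() {
            // Hypothetical forecasts learned from normal traffic, keyed by hop distance.
            forecastBytesPerSec.put(2, 500_000L);
            forecastBytesPerSec.put(3, 300_000L);
            forecastBytesPerSec.put(4, 100_000L);
        }

        // Returns true if a packet arriving from the given distance should be forwarded.
        public boolean admit(int distance, int packetBytes) {
            long seen = observedBytes.getOrDefault(distance, 0L) + packetBytes;
            observedBytes.put(distance, seen);
            long limit = forecastBytesPerSec.getOrDefault(distance, 50_000L);
            return seen <= limit; // drop traffic above the per-distance limit
        }

        // Called once per second to start a new measurement interval.
        public void resetInterval() {
            observedBytes.clear();
        }

        public static void main(String[] args) {
            DistanceRateLimiter limiter = new DistanceRateLimiter();
            // Simulate a burst of 1 KB packets arriving from hop distance 4.
            for (int i = 0; i < 200; i++) {
                if (!limiter.admit(4, 1024)) {
                    System.out.println("Packet " + i + " dropped (distance 4 over its limit)");
                    break;
                }
            }
        }
    }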
374

An Empirical Study of a Language-based Security Testing Technique

Aboelfotoh, Muhammad 27 September 2008
Application-layer protocols have become so sophisticated that they are languages in their own right. Security testing of network applications is indisputably an essential task that must be carried out prior to the release of software to the market. Since factors such as time-to-market constraints limit the scope or depth of the testing performed, it is difficult to carry out exhaustive testing prior to release. As a consequence, flaws may be left undiscovered by the software vendor and later found by those with malicious intent. We report the results of an empirical study of testing the Distributed Relational Database Architecture (DRDA®) protocol as implemented by the IBM® DB2® Database for Linux®, Unix®, and Windows® product, using a security testing approach, and a framework implementing that approach, that emerged from the joint work of the Royal Military College of Canada and Queen's University of Kingston. The previous version of the framework was used to test the implementations of several other network protocols. Compared to DRDA, these protocols are relatively simple, as they have far fewer structure types, messages, and rules. Our study of the DRDA protocol uncovered several omissions in the framework, which were addressed as part of this work. In addition, the framework was automated, a preliminary automated test planner was created, and a primitive language was devised for describing custom-made test plans. Testing revealed two faults in the DB2 server, one of which was unknown to the vendor prior to the testing carried out as part of this thesis work. / Thesis (Master, Computing) -- Queen's University, 2008-09-26 16:31:32.565
375

Protecting Browser Extensions from JavaScript Injection Attacks with Runtime Protection and Static Analysis

Barua, Anton 01 October 2012
With the rapid proliferation of the internet, web browsers have evolved from single-purpose remote document viewers into multifaceted systems for executing dynamic, interactive web applications. In order to enhance the web browsing experience of users and to facilitate on-demand customizability, most web browsers can now be fitted with extensions: pieces of software that utilize the underlying web platform of a browser and provide a wide range of features such as advertisement blocking, safety ratings of websites, in-browser web development, and many more. Extensible web browsers provide access to their powerful privileged components in order to facilitate the development of feature-rich extensions. This exposure comes at a price, though, as a vulnerable extension can introduce a security hole through which an attacker can access the privileged components, penetrate a victim user's browser, steal the user's sensitive information, and even execute arbitrary code on the user's computer. The current browser security model is inadequate for preventing attacks via such vulnerable extensions. Therefore, an effective protection mechanism is required that provides web browsers with adequate security while still allowing them to be extended. In this thesis, we propose a runtime protection mechanism for JavaScript-based browser extensions. Our protection mechanism performs offline randomization of an extension's source code and augments the corresponding browser with appropriate modifications. Protection from malicious injection attacks is enforced at runtime by distinguishing attack code from the randomized extension code. Furthermore, for maximum backward compatibility with existing extensions, we propose a complementary static points-to analysis technique that can be invoked on demand to assess the security of dynamic code generation functions present in the source code of extensions. Our combined approach of runtime protection and static analysis is independent of the existing extension platforms, obviating the need to radically change the platforms or to require developers to rewrite their extensions. We implement our protection mechanism in the popular Mozilla Firefox browser and evaluate our approach on a set of vulnerable and non-vulnerable Mozilla Firefox extensions. The evaluation results indicate that our approach can be a viable solution for preventing attacks on JavaScript-based browser extensions while incurring negligible performance overhead and maintaining backward compatibility with existing extensions. / Thesis (Master, Computing) -- Queen's University, 2012-09-27 23:41:46.455
376

Spaces and geographies of the "Smart Border": technologies and discourses of Canada's post-9/11 borders

Gordon, Aaron Andrew. January 2006
This study investigates Canada's border security policy, practices, and technologies, and the discourses in which they function, to better understand the U.S.-Canadian "Smart Border" and the post-9/11 geographies of the nation-state. With the erasure of economic and military borders and the erection of new security-oriented police borders, Canada's "Smart Border" is no longer at the edges of territory but is a series of spaces reproduced in and outside of Canada through technologies such as the passport, immigration and anti-terrorism legislation, security agencies, monuments, and maps. The "Smart Border" perpetuates colonial distinctions and projects as a site of tension between the national construction of Canadian identities, policing technologies, and the enforcement of a global apartheid that restricts access to political and economic resources by enforcing a regime of differential access to mobility. As a site of resistance, the "Smart Border" is also a space from which to displace colonial-national genealogies.
377

An In-memory Database for Prototyping Anomaly Detection Algorithms at Gigabit Speeds

Friesen, Travis 11 September 2013
The growing speeds of computer networks are pushing anomaly detection algorithms and related systems to their limits. This thesis discusses the design of the Object Database (ODB), an analysis framework for evaluating anomaly detection algorithms in real time at gigabit or better speeds. The document also discusses the construction of a new dataset with known anomalies for verification purposes. Lastly, to demonstrate the efficacy of the system, an existing algorithm was implemented on the evaluation system; this showed that while the system is suitable for evaluating anomaly detection algorithms, this particular algorithm was deemed not appropriate for use at the packet-data level.
378

Multi-agent malicious behaviour detection

Wegner, Ryan 24 October 2012
This research presents a novel technique termed Multi-Agent Malicious Behaviour Detection. The goal of Multi-Agent Malicious Behaviour Detection is to provide infrastructure to allow for the detection and observation of malicious multi-agent systems in computer network environments. This research explores combinations of machine learning techniques and fuses them with a multi-agent approach to malicious behaviour detection that effectively blends human expertise from network defenders with modern artificial intelligence. Success of the approach depends on the Multi-Agent Malicious Behaviour Detection system's capability to adapt to evolving malicious multi-agent system communications, even as the malicious software agents in network environments vary in their degree of autonomy and intelligence. This thesis research involves the design of this framework, its implementation into a working tool, and its evaluation using network data generated by an enterprise class network appliance to simulate both a standard educational network and an educational network containing malware traffic.
379

Non-monotonic trust management for distributed systems

Dong, Changyu January 2009
No description available.
380

Creating Usable Policies for Stronger Passwords with MTurk

Shay, Richard 01 February 2015
People are living increasingly large swaths of their lives through their online accounts. These accounts are brimming with sensitive data, and they are often protected only by a text password. Attackers can break into service providers and steal the hashed password files that store users' passwords. This lets attackers make a large number of guesses to crack users' passwords. The stronger a password is, the more difficult it is for an attacker to guess. Many service providers have implemented password-composition policies. These policies constrain or restrict passwords in order to prevent users from creating easily guessed passwords. Too lenient a policy may permit easily cracked passwords, and too strict a policy may encumber users. The ideal password-composition policy balances security and usability. Prior to the work in this thesis, many password-composition policies were based on heuristics and speculation, rather than scientific analysis. Password research often examined passwords constructed under a single uniform policy, or constructed under unknown policies. In this thesis, I contrast the strength and usability of passwords created under different policies. I do this through online, crowdsourced human-subjects studies with randomized, controlled password-composition policies. The result is a scientific comparison of how different password-composition policies affect both password strength and usability. I studied a range of policies, including those similar to policies found in the wild, policies that trade usability for security by requiring longer passwords, and policies in which passwords are system-assigned with known security. One contribution of this thesis is a tested methodology for collecting passwords under different policies. Another contribution is the comparison between password policies. I find that some password-composition policies make more favorable tradeoffs between security and usability, allowing evidence-based recommendations for service providers. I also offer insights for researchers interested in conducting larger-scale online studies, having collected data from tens of thousands of participants.
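As an illustrative sketch only (the thesis studied policies through controlled user studies, not this code), a password-composition policy can be expressed as a set of programmatic checks; the minimum length and character-class rules below are hypothetical examples of the kinds of constraints such policies impose.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical example of a password-composition policy checker.
    public class PasswordPolicy {
        private final int minLength;
        private final boolean requireDigit;
        private final boolean requireUppercase;
        private final boolean requireSymbol;

        public PasswordPolicy(int minLength, boolean requireDigit,
                              boolean requireUppercase, boolean requireSymbol) {
            this.minLength = minLength;
            this.requireDigit = requireDigit;
            this.requireUppercase = requireUppercase;
            this.requireSymbol = requireSymbol;
        }

        // Returns the list of violated rules; an empty list means the password complies.
        public List<String> check(String password) {
            List<String> violations = new ArrayList<>();
            if (password.length() < minLength)
                violations.add("must be at least " + minLength + " characters");
            if (requireDigit && password.chars().noneMatch(Character::isDigit))
                violations.add("must contain a digit");
            if (requireUppercase && password.chars().noneMatch(Character::isUpperCase))
                violations.add("must contain an uppercase letter");
            if (requireSymbol && password.chars().allMatch(Character::isLetterOrDigit))
                violations.add("must contain a symbol");
            return violations;
        }

        public static void main(String[] args) {
            // A stricter policy trades some usability for security (longer, more character classes).
            PasswordPolicy strict = new PasswordPolicy(16, true, true, true);
            PasswordPolicy lenient = new PasswordPolicy(8, false, false, false);
            System.out.println(strict.check("correcthorse"));   // several violations
            System.out.println(lenient.check("correcthorse"));  // compliant
        }
    }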
