1

Building data-centric security mechanisms for web applications

Mundada, Yogesh 27 May 2016 (has links)
Data loss from web applications, at different points of compromise, has become a major liability in recent years. Existing security guidelines, policies, and tools often fail, for reasons ranging from blatant disregard of common practice to subtle exploits originating from complex interactions between components. Current security mechanisms focus on “how to stop illicit data transfer” (i.e., the “syntax”), and many tools achieve that goal in principle. Yet, the practice of securing data additionally depends on allowing administrators to clearly specify “what data should be secured” (i.e., the “semantics”). Currently, translation from “security semantics” to “security syntax” is manual, time-consuming, and ad hoc. Even a slight oversight in the translation process could render the entire system insecure. Security semantics frequently need modification due to changes in various external factors such as policy changes, user reclassification, and even code refactoring. This dissertation hypothesizes that adaptation to such changes would be faster and less error-prone if the tools also focused on automating the translation from semantics to syntax, in addition to simply executing the syntax. With this approach, we build the following low-maintenance security tools that prevent unauthorized sensitive data transfer at various vantage points in the World Wide Web ecosystem. We show how each tool can take advantage of inherent properties of the sensitive information, making the translation process automatic and faster:
● Appu, a tool that automatically finds personal information (semantics) spread across web services, and suggests actions (syntax) to minimize data loss risks.
● Newton, a tool that formalizes the access control model of web cookies. Using this formal approach, it improves the security of existing session management techniques by detecting (semantics) and protecting (syntax) privileged cookies without requiring input from the site administrator.
● SilverLine, a system for cloud-based web services that automatically derives data exfiltration rules (syntax) from information about sensitive database tables and inter-table relationships (semantics), then executes these rules using an information flow control mechanism.
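A minimal sketch of the cookie-auditing idea behind Newton-style privileged-cookie detection (the URL, cookie names, and logged-in marker below are hypothetical; the dissertation's actual implementation is not specified here): replay an authenticated request while withholding one cookie at a time, and mark a cookie as privileged if the authenticated session breaks without it.

```python
import requests

def find_privileged_cookies(url: str, cookies: dict, logged_in_marker: str) -> list:
    """Replay an authenticated request, dropping one cookie at a time.

    A cookie is treated as privileged (session-critical) if removing it
    makes the logged-in marker disappear from the response.
    """
    privileged = []
    for name in cookies:
        trimmed = {k: v for k, v in cookies.items() if k != name}
        resp = requests.get(url, cookies=trimmed, timeout=10)
        if logged_in_marker not in resp.text:
            privileged.append(name)  # session broke without this cookie
    return privileged

# Hypothetical usage:
# priv = find_privileged_cookies("https://example.com/account",
#                                {"sessionid": "...", "theme": "dark"},
#                                logged_in_marker="Sign out")
```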
2

Analysis of Evasion Techniques in Web-based Malware

Lu, Gen January 2013 (has links)
Web-based mechanisms, often mediated by malicious JavaScript code, play an important role in malware delivery today, making defenses against web-based malware crucial for system security. To make the problem even more challenging, malware authors often take advantage of various evasion techniques to evade detection. As a result, a constant arms race between malware authors and security analysts has driven advances in code obfuscation and anti-analysis techniques. This dissertation focuses on defenses against web-based malware protected by advanced evasion techniques, from both defensive and offensive perspectives. From a defensive perspective, we examine existing evasion techniques and propose deobfuscation and detection approaches to defeat some popular techniques used by web-based malware today. In the case of code-unfolding based obfuscation, we use a semantics-based approach to simplify away obfuscations by identifying code that is relevant to the behavior of the original program. In the case of environment-dependent malware, we propose the environmental predicate, which detects behavioral discrepancies of a JavaScript program between the targeted browser and the detector sandbox, thereby protecting users from detection false negatives caused by environmental triggers. From an offensive perspective, we analyze existing detection techniques to examine their assumptions and study how these assumptions can be broken. We also propose a combination of obfuscation and anti-analysis techniques, targeting these limitations, which can hide existing web-based malware from state-of-the-art detectors.
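One way to read the environmental-predicate idea is as a discrepancy check between two execution profiles of the same script. The sketch below is my illustration under that reading, not the dissertation's implementation: it flags a sample whose observed behavior in a real target browser diverges from its behavior in the detector sandbox.

```python
def behavior_discrepancy(trace_browser: set, trace_sandbox: set) -> set:
    """Return behaviors seen in the targeted browser but not in the sandbox.

    Each trace is a set of observed actions, e.g. ("eval", <payload hash>)
    or ("write", "<iframe src=...>"). A non-empty result suggests an
    environmental trigger hid behavior from the detector.
    """
    return trace_browser - trace_sandbox

# Hypothetical example: malware that only unpacks in a specific browser
sandbox_trace = {("check", "navigator.userAgent")}
browser_trace = {("check", "navigator.userAgent"),
                 ("eval", "unpacked-payload-hash"),
                 ("write", "<iframe src='http://evil.example/'>")}

hidden = behavior_discrepancy(browser_trace, sandbox_trace)
if hidden:
    print("possible environment-dependent malware:", hidden)
```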
3

An information security perspective on XML web services.

Chetty, Jacqueline 29 May 2008 (has links)
The Internet has come a long way from its humble beginnings of being used as a simple way of transporting data within the US army and other academic organizations. With the exploding growth of the Internet and the World Wide Web (WWW), more and more people and companies are not only providing services via the WWW but are also conducting business transactions. In today’s Web-based environment, where individuals and organizations are conducting business online, it is imperative that the technologies being utilized are secure in every way. It is important that any individual or organization that wants to protect their data adhere to the five (5) basic security services. These security services are Identification and Authentication, Authorization, Confidentiality, Integrity, and Non-repudiation. This study looks at two Web-based technologies, namely XML and XML Web services, and evaluates whether or not the five security services form part of the security surrounding these Web-based technologies. Part 1 is divided into three chapters. Chapter 1 is an introduction and roadmap to the dissertation. Chapter 2 provides an overview of XML. The reader must not view this chapter as a technical chapter; it simply provides an understanding of XML so that the reader is able to follow the chapter on XML security. Chapter 3 provides an overview of Web services. Again, this chapter must be seen as an overview providing the reader with a broad picture of what Web services are. A lot of technical background and know-how has not been included in these two chapters. Part 2 is divided into a further three chapters. Chapter 4, titled Computer Security, provides the reader with a basic understanding of security in general. The five security services are introduced in more detail, and the important mechanisms and aspects surrounding security are explained. Chapter 5 looks at how XML and Web services are integrated. This is a short chapter with diagrams that illustrate how closely XML and Web services are interwoven. Chapter 6 is the most important chapter of the dissertation. This chapter, titled XML and Web services security, provides the reader with an understanding of the various XML mechanisms that form part of the Web services environment, thus providing security in the form of the five security services. Each XML mechanism is discussed, and each security service is discussed in relation to these mechanisms, all within the context of the Web services environment. The chapter concludes with a table that summarizes each security service along with its corresponding XML mechanism. Part 3 includes one chapter. Chapter 7, titled Mapping XML and Web services against the 5 security services, makes use of the information from the previous chapter and provides a summary in the form of a table. This table identifies each security service and looks at the mechanisms that provide that service within a Web services environment. Part 4 provides a conclusion to the dissertation. Chapter 8, titled Conclusion, provides a summary of each preceding chapter, draws a conclusion, and answers the question of whether or not the five information security services are integrated into XML and Web services. / von Solms, S.H., Prof.
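As a rough preview of the kind of service-to-mechanism mapping the dissertation builds in Chapter 7, the sketch below pairs each of the five security services with commonly cited XML/Web-services mechanisms (these are standard pairings from the WS-Security family; the dissertation's exact table may differ):

```python
# Commonly cited XML mechanisms per security service (illustrative, not
# necessarily the dissertation's exact mapping).
SECURITY_SERVICE_MECHANISMS = {
    "Identification and Authentication": ["SAML assertions", "WS-Security UsernameToken"],
    "Authorization":                     ["XACML policies"],
    "Confidentiality":                   ["XML Encryption", "TLS/HTTPS transport"],
    "Integrity":                         ["XML Signature"],
    "Non-repudiation":                   ["XML Signature over the full message"],
}

for service, mechanisms in SECURITY_SERVICE_MECHANISMS.items():
    print(f"{service}: {', '.join(mechanisms)}")
```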
4

Learning-based Cyber Security Analysis and Binary Customization for Security

Tian, Ke 13 September 2018 (has links)
This thesis presents machine-learning based malware detection and post-detection rewriting techniques for mobile and web security problems. In mobile malware detection, we focus on detecting repackaged mobile malware. We design and demonstrate an Android repackaged malware detection technique based on code heterogeneity analysis. In post-detection rewriting, we aim at enhancing app security with bytecode rewriting. We describe how flow- and sink-based risk prioritization improves rewriting scalability. We build an interface prototype with natural language processing, in order to customize apps according to natural language inputs. In web malware detection for iframe injection, we present a tag-level detection system that aims to detect the injection of malicious iframes for both online and offline cases. Our system detects malicious iframes by combining selective multi-execution and machine learning algorithms. We design multiple contextual features covering iframe style, destination, and context properties. / Ph. D. / Our computing systems are vulnerable to different kinds of attacks. Cyber security analysis has been a problem ever since the appearance of telecommunication and electronic computers. In recent years, researchers have developed various tools to protect the confidentiality, integrity, and availability of data and programs. However, new challenges are emerging in mobile security and web security. Mobile malware is on the rise and threatens both data and system integrity on Android. Furthermore, web-based iframe attacks are extensively used by web hackers to distribute malicious content after compromising vulnerable sites. This thesis presents malware detection and post-detection rewriting techniques for both mobile and web security. In mobile malware detection, we focus on detecting repackaged mobile malware. We propose a new Android repackaged malware detection technique based on code heterogeneity analysis. In post-detection rewriting, we aim at enhancing app security with bytecode rewriting, based on flow- and sink-based risk prioritization. To increase the feasibility of rewriting, our work showcases a new application of app customization with a friendlier user interface. In web malware detection for iframe injection, we developed a tag-level detection system that aims to detect the injection of malicious iframes for both online and offline cases. Our system detects malicious iframes by combining selective multi-execution and machine learning. We design multiple contextual features covering iframe style, destination, and context properties.
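A hedged sketch of the kind of contextual iframe features the abstract mentions (the feature names and thresholds are my guesses, not the thesis's actual feature set): style-based hiding, destination/host mismatch, and size context are classic signals for injected iframes.

```python
from urllib.parse import urlparse

def iframe_features(attrs: dict, page_host: str) -> dict:
    """Extract simple style/destination/context features from one <iframe>.

    attrs: parsed tag attributes, e.g. {"src": "...", "width": "0", "style": "..."}
    """
    src_host = urlparse(attrs.get("src", "")).hostname or ""
    style = attrs.get("style", "").replace(" ", "").lower()
    return {
        "tiny_or_zero_size": attrs.get("width") in ("0", "1")
                             or attrs.get("height") in ("0", "1"),
        "hidden_style": "display:none" in style or "visibility:hidden" in style,
        "offscreen": "left:-" in style or "top:-" in style,
        "cross_domain": bool(src_host) and src_host != page_host,
    }

# Example: a classic hidden injected iframe (hypothetical hosts)
print(iframe_features({"src": "http://evil.example/x", "width": "0",
                       "height": "0", "style": "visibility:hidden"},
                      page_host="victim.example"))
```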
5

Measuring and Understanding TTL Violations in DNS Resolvers

Bhowmick, Protick 02 January 2024 (has links)
The Domain Name System (DNS) is a scalable, distributed caching architecture in which DNS records are cached at many DNS servers distributed globally. Each DNS record includes a time-to-live (TTL) value that dictates how long the record can be stored before it is evicted from the cache. TTL holds significant importance for DNS security, such as determining the caching period for DNSSEC-signed responses, as well as for performance, like the responsiveness of CDN-managed domains. At a high level, TTL is crucial for ensuring efficient caching, load distribution, and network security in the Domain Name System, and setting appropriate TTL values is a key aspect of DNS administration. It is therefore important to measure how TTL violations occur in resolvers. However, assessing how DNS resolvers worldwide handle TTL is not easy and typically requires access to multiple nodes distributed globally. In this work, we introduce a novel methodology for measuring TTL violations in DNS resolvers leveraging a residential proxy service called Brightdata, enabling us to evaluate more than 27,000 resolvers across 9,500 Autonomous Systems (ASes). Among the 8,524 resolvers that had at least five distinct exit nodes, we found that 8.74% arbitrarily extend TTLs. Additionally, we find that the DNSSEC standard is being disregarded by 44.1% of DNSSEC-validating resolvers, as they continue to provide DNSSEC-signed responses even after the RRSIGs have expired. / Master of Science / The Domain Name System (DNS) works as a global phonebook for the internet, helping your computer find websites by translating human-readable names into numerical IP addresses. This system uses a caching system spread across various servers worldwide to store DNS records. Each record comes with a time-to-live (TTL) value, essentially a timer that decides how long the information should stay in the cache before being replaced. TTL is crucial for both security and performance in the DNS world: it plays a role in securing responses and determines the responsiveness of the load-balancing schemes employed by Content Delivery Networks (CDNs). In simple terms, TTL ensures efficient caching, even network load, and overall security in the Domain Name System. For DNS to work smoothly, it is important to set the right TTL values and for resolvers to strictly honor them. However, figuring out how well DNS servers follow these rules globally is challenging. In this study, we introduce a new way to measure TTL violations in DNS servers using a proxy service called Brightdata. This allows us to check over 27,000 servers across 9,500 networks. Our findings reveal that 8.74% of these servers extend TTLs arbitrarily. Additionally, we discovered that 44.1% of the servers that should be following a security standard (DNSSEC) are not doing so properly, providing signed responses even after they are supposed to expire. This research sheds light on how DNS servers around the world extend TTLs and the potential performance and security risks involved.
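A minimal sketch of one TTL-violation probe using the dnspython library (the resolver address and record name are placeholders, and the thesis's Brightdata-based methodology is considerably more involved): query a cached record twice through the same resolver and check whether the reported TTL decays by roughly the elapsed time.

```python
import time
import dns.resolver  # pip install dnspython

def observe_ttl(resolver_ip: str, qname: str, wait_s: int = 30) -> None:
    """Query qname twice through one resolver and report the cached TTLs.

    If the second TTL has not decreased by roughly wait_s, or exceeds the
    first, the resolver is rewriting or arbitrarily extending TTLs.
    """
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [resolver_ip]
    ttl1 = r.resolve(qname, "A").rrset.ttl
    time.sleep(wait_s)
    ttl2 = r.resolve(qname, "A").rrset.ttl
    print(f"{resolver_ip}: ttl1={ttl1} ttl2={ttl2} "
          f"decay={ttl1 - ttl2} (expected ~{wait_s})")

# Example with placeholder values:
# observe_ttl("8.8.8.8", "example.com")
```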
6

Reasons for lacking web security : An investigation into the knowledge of web developers

Sundqvist, Jonathan January 2018 (has links)
Context: With the constantly increasing activity on the internet and its enormous growth over the last 18 years, it has become increasingly important to investigate common problems in web security. Objectives: This thesis is made up of a literature study and a survey. It investigates what the common problems in web security are, what the average web developer knows, what they think about the state of web security, and what they would change. Method: A survey was developed to gather information about respondents' education levels and previous experience with web security and security breaches, as well as their opinions about web security and what they would change. Results: Based on the literature study and the survey, the thesis establishes what the common problems in web security are, as well as what the average web developer knows, thinks about web security, and wants to change. Conclusions: The state of web security in 2018 is not at the level one might expect; several common problems arise from a lack of knowledge, and the consensus among respondents is the same: the state of web security is sub-par and not to their general satisfaction.
7

Why Johnny Still Can’t Pentest: A Comparative Analysis of Open-source Black-box Web Vulnerability Scanners

Khalil, Rana Fouad 19 December 2018 (has links)
Black-box web application vulnerability scanners are automated tools that crawl a web application to look for vulnerabilities. These tools are often used in one of two ways. In the first approach, scanners are used as point-and-shoot tools: a scanner is given only the root URL of an application and asked to scan the site. In the second approach, scanners are first configured to maximize crawling coverage and vulnerability detection accuracy. Although the performance of leading commercial scanners has been thoroughly studied, very little research has been done to evaluate open-source scanners. This paper presents a feature and performance evaluation of five open-source scanners. We analyze crawling coverage, vulnerability detection accuracy, scanning speed, reporting, and usability features. The scanners are tested against two well-known benchmarks, WIVET and WAVSEP, as well as against a realistic web application called WackoPicko. The chosen benchmarks comprise a wide range of vulnerabilities and crawling challenges. Each scanner is tested in two modes: default and configured. Lastly, the scanners are compared with the state-of-the-art commercial scanner Burp Suite Professional. Our results show that being able to properly crawl a web application is a critical task in detecting vulnerabilities. Unfortunately, the majority of the scanners evaluated had difficulty crawling through common web technologies such as dynamically generated JavaScript content and Flash applications. We also identified several classes of vulnerabilities that are not being detected by the scanners. Furthermore, our results show that scanners displayed considerable improvement when run in configured mode.
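As a rough illustration of how detection accuracy is typically scored against a ground-truth benchmark such as WAVSEP (the field names and example URLs below are my own, not the paper's):

```python
def score_scanner(reported: set, true_vulns: set, safe_pages: set) -> dict:
    """Score one scanner run against benchmark ground truth.

    reported: URLs the scanner flagged; true_vulns: known-vulnerable URLs;
    safe_pages: known-safe URLs planted to catch over-reporting.
    """
    tp = len(reported & true_vulns)
    fp = len(reported & safe_pages)
    fn = len(true_vulns - reported)
    return {
        "detection_rate": tp / len(true_vulns) if true_vulns else 0.0,
        "false_positives": fp,
        "missed": fn,
    }

# Hypothetical run: one true finding missed, one safe page flagged
print(score_scanner({"/sqli/1", "/xss/2", "/safe/3"},
                    true_vulns={"/sqli/1", "/xss/2", "/xss/9"},
                    safe_pages={"/safe/3"}))
```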
8

Scalable Content Delivery Without a Middleman

Xu, Junbo 30 August 2017 (has links)
No description available.
9

Secure web applications against off-line password guessing attack : a two way password protocol with challenge response using arbitrary images

Lu, Zebin 14 August 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Web applications are now being used in many security-oriented areas, including online shopping and e-commerce, which require users to transmit sensitive information over the Internet. Therefore, successfully authenticating each party of a web application is very important. A popular deployed technique for web authentication is the Hypertext Transfer Protocol Secure (HTTPS) protocol. However, the protocol does not protect careless users who connect to fraudulent websites from being trapped by tricks. For example, in a phishing attack, a web user who connects to an attacker may provide a password to the attacker, who can use it afterwards to log in to the target website and obtain the victim’s credentials. To prevent phishing attacks, the Two-Way Password Protocol (TPP) and Dynamic Two-Way Password Protocol (DTPP) were developed. However, there still exist potential security threats in those protocols. For example, an attacker who makes a fake website may obtain the hash of users’ passwords, and use that information to mount off-line password guessing attacks. Based on TPP, we incorporated challenge responses with arbitrary images to prevent off-line password guessing attacks in our new protocol, TPP with Challenge response using Arbitrary image (TPPCA). Besides TPPCA, we developed another scheme called Rain that solves the same problem by dividing shared secrets into several rounds of negotiations. We discussed various aspects of our protocols, their implementation, and experimental results.
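A minimal sketch of the general challenge-response idea this abstract builds on (my illustration under stated assumptions, not the actual TPPCA message flow): the client proves knowledge of the password by returning an HMAC, keyed on a password-derived secret, over a server-chosen challenge, so a fake site never observes a reusable password hash.

```python
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    # Password-derived secret; a real protocol would negotiate parameters.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def client_response(password: str, salt: bytes, challenge: bytes) -> bytes:
    # The response is useless to a phisher without the password itself.
    return hmac.new(derive_key(password, salt), challenge, hashlib.sha256).digest()

# Server side (hypothetical): stores salt and derived key, never the password.
salt = os.urandom(16)
server_key = derive_key("correct horse", salt)

# Per the abstract, TPPCA binds the challenge to an arbitrary image;
# here it is simply random bytes.
challenge = os.urandom(32)
response = client_response("correct horse", salt, challenge)
expected = hmac.new(server_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```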
10

Protection against malicious JavaScript using hybrid flow-sensitive information flow monitoring

Sayed, Bassam 02 March 2016 (has links)
Modern web applications use several third-party JavaScript libraries to achieve higher levels of engagement. These third-party libraries range from utility libraries such as jQuery to libraries that provide services such as Google Analytics and context-sensitive advertisement, and they have access to most (if not all) of the elements of the displayed webpage. This allows malicious third-party libraries to perform attacks that steal information from the end-user or perform an action without the end-user’s consent. These types of attacks are the stealthiest and the hardest to defend against, because they are agnostic to the end-user’s browser type and platform and at the same time rely on web standards when performing the attacks. Such attacks can perform actions using the victim’s browser without her permission, ranging from posting an embarrassing message on the victim’s behalf over her social network account to performing online bidding using the victim’s account. This poses the need to develop effective mechanisms for protecting against client-side web attacks that mainly target the end-user. In the proposed research, we address the above challenges from an information flow monitoring perspective by developing a framework that restricts the flow of information on the client side to legitimate channels. The proposed model tracks sensitive information flow in JavaScript code and prevents information leakage from happening. The main component of the framework is a hybrid flow-sensitive security monitor that controls, at runtime, the dissemination of information flow; the monitor is hybrid in that it combines static analysis with runtime monitoring of the running JavaScript program, into which it is inlined. We provide a soundness proof of the model with respect to a termination-insensitive non-interference security policy and develop a new security benchmark to establish experimentally its effectiveness in detecting and preventing illicit information flow. When applied in the context of client-side web-based attacks, the proposed model provides a more secure browsing environment for the end-user. / Graduate
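A toy sketch in the spirit of this kind of information flow monitoring (the labels and policy are my simplification; the dissertation's monitor operates on JavaScript and combines static analysis with an inlined runtime monitor): values carry security labels, labels join on computation, and a sink check blocks sensitive data from leaving through unauthorized channels.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    value: object
    secret: bool  # True = sensitive (e.g., a session cookie)

def join(a: Labeled, b: Labeled, combined) -> Labeled:
    # Any computation over a secret operand yields a secret result.
    return Labeled(combined, a.secret or b.secret)

def send_to(channel_trusted: bool, data: Labeled) -> None:
    # Sink check: secret data may only flow to legitimate channels.
    if data.secret and not channel_trusted:
        raise PermissionError("blocked: secret data to untrusted channel")
    print("sent:", data.value)

cookie = Labeled("SESSIONID=abc123", secret=True)
greeting = Labeled("hello", secret=False)
msg = join(greeting, cookie, greeting.value + " " + cookie.value)

send_to(channel_trusted=True, data=msg)       # allowed: first-party channel
try:
    send_to(channel_trusted=False, data=msg)  # third-party exfiltration
except PermissionError as e:
    print(e)
```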
