  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Search Rank Fraud Prevention in Online Systems

Rahman, Md Mizanur 31 October 2018 (has links)
The survival of products in online services such as Google Play, Yelp, Facebook and Amazon is contingent on their search rank. This, along with the social impact of such services, has turned them into a lucrative medium for fraudulently influencing public opinion. Motivated by the need to aggressively promote products, communities that specialize in social network fraud (e.g., fake opinions and reviews, likes, followers, app installs) have emerged, creating a black market for fraudulent search optimization. Fraudulent product developers exploit these communities to hire teams of workers willing and able to commit fraud collectively, emulating realistic, spontaneous activities from unrelated people. We call this behavior “search rank fraud”. In this dissertation, we argue that fraud needs to be proactively discouraged and prevented, instead of only reactively detected and filtered. We introduce two novel approaches to discourage search rank fraud in online systems. First, we detect fraud in real time, as it is posted, and impose resource-consuming penalties on the devices that post the activities. We introduce and leverage several novel concepts, including (i) stateless, verifiable computational puzzles that impose minimal performance overhead yet enable efficient verification of their authenticity, (ii) a real-time, graph-based solution to assign fraud scores to user activities, and (iii) mechanisms to dynamically adjust puzzle difficulty levels based on fraud scores and the computational capabilities of devices. In the second approach, we introduce the problem of fraud de-anonymization: revealing the crowdsourcing-site accounts, and thus the bank accounts, of the people who post large amounts of fraud, and providing compelling evidence of fraud to the users of the products they promote. We investigate the ability of our solutions to ensure that fraud does not pay off.
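The puzzle-with-adaptive-difficulty idea described above can be sketched roughly as follows. This is an illustrative proof-of-work-style construction, not the dissertation's actual scheme; the function names, the hash-prefix puzzle, and the fraud-score-to-difficulty mapping are all our assumptions.

```python
import hashlib

def difficulty_for(fraud_score: float, base: int = 1, max_extra: int = 4) -> int:
    """Map a fraud score in [0, 1] to a puzzle difficulty level (assumed linear)."""
    return base + round(fraud_score * max_extra)

def solve_puzzle(seed: str, difficulty: int) -> int:
    """Client side: brute-force a nonce so sha256(seed:nonce) has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_puzzle(seed: str, nonce: int, difficulty: int) -> bool:
    """Server side: verification is a single hash, cheap and stateless given the seed."""
    digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: solving costs the posting device exponentially more work as the fraud score rises, while verification stays a single hash.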
22

Study of the techniques used by OWASP ZAP for analysis of vulnerabilities in web applications / En studie av de tekniker OWASP ZAP använder för att analysera sårbarheter i webbapplikationer

Jakobsson, Adam, Häggström, Isak January 2022 (has links)
Today, new web applications are created every day, managing increasingly sensitive data. To ensure that no security vulnerabilities such as data leakage exist in web applications, developers use tools such as web vulnerability scanners. This type of tool detects vulnerabilities by automatically finding input fields where data can be injected and performing different attacks on those fields. One of the most common web vulnerability scanners is OWASP ZAP. Web vulnerability scanners were first developed at a time when traditional multi-page applications were prominent. Nowadays, when modern single-page applications have become the de facto standard, new challenges for web vulnerability scanners have arisen, including identifying dynamically updated web pages. This thesis evaluates the techniques used by OWASP ZAP and several other web vulnerability scanners for identifying two of the most common vulnerabilities, SQL injection and cross-site scripting. The issue is approached by testing the selected web vulnerability scanners on deliberately vulnerable web applications, to assess the performance and techniques used and to determine whether the performance of OWASP ZAP could be improved. If an identified technique in another web vulnerability scanner performed better than its counterpart in OWASP ZAP, it was implemented in OWASP ZAP and evaluated. From the tests performed, it could be concluded that the performance of OWASP ZAP was lacking in the search for input fields, where a depth-first search algorithm was used. The breadth-first search algorithm used by other scanners was shown to be more effective in specific cases and was therefore implemented in OWASP ZAP. The results show that the preferred algorithm differs between web applications, and that using both algorithms to find vulnerabilities achieves better performance.
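The BFS-versus-DFS crawling difference the abstract hinges on can be shown with a toy link graph. The pages, links, and field names below are invented for illustration; a real scanner crawls live HTML, not a dictionary.

```python
from collections import deque

# Toy link graph standing in for a web application: each page lists its
# outgoing links and the input fields found on it (all names are invented).
PAGES = {
    "/":      {"links": ["/shop", "/about"], "fields": []},
    "/shop":  {"links": ["/item"],           "fields": ["search"]},
    "/about": {"links": [],                  "fields": ["contact_email"]},
    "/item":  {"links": [],                  "fields": ["review", "rating"]},
}

def crawl(start: str, strategy: str = "bfs"):
    """Collect input fields; `strategy` selects breadth-first or depth-first order."""
    frontier = deque([start])
    seen, fields = {start}, []
    while frontier:
        # A deque serves both roles: popleft() gives a FIFO queue (BFS),
        # pop() gives a LIFO stack (DFS).
        page = frontier.popleft() if strategy == "bfs" else frontier.pop()
        fields.extend(PAGES[page]["fields"])
        for link in PAGES[page]["links"]:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return fields
```

Both strategies find the same fields in the end; what differs is the order of discovery, which matters when a scan has a time budget or when deep pages are generated dynamically.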
23

Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities

Huang, Jin 01 September 2021 (has links)
No description available.
24

Exploring the Evolution of the TLS Certificate Ecosystem

Farhan, Syed Muhammad 01 June 2022 (has links)
A vast majority of popular communication protocols for the internet employ TLS (Transport Layer Security) to secure communication. As a result, there have been numerous efforts, including the introduction of Certificate Transparency logs and free automated CAs, to improve the SSL certificate ecosystem. Our work highlights the effectiveness of these efforts using the Certificate Transparency dataset as well as certificates collected via full IPv4 scans. We show that a large proportion of invalid certificates still exists, and outline why these certificates are invalid and where they are hosted. Moreover, we show that the incorrect use of template certificates has led to incorrect SCTs being embedded in the certificates. Taken together, our results emphasize the need for continued involvement by the research community to improve the web's PKI ecosystem. / Master of Science / Security and privacy for communication over the internet are increasingly important. TLS (Transport Layer Security) is the most popular protocol used to secure communications over the internet today. This work explores how this protocol has evolved over the past 9 years and how effective the measures undertaken by the community have been at improving adherence to best practices in the wild. TLS employs certificates to initialize secure communication and to make sure the other party is indeed who they say they are. We show that while security has improved over the years, a majority of certificates are invalid, and we outline the reasons why. We also observe the growth of Certificate Transparency logs and show how the use of template certificates causes unexpected issues. Taken together, our results emphasize the need for continued involvement by the research community to improve the TLS certificate ecosystem.
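The kind of invalidity triage the abstract describes can be sketched in miniature. This is not the thesis's pipeline: the record layout and the three checks (validity window and a naive self-signed test) are assumptions chosen to illustrate how scan data might be classified.

```python
from datetime import datetime

def classify(cert: dict, now: datetime) -> str:
    """Assign one coarse invalidity reason to a certificate record.

    `cert` is an assumed flat record with notBefore/notAfter datetimes and
    issuer/subject strings; real classification would inspect the chain too.
    """
    if now < cert["not_before"]:
        return "not-yet-valid"
    if now > cert["not_after"]:
        return "expired"
    if cert["issuer"] == cert["subject"]:
        return "self-signed"
    return "valid"
```

A measurement study would fold counts of these labels over millions of records to produce the proportions the abstract refers to.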
25

From Theory to Practice: Deployment-grade Tools and Methodologies for Software Security

Rahaman, Sazzadur 25 August 2020 (has links)
Following proper guidelines and recommendations is crucial in software security, but is often obstructed by accidental human error. Automatic screening tools have great potential to reduce the gap between theory and practice. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. The main technical enabler for CryptoGuard is a set of detection algorithms that refine program slices by leveraging language-specific insights, while TaintCrypt relies on symbolic-execution-based path-sensitive analysis to reduce false positives. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, demonstrating their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous previously unknown vulnerabilities. I also designed a specification language named SpanL to easily express rules for automated code screening; SpanL enables domain experts to create domain-specific security checks. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking has induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, as they issue certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant.
To improve the testbeds in light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the companies that host, manage, and maintain the PCI certification testbeds. / Doctor of Philosophy / Automatic screening tools have great potential to reduce the gap between the theory and the practice of software security. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, demonstrating their effectiveness. Oracle has implemented our cryptographic code screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous previously unknown vulnerabilities. I also designed a specification language named SpanL to easily express rules for automated code screening; SpanL enables domain experts to create domain-specific security checks. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking has induced a huge gap in the payment card industry (PCI) ecosystem. I showed that none of the six PCI scanners we tested is fully compliant with the guidelines, as they issue certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant. To improve the testbeds in light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the companies that host the PCI certification testbeds.
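To give a flavor of rule-based cryptographic code screening, here is a deliberately tiny analogue. CryptoGuard refines program slices in Java bytecode; the sketch below merely walks a Python AST and flags string constants that name weak algorithms in any call argument, so the rule set, matching logic, and target language are all our simplifications.

```python
import ast

# Illustrative deny-list; real screening rules are far richer.
WEAK_ALGORITHMS = {"DES", "RC4", "MD5", "ECB"}

def find_weak_crypto(source: str):
    """Return (line, literal) pairs where a call argument names a weak algorithm.

    Handles Java-style transformation strings like "DES/ECB/PKCS5Padding"
    by splitting on "/" before matching.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for arg in node.args:
                if isinstance(arg, ast.Constant) and isinstance(arg.value, str):
                    tokens = set(arg.value.upper().split("/"))
                    if tokens & WEAK_ALGORITHMS:
                        findings.append((node.lineno, arg.value))
    return findings
```

A constant-only matcher like this is exactly what produces false positives on dead or test code; the dissertation's slice refinement exists to prune such spurious findings.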
26

Biometric authentication systems for secured e-transactions in Saudi Arabia : an empirical investigation of the factors affecting users' acceptance of fingerprint authentication systems to improve online security for e-commerce and e-government websites in Saudi Arabia

Al-Harby, Fahad Mohammed January 2010 (has links)
Security is becoming an increasingly important issue for business, and with it comes the need for appropriate authentication; consequently, it is becoming gradually more important to develop secure e-commerce systems. Fraud via the web, identity theft, and phishing are raising concerns for users and financial organisations. In addition, current authentication methods, like passwords, have many problems (e.g. some users write them down, they forget them, or they make them easy to hack). We can overcome these drawbacks by using biometric authentication systems. Biometric systems are being used for personal authentication in response to the rising issue of authentication and security. Biometrics provide much promise, in terms of preserving our identities without the inconvenience of carrying ID cards and/or remembering passwords. This research is important because the securing of e-commerce transactions is becoming increasingly important. Identity theft, hacking and viruses are growing threats to Internet users. As more people use the Internet, more identity theft cases are being reported. This could harm not only the users, but also the reputation of the organisations whose names are used in these illegal acts. For example, in the UK, online banking fraud doubled in 2008 compared to 2007. More users took to e-shopping and online banking, but failed to take necessary protection. For non-western cultures, the figures for web security, in 2008, illustrated that Saudi Arabia was ranked ninth worldwide for users who had been attacked over the web. The above statistics reflect the significance of information security with e-commerce systems. As with any new technology, user acceptance of the new technology is often hard to measure. In this thesis, a study of user acceptance of biometric authentication systems in e-transactions, such as online banking, within Saudi society was conducted. It examined whether Saudis are practically willing to accept this technology. 
This thesis focuses upon Saudi Arabia, which has developing economy. It has achieved a rapid rate of growth, and therefore makes an interesting and unique case study. From an economist's point of view, Saudi Arabia is the powerhouse of the Middle East. It has the leading regional economy, and, even though it is still relatively young. It has a young and rapid growing population; therefore, this makes Saudi Arabia an attractive potential market for all kinds of e-commerce applications. Having said that, with more than half of population under the age of 30 are more to be expected to take the risk of accepting new technology. For this work, 306 Saudi participants were involved in the experiments. A laboratory experiment was created that actively tested a biometric authentication system in combination with a survey. The Technology Acceptance Model (TAM) was adopted in the first experimental phase as the theoretical basis on which to develop the iv research framework, the model has proven its efficiency as a good predictor for the biometric authentication system. Furthermore, in a second experimental phase, the Unified Theory of Acceptance and Use of Technology (UTAUT) with moderating variables such as age, gender and education level was examined as a proposed conceptual framework to overcome the limitations of TAM. The aim of the study was to explore factors affecting users' acceptance of biometric authentication systems. The findings from Structural Equation Modelling (SEM) analysis indicate that education level is a significant moderating factor, while gender and age do not record as significant. This thesis added new knowledge to this field and highlighted the importance of the perceptions of users regarding biometric security technologies. It helps determine the factors affecting the acceptance of biometric technology. To our knowledge, this is the first systematic study of this issue carried out by academic and non-biased researchers in Saudi Arabia. 
Furthermore, the thesis presents security technology companies and developers of information security products with information to help in the determination of what is significant to their user base when taking into account the introduction of new secure systems and products.
27

End-to-End Security of Information Flow in Web-based Applications

Singaravelu, Lenin 25 June 2007 (has links)
Web-based applications and services are increasingly being used for security-sensitive tasks. Current security protocols rely on two crucial assumptions to protect the confidentiality and integrity of information: first, that the end-point software used to handle security-sensitive information is free from vulnerabilities; second, that communication between a client and a service provider is point-to-point. However, these assumptions do not hold with large, complex, and vulnerable end-point software such as the Internet browser or web services middleware, or in web service compositions where multiple value-adding service providers can be interposed between a client and the original service provider. To address the problem of large and complex end-point software, we present the AppCore approach, which uses manual analysis of information flow, as opposed to purely automated approaches, to split existing software into two parts: a simplified trusted part that handles security-sensitive information, and a legacy, untrusted part that handles non-sensitive information without access to sensitive information. Not only does this approach avoid many common and well-known vulnerabilities in the legacy software that compromised sensitive information, it also greatly reduces the size and complexity of the trusted code, making exhaustive testing or formal analysis more feasible. We demonstrate the feasibility of the AppCore approach by constructing AppCores for two real-world applications: a client-side AppCore for https-based applications and an AppCore for web service platforms. Our evaluation shows that security improvements and complexity reductions (over a factor of five) can be attained with minimal modifications to existing software (a few tens of lines of code, and the proxy settings of a browser) and an acceptable performance overhead (a few percent).
To protect the communication of sensitive information between the clients and service providers in web service compositions, we present an end-to-end security framework called WS-FESec that provides end-to-end security properties even in the presence of misbehaving intermediate services. We show that WS-FESec is flexible enough to support the lattice model of secure information flow and it guarantees precise security properties for each component service at a modest cost of a few milliseconds per signature or encrypted field.
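The "lattice model of secure information flow" that WS-FESec is said to support can be illustrated with a minimal check. The three-level lattice, the service labels, and the hop-by-hop policy below are illustrative assumptions, not taken from the dissertation.

```python
# A totally ordered three-level lattice; real lattices may be partial orders.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

def may_flow(src: str, dst: str) -> bool:
    """Information may only flow upward (or sideways) in the lattice."""
    return LEVELS[src] <= LEVELS[dst]

def check_composition(route, labels) -> bool:
    """Check that every hop of a service composition respects the lattice.

    `route` is an ordered list of service names, `labels` maps each service
    to its clearance level; both are hypothetical inputs for this sketch.
    """
    return all(may_flow(labels[a], labels[b]) for a, b in zip(route, route[1:]))
```

The framework's job is then to enforce such a policy end to end, so that a misbehaving intermediate service labeled `confidential` cannot receive `secret` data in the first place.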
28

Practical authentication in large-scale internet applications

Dacosta, Italo 03 July 2012 (has links)
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
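The One-Time Cookies idea, as described, replaces a static session cookie with per-request credentials. The sketch below is a hedged approximation: deriving each token with an HMAC over a counter and the requested resource, and rejecting counter reuse, conveys the flavor, but the exact token construction and verification policy of OTC are not reproduced here.

```python
import hashlib
import hmac

def make_token(secret: bytes, counter: int, resource: str) -> str:
    """Client side: derive a fresh per-request token from the session secret."""
    msg = f"{counter}:{resource}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

class Verifier:
    """Server side: recompute the token and refuse replays of old counters."""

    def __init__(self, secret: bytes):
        self.secret = secret
        self.last_counter = -1

    def verify(self, token: str, counter: int, resource: str) -> bool:
        if counter <= self.last_counter:
            return False  # replayed or stale counter
        expected = make_token(self.secret, counter, resource)
        if hmac.compare_digest(expected, token):
            self.last_counter = counter
            return True
        return False
```

Because each token is bound to a counter and a resource, a token captured in transit is useless for any other request, which is precisely the weakness of a long-lived session cookie.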
29

Web System Security: From an API and Web Application Perspective / Webbsystem säkerhet : Ur ett API och webbapplikations perspektiv

Månsson, Anton January 2017 (has links)
Web applications and APIs have become more popular every year, and security risks have increased. Along with the growing number of security risks and the large amount of sensitive information shared in web applications today, the problem grows. I therefore wanted to explore security deficiencies further, to increase my own knowledge and that of others in the field. To do that, a web application was developed and a survey was made of what security threats exist today and what solutions they have. Some of the solutions encountered during the investigation were then implemented and tested in the web application. The result showed some general solutions, such as validation, which addressed a number of threats. The investigation also showed that security is not black and white: it is possible to implement countermeasures, but attackers can still find ways to attack systems.
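The point that validation-style measures counter several threats at once can be made concrete. The sketch below shows two standard countermeasures: parameterized SQL queries (against injection) and output escaping (against reflected XSS). The schema and payloads are invented for the demo, and this is a generic illustration rather than the thesis's implementation.

```python
import sqlite3
from html import escape

# In-memory database with one demo row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name: str):
    """Placeholder binding keeps the input as data -- it is never parsed as SQL."""
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# A classic injection payload: harmless here, because it is bound, not spliced.
injection_result = find_user("' OR '1'='1")

# Escaping untrusted output before rendering neutralizes script injection.
rendered = escape("<script>alert(1)</script>")
```

String concatenation in place of the `?` placeholder would have turned the same payload into a query returning every row, which is why the surveyed scanners attack exactly these input fields.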
30

Security as a Complex Requirement in Agile Software Development: Developing an Application Pattern for Addressing IT Security in Agile Development Cycles Using a Metadata-driven Testing Approach / Security als komplexe Anforderung an agile Softwareentwicklung: Erarbeitung eines Anwendungsmusters zur Betrachtung der IT-Security in agilen Entwickungszyklen anhand eines metadatengestützen Testing-Verfahrens

Matkowitz, Max 26 April 2022 (has links)
Agile software development stands, with its principles, for open collaboration, lightweight frameworks and rapid adaptation to change. These characteristics addressed problems and dissatisfaction with traditional software development. On the IT-security side, however, a variety of challenges have become apparent. Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) provided first approaches to a solution, but they do not offer a satisfactory way to integrate security testing into agile software development, particularly in a cloud context. This thesis addresses the following question: how can a practical concept for examining the security of application code, containers and clusters within agile development cycles be realized when a metadata-based testing approach is to be used? The goal thus splits into the design and realization of two aspects: metadata-based security testing of code, container and cluster, and the development workflow for applying the testing approach. A web-development case study was used for the qualitative evaluation of a prototype implemented with Python and GitLab. After establishing the framework conditions, concrete scenarios of a development process were walked through. The qualitative evaluation showed successful detection of vulnerabilities in different categories (e.g. Broken Access Control). Overall, the approach embedded well into the exemplary development workflow. The effort required to maintain the metadata is not negligible, but since the metadata follows the established OpenAPI schema, it should not be weighted too heavily.
This is especially true when the metadata yields added value (feasibility, speed, convenience). Contents: 1 Introduction; 1.1 Problem description; 1.2 Objectives; 1.3 State of the art and development methods; 1.4 Methodology; 2 Theoretical and technical foundations; 2.1 Foundations of agile software development; 2.2 GitLab; 2.3 Foundations of metadata-driven security testing; 3 Design; 3.1 Low-level model (test workflow); 3.2 Synthesis of the example test cases; 3.3 Description file; 3.4 High-level model (development workflow); 4 Implementation; 4.1 Test workflow; 4.2 CI/CD pipeline; 4.3 Case study of agile software development; 5 Evaluation and outlook
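The metadata-driven idea, deriving security test cases from an OpenAPI-style description rather than crawling blindly, can be sketched as follows. The spec fragment, payload list, and case format are illustrative assumptions, not the thesis's actual description file or pipeline.

```python
# Minimal OpenAPI-like metadata: which endpoints exist and which parameters
# they accept (a real spec would also carry types, auth, and schemas).
SPEC = {
    "paths": {
        "/login": {"post": {"parameters": ["username", "password"]}},
        "/items": {"get": {"parameters": ["search"]}},
    }
}

# Attack payloads to try against every parameter (kept tiny for the demo).
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]

def generate_cases(spec: dict):
    """Emit one test case per (path, method, parameter, payload) combination."""
    cases = []
    for path, methods in spec["paths"].items():
        for method, meta in methods.items():
            for param in meta["parameters"]:
                for payload in PAYLOADS:
                    cases.append({"path": path, "method": method,
                                  "param": param, "payload": payload})
    return cases
```

Because the endpoints and parameters come from metadata, a CI job can enumerate them deterministically on every pipeline run, which is the feasibility and speed advantage the evaluation attributes to the approach.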
