131

MITIGATION OF WEB-BASED PROGRAM SECURITY VULNERABILITY EXPLOITATIONS

Shahriar, Hossain 30 November 2011 (has links)
Over the last few years, web-based attacks have caused significant harm to users. Many of these attacks occur through the exploitation of common security vulnerabilities in web-based programs, so mitigating them is crucial to reducing their harmful consequences. Web-based applications contain vulnerabilities that can be exploited by attackers at the client side (browser) without the victim's (browser user's) knowledge. This thesis aims to mitigate exploitations of security vulnerabilities in web applications that occur while seemingly benign functionality is performed at the client side. For example, visiting a webpage might result in JavaScript code execution (cross-site scripting), downloading a file might lead to the execution of JavaScript code (content sniffing), clicking on a hyperlink might result in sending unwanted legitimate requests to a trusted website (cross-site request forgery), and filling out a seemingly legitimate form may eventually lead to the theft of credentials (phishing). Existing web-based attack detection approaches suffer from several limitations, such as (i) modification of both server- and client-side environments, (ii) exchange of sensitive information between the server and client, and (iii) lack of detection of some attack types. This thesis addresses these limitations by mitigating four security vulnerabilities in web applications: cross-site scripting, content sniffing, cross-site request forgery, and phishing. We mitigate the exploitations of these vulnerabilities by developing automatic attack detection approaches at both the server and client sides. We develop server-side attack detection frameworks to detect attack symptoms within response pages before sending them to the client. The approaches are designed on the assumption that the server-side program source is available for analysis, but that we are not allowed to alter the program code or the runtime environments.
Moreover, we develop client-side attack detection frameworks so that some level of protection is present when the source code of server websites (either trusted or untrusted) is not available. Our proposed solutions explore several techniques such as response page parsing and file content analysis, browser-level checking of requests and responses, and finite state machine-based behavior monitoring. The thesis evaluates the proposed attack detection approaches with real-world vulnerable programs. The evaluation results indicate that our approaches are effective and perform better than the related work. We also contribute to the development of benchmark suites for evaluating attack detection techniques. / Thesis (Ph.D, Computing) -- Queen's University, 2011-11-29 09:44:24.465
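The "finite state machine-based behavior monitoring" mentioned in this entry can be illustrated with a minimal sketch. This is not the thesis's actual state machine (its states and transitions are not given in the abstract); it is a hypothetical monitor for one symptom the abstract associates with client-side attacks, namely a form served by one origin submitting credentials to a different origin:

```python
# Minimal finite-state monitor, illustrative only: the states, events,
# and origin strings below are assumptions, not the machines defined in
# the thesis. It flags a cross-origin credential submission.

class FormMonitor:
    def __init__(self):
        self.state = "idle"
        self.form_origin = None

    def on_form_loaded(self, origin: str):
        """Transition: a login/credential form arrives from `origin`."""
        self.state = "form_loaded"
        self.form_origin = origin

    def on_submit(self, target_origin: str) -> str:
        """Transition on form submission; returns a verdict label."""
        if self.state != "form_loaded":
            return "unexpected"          # submit with no loaded form
        self.state = "idle"
        if target_origin != self.form_origin:
            return "suspicious"          # cross-origin credential submit
        return "ok"

monitor = FormMonitor()
monitor.on_form_loaded("bank.example")
print(monitor.on_submit("evil.example"))   # → suspicious
```

A real browser-level monitor would of course track many more events (redirects, script-initiated requests, response headers), but the pattern is the same: legitimate behaviour stays inside the machine's accepted transitions, and anything else is reported.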
132

DNS traffic based classifiers for the automatic classification of botnet domains

Stalmans, Etienne Raymond January 2014 (has links)
Networks of maliciously compromised computers known as botnets, consisting of thousands of hosts, have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks, and mount Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communication channels between the C2 nodes and endpoints employ numerous detection-avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures, which in turn can be used to detect DNS domains being used for C2 operation. This report details research into the implementation of three classes of classification techniques that exploit these signatures to accurately detect botnet traffic. The techniques described make use of the DNS query responses created when members of a botnet try to contact the C2 servers; traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers employs frequency analysis to detect the algorithmically generated domain names used by botnets; these were found to have a high degree of accuracy with a low false-positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers, and it is shown that with these characteristics Fast-Flux domains can be accurately identified and differentiated from legitimate domains (such as Content Distribution Networks, which exhibit similar behaviour).
The final set of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve. It is shown that botnet C2 servers can be detected solely on the basis of their geographic location, and that this technique clearly distinguishes between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of classifying DNS traffic is examined and shown to be at an acceptable level.
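The frequency-analysis idea in this entry can be sketched in a few lines (illustrative only; the thesis's actual classifiers, features, and thresholds are not reproduced here). Algorithmically generated domain labels tend to have higher character entropy than human-chosen ones, so a simple Shannon-entropy score already separates many of them. The 3.5-bit threshold below is an assumed value for illustration, not one taken from the thesis:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(label: str, threshold: float = 3.5) -> bool:
    """Flag a label as possibly algorithmically generated.

    The threshold is an illustrative assumption; a real classifier
    would be trained and validated on labelled DNS data.
    """
    return shannon_entropy(label) > threshold

# Human-chosen labels repeat common letters; DGA-style labels do not.
print(looks_generated("google"))            # → False
print(looks_generated("xq7kd92mfh3vplz1"))  # → True
```

A production system would combine such a score with n-gram frequencies learned from benign domains, since short generated labels can slip under any single-feature threshold.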
133

A framework for high speed lexical classification of malicious URLs

Egan, Shaun Peter January 2014 (has links)
Phishing attacks employ social engineering to target end-users, with the goal of stealing identifying or sensitive information. This information is used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting them. These URLs are obscured through several techniques that make automated detection difficult. Current methods for detecting malicious URLs face multiple problems which attackers use to their advantage, including the time required to react to new attacks, shifts in trends in URL obfuscation, and the usability problems caused by the latency of the lookups these approaches require. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation, and testing of these ANNs are gathered from PhishTank and the Open Directory. Words selected from the different sections of the samples are used to create a 'bag of words' (BOW), which serves as a binary input vector indicating the presence of a word in a given sample. Twenty additional features that measure lexical attributes of the sample are used to increase classification accuracy. A framework capable of generating these classifiers in an automated fashion is implemented. The classifiers are automatically stored on a remote update distribution service built to supply updates to classifier implementations. An example browser plugin is created that uses ANNs provided by this service; it is capable both of classifying URLs requested by a user in real time and of blocking these requests.
The framework is tested in terms of training time and classification accuracy. Classification speed and the effectiveness of compression algorithms on the data required to distribute updates are also tested. It is concluded that these ANNs can be generated frequently and in a form small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability.
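The bag-of-words encoding plus lexical features described in this entry can be sketched as follows. This is a simplified illustration: the vocabulary, the three lexical attributes, and the example URL are assumptions standing in for the thesis's harvested word lists and its twenty features, and no ANN is included:

```python
import re
from urllib.parse import urlparse

# Illustrative vocabulary; in the described framework this would be
# built from words harvested from PhishTank and Open Directory samples.
VOCAB = ["login", "secure", "account", "update", "bank", "verify"]

def bow_vector(url: str) -> list[int]:
    """Binary vector: 1 if the vocabulary word appears in the URL."""
    lowered = url.lower()
    return [1 if word in lowered else 0 for word in VOCAB]

def lexical_features(url: str) -> list[float]:
    """A few lexical attributes of the kind the abstract mentions
    (the thesis uses twenty; these three are illustrative)."""
    parsed = urlparse(url)
    return [
        float(len(url)),                     # overall URL length
        float(parsed.netloc.count(".")),     # subdomain depth
        float(len(re.findall(r"\d", url))),  # digit count
    ]

url = "http://secure-login.example.com/verify?account=1"
features = bow_vector(url) + lexical_features(url)
print(features)  # concatenated input vector for a classifier
```

The concatenated vector is what would be fed to the ANN's input layer; because it is computed from the URL string alone, classification needs no network lookups, which is the latency advantage the abstract highlights.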
134

Mitteilungen des URZ 3/2005

Ehrig, Matthias, Grunewald, Dietmar, Pöhnitzsch, Thomas, Richter, Frank, Riedel, Wolfgang, Schmidt, Ronald, Wegener, Edwin, Ziegler, Christoph 17 August 2005 (has links)
Information from the University Computing Centre (URZ): update of the antivirus software at TUC; WXPADM and WXPI; tips on providing teaching materials on the campus network; what's new at CLiC?; campus network access for guests of TUC (guest logins); phishing: attempted data theft by e-mail; brief notices.
135

Awareness-Raising and Prevention Methods of Social Engineering for Businesses and Individuals

Harth, Dominik, Duernberger, Emanuel January 2022 (has links)
A system is only as secure as the weakest link in the chain. Humans are the binding link between IT (information technology) security and physical security. In general, the human is often considered the weakest link in the chain, so social engineering attacks are used to manipulate or trick people in order to bypass security systems. Within this master thesis, we answer several research questions related to social engineering. Most important is to find out why humans are considered the weakest link and why existing guidelines are failing, as well as to achieve the goal of raising awareness and starting education at a young age. For this, we examine existing literature on the subject and create experiments, an interview, a campaign evaluation, and a survey. Our systematic work begins with an introduction, the methodology, a definition of social engineering, and explanations of state-of-the-art social engineering methods. The theoretical part of this thesis also includes ethical and psychological aspects and an evaluation of existing guidelines with a review of why they are not successful. Furthermore, we continue with the practical part. An interview is conducted with a professional security consultant focusing on social engineering from our collaboration company TÜV TRUST IT GmbH (TÜV AUSTRIA Group). A significant part here deals with awareness-raising overall, especially at a younger age. Additionally, the countermeasures against each different social engineering method are analysed. Another practical part is the evaluation of existing social engineering campaigns from TÜV TRUST IT GmbH (TÜV AUSTRIA Group) to see how dangerous and effective social engineering has been in the past. From the experience gained in this thesis, guidelines on dealing with social engineering are discussed before the thesis is finalized with results, the conclusion, and possible future work.
136

Har utbildningsbakgrund någon påverkan på "Phishabilty"? / Does educational background have any effect on "phishability"?

Grönberg, Alfred, Folemark, Patrik January 2021 (has links)
Phishing is a method used by online attackers to trick victims into sharing sensitive information such as banking details, passwords, or user credentials. The authors' aim with this study is to examine whether susceptibility to phishing differs depending on educational background: whether those with an educational background in IT perform better than those without it, or whether other factors explain why some people fall victim to phishing more easily. As systems become ever more secure, the human factor becomes the weak link, for if anything is certain it is that people make mistakes. It is therefore a matter of minimizing these risks and staying at the forefront in countering cybercrime. It is important to find out why someone falls for phishing and how people's ability to identify a phishing attack can be strengthened before it is too late, since failure to do so can have negative consequences. The results were obtained through a survey that examined the ability to identify phishing e-mails. This was done with a test in which respondents were shown realistic examples of various phishing methods in the form of e-mails and asked to identify whether each was phishing or authentic. The results show that respondents with an educational background in IT found it easier to distinguish which e-mails were phishing and which were authentic. The study also replicated earlier findings that women as a group are somewhat more susceptible to phishing.
137

Secure web applications against off-line password guessing attack : a two way password protocol with challenge response using arbitrary images

Lu, Zebin 14 August 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Web applications are now used in many security-oriented areas, including online shopping and e-commerce, which require users to transmit sensitive information over the Internet. Successfully authenticating each party of a web application is therefore very important. A popular deployed technique for web authentication is the Hypertext Transfer Protocol Secure (HTTPS) protocol. However, the protocol does not protect careless users who connect to fraudulent websites from being tricked. For example, in a phishing attack, a web user who connects to an attacker may provide a password to the attacker, who can use it afterwards to log in to the target website and obtain the victim's credentials. To prevent phishing attacks, the Two-Way Password Protocol (TPP) and Dynamic Two-Way Password Protocol (DTPP) were developed. However, potential security threats still exist in those protocols: an attacker who sets up a fake website may obtain the hash of users' passwords and use that information to mount offline password-guessing attacks. Based on TPP, we incorporated challenge responses with arbitrary images to prevent offline password-guessing attacks in our new protocol, TPP with Challenge response using Arbitrary image (TPPCA). Besides TPPCA, we developed another scheme, called Rain, that solves the same problem by dividing shared secrets into several rounds of negotiation. We discuss various aspects of our protocols, the implementation, and experimental results.
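The challenge-response idea this entry builds on, proving knowledge of a password without sending a reusable hash, can be sketched generically. This is not the TPPCA protocol itself (its messages and image-based challenges are not detailed in the abstract); it is a minimal illustration of why a fresh random challenge blunts replay of a captured response:

```python
import hashlib
import hmac
import os

# Server side: issues a fresh random challenge per login attempt.
def make_challenge() -> bytes:
    return os.urandom(16)

# Client side: respond with HMAC(password-derived key, challenge),
# so a captured response is bound to one challenge and cannot be
# replayed against a later one.
def respond(password: str, challenge: bytes) -> bytes:
    key = hashlib.sha256(password.encode()).digest()
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server verifies by recomputing with its stored verifier. Storing a
# bare SHA-256 of the password is for illustration only; a real
# deployment would use a salted, memory-hard verifier.
def verify(stored_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(stored_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

pw = "correct horse"
stored = hashlib.sha256(pw.encode()).digest()
ch = make_challenge()
print(verify(stored, ch, respond(pw, ch)))        # → True
print(verify(stored, ch, respond("guess", ch)))   # → False
```

Note that this sketch alone does not stop offline guessing by a fake server, which is exactly the gap the abstract says TPPCA and Rain address with image-based challenges and multi-round secret splitting.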
138

Åtgärder mot rådande nätfiskeattacker på sociala medier : En kvalitativ studie / Measures against prevailing phishing attacks on social media : A qualitative study

Pan, Enming, Ahmad, Al-Asadi January 2022 (has links)
Phishing on social media can cause severe consequences for users and organizations. It is also a technological attack with a psychological aspect that causes the receiver of a phishing message to behave in a specific manner. The content and delivery method of phishing messages can change drastically. The researchers intend to determine whether phishing as an attack has changed over time, how and why users are affected, users' security awareness, and third parties' recommendations for protection against phishing. Qualitative strategies were used in this study primarily to capture variables that quantitative strategies would not, such as respondents' experience of phishing, and to provide the nuanced responses needed to answer the study's research question. Thematic and content analysis gave the researchers a systematic process for sifting and sorting primary and secondary data; these methods simplified data processing through the coding, categorizing, and organizing of relevant material. Comparing primary and secondary data revealed consistency that is still prevalent today: all respondents and the literature show that every phishing message contains specific factors that make victims act emotionally rather than think logically. Social media users often click unknown links without further consideration or proper reading. Studies have shown that the phishing trend has increased because of how cheap it is to obtain the tools needed to send phishing messages. 
The authors analyzed primary and secondary data on countermeasures against phishing in order to compile and update a list of measures against current phishing. This list of anti-phishing guidance comes from the accumulated results of each specific study, fragmented data, and respondents' views of phishing security. The essential advice the authors identified contains three common themes that a phishing message almost always includes: urgency, greed, and fear. By understanding these three themes, users can better identify phishing messages.
139

Understanding and Combating Online Social Deception

Guo, Zhen 02 May 2023 (has links)
In today's world, online communication through social network services (SNSs) has become an essential part of people's daily lives. As SNSs have become more sophisticated, cyber attackers have found ways to exploit them for harmful activities such as financial fraud, privacy violations, and sexual or labor exploitation. It is therefore imperative to understand these activities and to develop effective countermeasures to build SNSs that can be trusted. Existing approaches have focused on detection mechanisms for particular types of online social deception (OSD) using artificial intelligence (AI) techniques, including machine/deep learning (ML/DL) or text mining. However, fewer studies exist on prevention and response (or mitigation) mechanisms for effective defense against OSD attacks, and there has been insufficient effort to investigate the underlying intents and tactics of OSD attackers. This dissertation takes defense approaches to combating OSD attacks through an in-depth understanding of the psychological-social behaviors of attackers and potential victims, which can guide more proactive action against OSD attacks, minimizing damage to potential victims and reducing recovery costs. We examine OSD attacks mainly through two tasks: understanding their causes, and combating them in terms of prevention, detection, and mitigation. In the OSD understanding task, we investigate the intent and tactics of false informers (e.g., fake news spreaders) in propagating fake news or false information. We infer false informers' intent more accurately from intent-related phrases in fake news contexts in order to decide on effective and efficient defenses (or interventions) against them. 
In the OSD combating task, we develop defense systems through two sub-tasks: (1) a social capital-based friending recommendation system that guides OSN users to choose trustworthy users, proactively defending against phishing attackers; and (2) a defensive opinion update framework that lets OSN users process their opinions while filtering out false information. The schemes proposed for combating OSD attacks contribute to the prevention, detection, and mitigation of OSD attacks. / Doctor of Philosophy / This Ph.D. dissertation explores the issue of online social deception (OSD) in the context of social networking services (SNSs). With the increasing sophistication of SNSs, cyber attackers have found ways to exploit them for harmful activities such as financial fraud and privacy violations. While previous studies have focused on detection mechanisms using artificial intelligence (AI) techniques, this dissertation takes a defense approach by investigating the underlying psychological-social behaviors of attackers and potential victims. Through two tasks, understanding the causes of OSD and combating it with various AI approaches, the dissertation proposes a social capital-based friending recommendation system, a defensive opinion update framework, and a fake news spreaders' intent analysis framework to guide SNS users in choosing trustworthy users and filtering out phishing attackers and false information. The proposed schemes contribute to the prevention, detection, and mitigation of OSD attacks, minimizing damage to victims and saving recovery costs.
140

E-crimes and e-authentication - a legal perspective

Njotini, Mzukisi Niven 27 October 2016 (has links)
E-crimes continue to pose grave challenges to the ICT regulatory agenda. Because e-crimes involve the wrongful appropriation of information online, it is asked whether information is property capable of being stolen. This requires an investigation of the law of property, the basis for which is to establish whether information is property for purposes of the law. Following a study of the Roman-Dutch law approach to property, it is argued that the emergence of an information society makes real rights in information possible, because information is one of the indispensable assets of an information society. Given that information can be the object of property, its position in the law of theft is investigated. This study is followed by an examination of the conventional risks that ICTs generate: for example, the risk that ICTs may be used as the object of e-crimes, and the risk that ICTs may become a tool for appropriating information unlawfully. Accordingly, the scale and impact of e-crimes are greater than those of offline crimes such as theft or fraud. The severe challenges that ICTs pose to an information society are likely to continue unless clarity is sought on whether ICTs can be regulated and, if so, how an ICT regulatory framework should be structured. A study of law and regulation for regulatory purposes reveals that ICTs are spheres where regulations apply or should apply; however, better regulations are appropriate for dealing with the dynamics of these technologies. Smart regulation, meta-regulation or reflexive regulation, self-regulation and co-regulation are concepts that support better regulation, which enjoins the regulatory actors, for example the state, businesses and computer users, to be involved in establishing ICT regulations.
These ICT regulations should be in keeping with existing e-authentication measures. Furthermore, the codes-based theory, the Danger or Artificial Immune Systems (AIS) theory, the Systems theory and the Good Regulator Theorem ought to inform ICT regulations. The basis for all this should be to establish a holistic approach to e-authentication. This approach must conform to the Precautionary Approach to E-Authentication, or PAEA. PAEA accepts the importance of legal rules in the ICT regulatory agenda, but argues that flexible regulations could provide a suitable framework within which ICTs and ICT risks are controlled. In addition, PAEA submits that the state should not be the sole role-player in ICT regulation; social norms, the market, and the nature or architecture of the technology to be regulated are also fundamental to the ICT regulatory agenda. / Jurisprudence / LL.D.
