1. Rozpoznávání CAPTCHA / CAPTCHA Recognition. Klika, Jan. January 2014.
This thesis describes the design and implementation of an application for breaking CAPTCHAs. It also covers the history and evolution of CAPTCHA, the ways it is generated, and possible techniques for breaking it. The thesis focuses on newer types of CAPTCHA based on hard character segmentation, so its main goal is the design and implementation of a new segmentation method that allows the recognition of modern CAPTCHAs, especially reCAPTCHA.
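As an illustration of the baseline such work moves beyond (not the thesis's actual method), a minimal Python sketch of classic projection-based character segmentation might look like this; the threshold values and the helper name are assumptions for the example.

```python
# A generic vertical-projection segmenter: binarise the CAPTCHA and cut it into
# character candidates wherever a column contains no ink. Modern CAPTCHAs with
# touching characters defeat exactly this, which motivates harder segmentation.
import numpy as np
from PIL import Image

def segment_by_projection(path, ink_threshold=128, min_width=3):
    img = np.array(Image.open(path).convert("L"))
    ink = (img < ink_threshold).astype(int)      # 1 where a pixel is dark "ink"
    column_ink = ink.sum(axis=0)                 # vertical projection profile
    segments, start = [], None
    for x, count in enumerate(column_ink):
        if count > 0 and start is None:
            start = x                            # entering a character region
        elif count == 0 and start is not None:
            if x - start >= min_width:
                segments.append((start, x))      # leaving a character region
            start = None
    if start is not None:
        segments.append((start, len(column_ink)))
    return [img[:, a:b] for a, b in segments]    # one array per candidate glyph

# characters = segment_by_projection("captcha.png")
```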
2. 植基於圖像內涵之自動化人機區分機制 / A CAPTCHA Mechanism by Exchanging Image Blocks. Liao, I-Chi (廖奕齊). Unknown date.
The need to tell humans and machines apart has surged due to the abuse of automated 'bots'. However, several textual-image-based CAPTCHAs have been defeated recently. In this thesis, we propose a simple yet effective visual CAPTCHA test based on exchanging the content of non-overlapping regions in an image. Using a few simple steps, the algorithm produces a discrimination mechanism that is difficult for machines to analyze but easy for humans to pass. We have tested the robustness of the proposed method by exploring different ways to attack this CAPTCHA and the corresponding counter-attack measures. Additionally, we have carried out an in-depth analysis of the choice of parameters and of the image database. Finally, eye-tracking experiments have been conducted to examine and compare the gaze paths for different visual tasks.
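A minimal Python sketch of the core operation, swapping two equally sized non-overlapping blocks of an image, could look as follows; the block size, the rejection-sampling placement, and the exact form of the user's task are assumptions, not details taken from the thesis.

```python
import random
from PIL import Image

def exchange_blocks(path, block=(80, 80), rng=random):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    bw, bh = block
    # Pick two non-overlapping top-left corners (simple rejection sampling).
    while True:
        x1, y1 = rng.randrange(w - bw), rng.randrange(h - bh)
        x2, y2 = rng.randrange(w - bw), rng.randrange(h - bh)
        if abs(x1 - x2) >= bw or abs(y1 - y2) >= bh:
            break
    a = img.crop((x1, y1, x1 + bw, y1 + bh))
    b = img.crop((x2, y2, x2 + bw, y2 + bh))
    img.paste(b, (x1, y1))
    img.paste(a, (x2, y2))
    # The server keeps the block coordinates as the ground truth for grading.
    return img, ((x1, y1), (x2, y2))

# puzzle_image, swapped_regions = exchange_blocks("photo.jpg")
```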
3. O uso de captchas de áudio no combate ao spam em telefonia IP / The Use of Audio CAPTCHAs to Combat Spam in IP Telephony. Tiago Tavares Madeira, Frederico. 31 January 2011.
Spam is the term used to refer to unsolicited e-mail, usually sent to a large number of people, and it is currently considered one of the biggest problems on the Internet. With the growing availability of broadband, the popularization of Internet technologies, and the increasing use and deployment of VoIP (Voice over IP) solutions, a similar problem is expected to affect this new area. This threat is known as SPIT (SPAM over Internet Telephony) and is defined as the automated generation of unsolicited calls transported over IP via VoIP instead of traditional telephone lines.
The potential of SPIT to reduce productivity is much greater than that of SPAM, because with SPIT a person's time is already being consumed from the moment the phone starts ringing. Moreover, SPIT is not merely a nuisance to an individual user: when directed against a network, it can consume the network's resources, hindering or even preventing their use.
The characteristics of SPIT differ from those of SPAM, so the same techniques used against SPAM cannot be applied to SPIT attacks. In this work we propose a tool to identify SPIT attacks and protect a VoIP network against them. As seen with other types of threats to data networks, a single method is not enough to guarantee protection and identification of attacks. Therefore, in our approach, the developed tool uses Reverse Turing Tests in the form of an audio-message CAPTCHA to determine whether or not a call is SPIT. This technique is applied together with other techniques described throughout the text, composing a prevention and identification tool intended to provide better protection against SPIT attacks in VoIP-based networks.
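As a rough illustration of the audio-CAPTCHA idea (hypothetical hooks, not the tool built in the thesis), an incoming call could be screened roughly like this; `play_prompt` and `collect_dtmf` stand in for whatever primitives the telephony platform actually provides.

```python
import random

DIGITS = "0123456789"

def new_audio_challenge(length=4, rng=random):
    code = "".join(rng.choice(DIGITS) for _ in range(length))
    prompt = "Please enter the digits " + " ".join(code) + " to be connected."
    return code, prompt        # the prompt would be spoken via a TTS engine (assumed)

def screen_call(play_prompt, collect_dtmf, max_attempts=2):
    # play_prompt/collect_dtmf are placeholders for the VoIP platform's IVR API.
    for _ in range(max_attempts):
        code, prompt = new_audio_challenge()
        play_prompt(prompt)
        if collect_dtmf(num_digits=len(code)) == code:
            return True        # caller solved the challenge: likely human
    return False               # challenge failed: treat the call as probable SPIT
```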
4. Uma abordagem para combate da fraude de clique baseada em CAPTCHAs clicáveis / An Approach to Combating Click Fraud Based on Clickable CAPTCHAs. Costa, Rodrigo Alves. 4 March 2016.
In the current challenging climate of the global economy, advertisers and their advertising agencies are continuously searching for business opportunities that let them significantly increase the exposure of their brands and products at ever lower costs. Marketing resources have been gradually redirected to campaigns such as Pay Per Click (PPC), in which payment is tied to some potentially lucrative action for the advertiser, such as a user clicking on an ad. Within this market structure, a fraudster may appear as someone who seeks to increase the profits of an ad publisher by merely clicking on the ads displayed on the publisher's site without any real intention to buy the product, or as a competitor clicking on the ads of a particular organization in order to generate undue costs. This entire process is exponentially more costly for the victims if it is carried out in an automated manner through scripts and bots, so combating, or even preventing, click fraud is of fundamental importance to the continuity of online advertising, as more conservative advertisers end up avoiding this business structure because of their risk-averse profile. To approach the ideal world, where click fraud either does not exist or has minimal impact on companies' revenues, it is necessary to develop computational techniques and security solutions that support the growth of this market. In this sense, this research aims to produce contributions and develop solutions for online ads, focusing on the PPC business, including the provision of academic material in Portuguese about the mechanics, history and evolution of online advertising, as well as its frauds, legal cases and forms of detection, and the development of an approach to combating click fraud that combines classic forms of detection with an innovative approach to prevention based on clickable CAPTCHAs, something found neither in the literature nor in the solutions currently available in the market. The developed prototype introduces Google NoCAPTCHA reCAPTCHA as a way to prevent the automation of click fraud: when clicking on an ad, the user must respond to a challenge and, once identified as human, may proceed. An authentication aspect ensures continuous use of the system without the need to respond to additional challenges, through temporary coupons represented as cookies. This characterizes a paradigm shift in the fight against click fraud, from reactive detection to proactive prevention, since traditional approaches identified fraud through heuristics that performed historical analysis of clicks after the fraud had already occurred. Indeed, this work contributes to the acquisition of knowledge about the business through an extensive study of online ads, significantly supporting the identification of business and system requirements as well as their current market provision. As a benefit, this study made it possible to propose a security approach for online advertising networks, inspired by legal cases and the set of methods used for click fraud.
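The server-side part of such a scheme can be sketched as follows; only the reCAPTCHA siteverify endpoint and its `secret`/`response` parameters are Google's documented interface, while the Flask route, the cookie name, the coupon lifetime and the in-memory coupon store are illustrative assumptions.

```python
import secrets
import time

import requests
from flask import Flask, request, make_response, redirect

app = Flask(__name__)
RECAPTCHA_SECRET = "your-secret-key"   # assumption: injected from configuration
COUPONS = {}                           # coupon token -> expiry timestamp

def token_is_human(token, remote_ip):
    # Verify the client-side reCAPTCHA token with Google's siteverify API.
    resp = requests.post("https://www.google.com/recaptcha/api/siteverify",
                         data={"secret": RECAPTCHA_SECRET,
                               "response": token,
                               "remoteip": remote_ip},
                         timeout=5)
    return resp.json().get("success", False)

@app.route("/ad-click", methods=["POST"])
def ad_click():
    # A still-valid coupon means this visitor already passed a challenge recently.
    coupon = request.cookies.get("ad_coupon")
    if coupon in COUPONS and COUPONS[coupon] > time.time():
        return redirect(request.form["target_url"])   # target_url validation omitted
    if not token_is_human(request.form.get("g-recaptcha-response", ""),
                          request.remote_addr):
        return "Click not counted: challenge failed.", 403
    coupon = secrets.token_urlsafe(16)
    COUPONS[coupon] = time.time() + 600               # assumed 10-minute coupon
    resp = make_response(redirect(request.form["target_url"]))
    resp.set_cookie("ad_coupon", coupon, max_age=600, httponly=True)
    return resp
```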
5. CNN Model for Recognition of Text-Based CAPTCHAs and Analysis of Learning-Based Algorithms' Vulnerabilities to Visual Distortion. Amiri Golilarz, Noorbakhsh. 1 May 2023.
Due to rapid progress in deep learning and neural networks, many state-of-the-art approaches have been developed in these fields, which has also enabled various learning-based attacks that leave websites and portals vulnerable. Such attacks decrease the security of websites and can result in the release of sensitive personal information, so preserving website security is now one of the most challenging tasks. CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a kind of test deployed on many websites to distinguish humans from robots and protect those sites from automated attacks. In this dissertation, we propose a CNN-based approach to attack and break text-based CAPTCHAs. The proposed method is compared with several state-of-the-art approaches in terms of recognition accuracy (RA); based on the results, it can break and recognize CAPTCHAs with high accuracy. Additionally, to examine how such CAPTCHAs can be made harder to break, we applied five types of distortion and measured the recognition accuracy in the presence of each. The results indicate that adversarial noise can make CAPTCHAs much more difficult to break, and the comparison with state-of-the-art approaches can help CAPTCHA developers take these noises into account in their designs. The dissertation also presents a hybrid CNN-SVM model for solving text-based CAPTCHAs. The method consists of four main steps: segmentation, feature extraction, feature selection, and recognition. For segmentation we use histogram analysis and k-means clustering; for feature extraction we develop a new CNN structure; the extracted features are passed through the mRMR algorithm to select the most effective ones; and the selected features are fed into an SVM for classification and recognition. Comparisons with several state-of-the-art methods show the superiority of the developed approach. Overall, the dissertation presents deep learning-based methods that break text-based CAPTCHAs with high accuracy in a short time. Peak Signal-to-Noise Ratio (PSNR), ROC, accuracy, sensitivity, specificity, and precision are used to evaluate and compare the performance of the different methods, and the results indicate the superiority of the developed methods.
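For orientation only, a small multi-head CNN for fixed-length text CAPTCHAs can be sketched in Keras as below; the layer sizes, the five-character length and the 36-symbol alphabet are assumptions for the example, not the dissertation's actual architecture.

```python
from tensorflow.keras import layers, Model

NUM_CHARS = 5                 # assumed CAPTCHA length
CHARSET_SIZE = 36             # assumed alphabet: digits 0-9 plus a-z
INPUT_SHAPE = (50, 200, 1)    # assumed grayscale input (height, width, channels)

def build_captcha_cnn():
    inp = layers.Input(shape=INPUT_SHAPE)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(128, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    # One softmax head per character position; segmentation is handled implicitly.
    outputs = [layers.Dense(CHARSET_SIZE, activation="softmax", name=f"char_{i}")(x)
               for i in range(NUM_CHARS)]
    model = Model(inp, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_captcha_cnn()
model.summary()
```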
6. Image understanding for automatic human and machine separation. Romero Macias, Cristina. January 2013.
The research presented in this thesis aims to extend the capabilities of human interaction proofs in order to improve security in web applications and services. The research focuses on developing a more robust and efficient Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) to increase the gap between human recognition and machine recognition. Two main novel approaches are presented, each targeting a different area of human and machine recognition: a character recognition test and an image recognition test. Along with the novel approaches, a categorisation of the available CAPTCHA methods is also introduced. The character recognition CAPTCHA is based on creating depth perception by using shadows to represent characters. The characters are formed by the imaginary shadows produced by a light source, building on the gestalt principle that human beings can perceive whole forms instead of just a collection of simple lines and curves. This approach was developed in two stages: first with two-dimensional characters, and then with three-dimensional character models. The image recognition CAPTCHA is based on turning faces into cartoons. The faces used belong to people in the entertainment business, politicians, and sportsmen. The principal basis of this approach is that face perception is a cognitive process that humans perform easily and with a high rate of success. The process uses face morphing techniques to distort the faces into cartoons, making the resulting image more robust against machine recognition. Exhaustive tests on both approaches using OCR software, SIFT image recognition, and face recognition software show an improvement in the human recognition rate, whilst preventing robots from breaking through the tests.
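To make the shadow idea concrete, a toy Pillow sketch (my own approximation, not the thesis code) could render only a sheared, blurred "shadow" of a glyph while discarding the glyph itself; the font, offsets and shear factor are arbitrary choices for the example.

```python
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def shadow_char(ch, size=(200, 120), shear=0.6):
    # Draw the glyph on its own greyscale layer.
    glyph = Image.new("L", size, 0)
    draw = ImageDraw.Draw(glyph)
    font = ImageFont.load_default()   # assumption: a large TTF via ImageFont.truetype() in practice
    draw.text((70, 40), ch, fill=255, font=font)
    # Shear and blur the glyph to fake a shadow cast by an off-image light source.
    shadow = glyph.transform(size, Image.AFFINE, (1, shear, -40, 0, 1, 0))
    shadow = shadow.filter(ImageFilter.GaussianBlur(1))
    # Composite only the shadow onto a white background; the crisp glyph layer is
    # discarded, so the viewer reconstructs the character from its shadow alone.
    out = Image.new("RGB", size, "white")
    out.paste((90, 90, 90), (0, 0), shadow)
    return out

shadow_char("R").save("shadow_captcha.png")
```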
7. On Random Field CAPTCHA Generation. Newton, Fraser. Unknown date.
No description available.
8. BOT management through CAPTCHA: A study of which factors make for the most usable, effective and unobtrusive CAPTCHA implementations / BOT-hantering med robotfilter: En studie av vilka faktorer som bidrar mest till användarvänlighet, effektivitet och diskrethet hos implementationer av robotfilter. Norberg, Edward; Giscombe Schmidt, Adam. January 2022.
CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, is a system that can be implemented in a software application to differentiate between bot users and human users: the system presents a challenge to the application's users that is easy for humans to solve but difficult for bots. The idea is that a bot should not be able to access parts of the application that require solving a CAPTCHA. With this arrangement, developers can ensure, to the best of their ability, that the only active users of their applications are real humans. Modern bots that target these applications, however, are designed to solve, and thus bypass, these CAPTCHA challenges. This has, in turn, led application owners to implement ever more complex CAPTCHA solutions in an attempt to make them unbeatable by bots. These more complex CAPTCHA solutions have the unfortunate side effect of also being more difficult for humans to solve, which is the trade-off between usability/obtrusiveness and effectiveness in these CAPTCHA systems. The goal of this study is to map out which factors contribute the most to the effectiveness and usability of existing CAPTCHA implementations, in order to provide an understanding of which characteristics are most beneficial to include in a CAPTCHA implementation with regard to effectiveness, usability and obtrusiveness.
9. The users' perspective and preference on three user interface website design patterns and their usability. Dimov, Ivan. January 2016.
This study is qualitative and interpretive in nature. It examines how six people aged 23-32, with decent experience in using the Web, perceive the usability of three user interface website design patterns. These patterns are the 'hamburger' icon (an icon used primarily in mobile websites and apps that reveals a hidden navigation when clicked), CAPTCHAs (a task users must complete to continue browsing a webpage, intended to block automated software operating on the webpage), and returning to the homepage. The study looks for the characteristics users want to see in these three design patterns and in the actions they represent. The participants were reached through interviews and observations. The research finds that although experienced Internet users consider the user interface elements relatively usable, some usability factors in the chosen design elements could still be improved, and it pinpoints what users would want to see changed, the actual changes they want, and the problems they encounter with the current status of the three design patterns. More notably, the research finds that a "Homepage" button would be more usable than the "Home" button that is the de facto standard at this moment, and it shows that the 'hamburger' icon is usable enough among experienced users, contradicting research indicating that 71 out of 76 users fail to use the icon (Fichter and Wisniewski, 2016), probably because of the participants' experience with technology; other, preferable alternatives to the 'hamburger' icon emerged from the participants and are in line with the current literature. CAPTCHAs are confirmed as a 'nuisance' (Pogue, 2012), and a need emerges for CAPTCHAs that are quick to solve, which is what shapes the participants' perception of usability.
10. Using Novel Image-based Interactional Proofs and Source Randomization for Prevention of Web Bots. Shardul Vikram. December 2011.
This work presents our efforts to prevent web bots from illegitimately accessing web resources. As the first technique, we present SEMAGE (SEmantically MAtching imaGEs), a new image-based CAPTCHA that capitalizes on the human ability to define and comprehend image content and to establish semantic relationships between images. As the second technique, we present NOID, a "NOn-Intrusive Web Bot Defense system" that creates a three-tiered defense against web automation programs, or web bots. NOID is a server-side technique that prevents web bots from accessing web resources by inherently hiding the HTML elements of interest through randomization and obfuscation in the HTML responses.
A SEMAGE challenge asks a user to select semantically related images from a given image set. SEMAGE has a two-factor design: to pass a challenge, the user must figure out the content of each image and then understand and identify the semantic relationship between a subset of them. Most current state-of-the-art image-based systems, such as Asirra, only require the user to solve the first level, i.e., image recognition. Utilizing the semantic correlation between images to create more secure and user-friendly challenges is what makes SEMAGE novel. SEMAGE does not suffer from the limitations of traditional image-based approaches, such as a lack of customization and adaptability, and unlike current text-based systems it is also very user-friendly, with a high fun factor. We conduct a first-of-its-kind large-scale user study involving 174 users to gauge and compare the accuracy and usability of SEMAGE with existing state-of-the-art CAPTCHA systems like reCAPTCHA (text-based) and Asirra (image-based). The user study further reinforces our points and shows that users achieve high accuracy with our system and consider it fun and easy.
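The challenge-generation logic behind such a two-factor test can be sketched as follows; the category names, file names and grid sizes are placeholders, not SEMAGE's real data set or parameters.

```python
import random

# Hypothetical image database keyed by semantic category (file names are placeholders).
IMAGE_DB = {
    "beach":   ["sand.jpg", "waves.jpg", "seashell.jpg", "palm.jpg"],
    "kitchen": ["pan.jpg", "whisk.jpg", "oven.jpg", "spatula.jpg"],
    "winter":  ["snowman.jpg", "sled.jpg", "mittens.jpg", "icicle.jpg"],
}

def make_challenge(num_related=3, num_decoys=5, rng=random):
    target = rng.choice(list(IMAGE_DB))
    related = rng.sample(IMAGE_DB[target], num_related)
    decoy_pool = [img for cat, imgs in IMAGE_DB.items() if cat != target for img in imgs]
    grid = related + rng.sample(decoy_pool, num_decoys)
    rng.shuffle(grid)
    # Only `grid` is shown to the user; the expected answer stays on the server.
    return {"grid": grid, "answer": set(related)}

def grade(challenge, selected_images):
    # Pass only if the user picked exactly the semantically related subset.
    return set(selected_images) == challenge["answer"]

challenge = make_challenge()
print(challenge["grid"])
print(grade(challenge, challenge["answer"]))   # True for a correct selection
```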
We also design NOID, a novel server-side and non-intrusive web bot defense system, to prevent web bots from accessing web resources by inherently hiding and randomizing HTML elements. Specifically, to prevent web bots from uniquely identifying HTML elements for later automation, NOID randomizes the name/id parameter values of essential HTML elements such as "input textbox", "textarea" and "submit button" in each HTTP form page. In addition, to prevent powerful web bots from identifying special user-action HTML elements by analyzing the content of their accompanying "label text" HTML tags, we enhance NOID with a component, Label Concealer, which hides label indicators by replacing "label text" HTML tags with randomized images. To further prevent even more powerful web bots from identifying HTML elements by recognizing their relative positions or surrounding elements in the web pages, we enhance NOID with another component, Element Trapper, which obfuscates important HTML elements' surroundings by adding decoy elements without compromising usability.
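A rough server-side sketch of the randomization idea (my own approximation using BeautifulSoup, not NOID's implementation) is shown below; the alias format, the session store and the hidden decoy field are assumptions for the example.

```python
import secrets
from bs4 import BeautifulSoup

def randomize_form(html, session_store):
    # Rewrite form field names/ids to per-session random aliases so a bot cannot
    # hard-code them, and remember the mapping server-side.
    soup = BeautifulSoup(html, "html.parser")
    mapping = {}
    for tag in soup.find_all(["input", "textarea", "select"]):
        real_name = tag.get("name")
        if not real_name:
            continue
        alias = "f_" + secrets.token_hex(8)
        mapping[alias] = real_name
        tag["name"] = alias
        if tag.get("id"):
            tag["id"] = alias
    # Add an invisible decoy field; an automated submitter that fills every field
    # it finds gives itself away (a simplified stand-in for the decoy-element idea).
    if soup.form is not None:
        decoy = soup.new_tag("input", attrs={"name": "f_" + secrets.token_hex(8),
                                             "type": "text",
                                             "style": "display:none"})
        soup.form.append(decoy)
    session_store["field_map"] = mapping
    return str(soup)

def restore_submission(form_data, session_store):
    # Translate submitted aliases back; unknown aliases (e.g. a filled decoy) are dropped.
    mapping = session_store.get("field_map", {})
    return {mapping[k]: v for k, v in form_data.items() if k in mapping}
```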
We evaluate NOID against five powerful state-of-the-art web bots, including XRumer, SENuke, Magic Submitter, Comment Blaster, and UWCS, on several popular open-source web platforms, including phpBB, Simple Machines Forum (SMF), and WordPress. According to our evaluation, NOID prevents all these web bots from automatically sending spam on these platforms with reasonable overhead.