  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Συγκριτική μελέτη και πειραματική επιβεβαίωση γνωσιακών αρχιτεκτονικών

Χάλκου, Χαρά 09 January 2012 (has links)
Ορισμένοι ερευνητές σήμερα αναζητούν τρόπους αξιοποίησης των υπολογιστών και του διαδικτύου για την υποστήριξη σύγχρονων συνεργατικών δραστηριοτήτων, δηλαδή συνεργασία μέσω υπολογιστή δύο ή περισσοτέρων χρηστών στον ίδιο χρόνο. Η σύγχρονη συνεργατική δραστηριότητα υποστηρίζεται συνήθως από εργαλεία που αποτελούνται από έναν κοινόχρηστο χώρο εργασίας, στον οποίο δύο ή περισσότεροι χρήστες εργάζονται, καθώς βρίσκονται σε απομακρυσμένα σημεία και ένα χώρο, στον οποίο οι χρήστες μπορούν να συνομιλούν. Η αξιολόγηση της αποδοτικότητας τέτοιων εργαλείων μπορεί να είναι δαπανηρή και χρονοβόρα διαδικασία, αφού απαιτούνται πολλοί πόροι (χρήστες, πραγματικές συνθήκες εργασίας κλπ) για την διεξαγωγή των κατάλληλων μελετών. Αντί αυτών έχουν αναπτυχθεί μέθοδοι που επιτρέπουν την αξιολόγηση των συνεργατικών εργαλείων χρησιμοποιώντας μοντέλα ανθρώπινου επεξεργαστή. Μοντέλα του ανθρώπινου επεξεργαστή HIP (Human Information Processing model) χρησιμοποιούνται για να προσεγγίσουν την ανθρώπινη συμπεριφορά ατόμων που δουλεύουν σε κοινό χώρο εργασίας. Στα πλαίσια της παρούσας διπλωματικής γίνεται χρήση μοντέλων ανθρώπινου επεξεργαστή για την αξιολόγηση σύγχρονης συνεργασίας που υλοποιείται στον κοινό χώρο εργασίας μίας εφαρμογής υποστήριξης συνεργασίας (Synergo). Για τις ποσοτικές προβλέψεις την πρώτη φορά θα χρησιμοποιηθεί το μοντέλο πληκτρολογήσεων KLM (Keystroke Level Model KLM), το οποίο αναπαριστά τον χρήστη σαν να έχει νοητικούς, κινητικούς και γνωσιακούς επεξεργαστές και την δεύτερη, το Cog Tool, το οποίο χρησιμοποιεί μια γνωσιακή αρχιτεκτονική που ονομάζεται ACT-R για να προσομοιώσει την κινητική, αισθητηριακή και γνωστική συμπεριφορά των ανθρώπων που αλληλεπιδρούν με το πρωτότυπο για να ολοκληρώσουν τις εργασίες τους (tasks), τις οποίες έχει ορίσει ο σχεδιαστής της διεπιφάνειας χρήστη UI (User Interface). 
Μολονότι το KLM και το Cog Tool παράγουν ποσοτικές προβλέψεις για έναν χρήστη, στην διπλωματική που θα ακολουθήσει θα μελετηθεί ο τρόπος, με τον οποίο αυτά τα εργαλεία μπορούν να προσομοιώσουν την συνεργασία δύο χρηστών. Αφού συγκριθούν τα μοντέλα χρησιμοποιώντας αρχικά το KLM και μετέπειτα το Cog Tool, παρουσιάζονται τα συμπεράσματα που προκύπτουν, ενώ στη συνέχεια, προτείνονται ορισμένες μελλοντικές προεκτάσεις. / Nowadays some researchers are looking for ways to exploit computers and the Internet to support synchronous collaborative activities, that is, cooperation between two or more users through a computer at the same time. Synchronous collaborative activity is usually supported by tools that consist of a shared workspace, in which two or more users work while located in different places, and a space in which the users can chat. The evaluation of such tools can be quite expensive and time-consuming because of the resources required (users, real working conditions, etc.) to carry out the appropriate studies. Instead, methods have been developed that allow the evaluation of collaborative tools using Human Information Processing (HIP) models, which are used to approximate the behavior of users working in a shared workspace. Within this thesis, such models are used to evaluate synchronous collaboration in the shared workspace of a collaboration-support application (Synergo). For the quantitative predictions, first the Keystroke Level Model (KLM) is used, which represents the user as having motor and cognitive processors, and subsequently Cog Tool, which uses a cognitive architecture called ACT-R to simulate the motor, sensory and cognitive behavior of people who interact with the prototype to complete their tasks, as specified by the designer of the user interface (UI).
Although KLM and Cog Tool produce quantitative predictions for a single user, this thesis studies how these two tools can be used to simulate two-user cooperation. After the models are compared, first using KLM and then Cog Tool, the resulting conclusions are presented and some future extensions are suggested.
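The KLM mentioned in this record predicts task execution time by summing the durations of a sequence of primitive operators. The sketch below uses the commonly cited default operator times; the sample task and its operator string are hypothetical, not taken from the thesis.

```python
# Keystroke-Level Model (KLM) sketch: predicted task time is the sum of
# primitive operator durations. The values below are the commonly cited
# defaults (seconds); real analyses calibrate K to the user's typing speed.
KLM_OPERATORS = {
    "K": 0.28,  # keystroke or button press (average typist)
    "P": 1.10,  # point with the mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_predict(ops: str) -> float:
    """Predict execution time (s) for an operator sequence, e.g. 'MPK'."""
    return sum(KLM_OPERATORS[op] for op in ops)

# Hypothetical shared-workspace action: think, point at the canvas, click,
# then type a three-letter label.
print(round(klm_predict("MPK" + "KKK"), 2))  # → 3.57
```

A two-user simulation of the kind the thesis describes would run one such operator sequence per collaborator and combine the timelines.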
22

Identity Verification using Keyboard Statistics. / Identitetsverifiering med användning av tangentbordsstatistik.

Mroczkowski, Piotr January 2004 (has links)
In the age of a networking revolution, when the Internet has changed not only the way we see computing, but also the whole society, we constantly face new challenges in the area of user verification. It is often the case that the login-id password pair does not provide a sufficient level of security. Other, more sophisticated techniques are used: one-time passwords, smart cards or biometric identity verification. The biometric approach is considered to be one of the most secure ways of authentication. On the other hand, many biometric methods require additional hardware in order to sample the corresponding biometric feature, which increases the costs and the complexity of implementation. There is however one biometric technique which does not demand any additional hardware – user identification based on keyboard statistics. This thesis is focused on this way of authentication. The keyboard statistics approach is based on the user’s unique typing rhythm. Not only what the user types, but also how she/he types is important. This report describes the statistical analysis of typing samples which were collected from 20 volunteers, as well as the implementation and testing of the identity verification system, which uses the characteristics examined in the experimental stage.
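The "typing rhythm" this record relies on is conventionally captured as hold (dwell) times per key and flight latencies between keys. A minimal sketch, assuming a hypothetical `(key, press_ms, release_ms)` event format rather than the thesis's actual data layout:

```python
# Typing-rhythm features of the kind described above: per-key hold (dwell)
# times and between-key flight latencies, computed from
# (key, press_ms, release_ms) events. The event format is an assumption.
def rhythm_features(events):
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]  # next press minus this release
              for i in range(len(events) - 1)]
    return dwell, flight

# One typing of a hypothetical password "abc":
sample = [("a", 0, 95), ("b", 160, 250), ("c", 310, 420)]
dwell, flight = rhythm_features(sample)
print(dwell)   # → [95, 90, 110]
print(flight)  # → [65, 60]
```

Statistics over many such samples (means, variances per key pair) form the per-user profile against which a login attempt is compared.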
23

Identitetsverifiering via tangentbordsstatistik / Identity verification through keyboard statistics

Demir, Georgis January 2002 (has links)
One important issue faced by companies is securing their information and resources from intrusion. For access to a resource, almost every system takes the approach of assigning a unique username and password to each legitimate user. This approach has a major drawback: if an intruder obtains this information, he can become a big threat to the company and its resources. To strengthen computer security there are several biometric methods for identity verification based on the human body's unique characteristics and behavior, including fingerprints, face recognition, retina scans and signatures. However, most of these techniques are expensive and require the installation of additional hardware. This thesis focuses on keystroke dynamics as an identity verifier, which is based on the user's unique habitual typing rhythm. This technique considers not just what the user types but also how he types it. The method does not require additional hardware to be installed and is therefore rather inexpensive to implement. This thesis discusses how identity verification through keystroke characteristics can be done, what has been done in this area, and the advantages and disadvantages of the technique.
24

Feature learning with deep neural networks for keystroke biometrics : A study of supervised pre-training and autoencoders

Hellström, Erik January 2018 (has links)
Computer security is becoming an increasingly important topic in today's society, with ever increasing connectivity between devices and services. Stolen passwords have the potential to cause severe damage to companies and individuals alike, leading to the requirement that a security system must be able to detect and prevent fraudulent login. Keystroke biometrics is the study of typing behavior in order to identify the typist, using features extracted during typing. The features traditionally used in keystroke biometrics are linear combinations of the timestamps of the keystrokes. This work focuses on feature learning methods and is based on the Carnegie Mellon keystroke data set. The aim is to investigate whether other feature extraction methods can enable improved classification of users. Two methods are employed to extract latent features in the data: pre-training of an artificial neural network classifier, and an autoencoder. Several tests are devised to assess the impact of pre-training and compare the results with a similar network without pre-training. The effect of feature extraction with an autoencoder is investigated by training a classifier on the autoencoder features in combination with the conventional features. Using pre-training, I find that the classification accuracy does not improve when an adaptive learning rate optimizer is used. However, when a stochastic gradient descent optimizer is used, the accuracy improves by about 8%. Used in conjunction with the conventional features, the features extracted with an autoencoder improve the accuracy of the classifier by about 2%. However, a classifier based on the autoencoder features alone is not better than a classifier based on conventional features.
25

[en] A PSYCHOLINGUISTIC INVESTIGATION OF WRITING IN L1 AND L2: A STUDY WITH ENGLISH TEACHERS / [pt] UMA INVESTIGAÇÃO PSICOLINGUÍSTICA DA ESCRITA EM L1 E L2: UM ESTUDO COM PROFESSORES DE INGLÊS

RACHEL DA COSTA MURICY 23 November 2023 (has links)
[pt] A presente dissertação aborda a escrita bilíngue – Português como L1 e Inglês como L2, a partir de uma perspectiva cognitiva, com vistas a buscar caracterizar, de forma integrada, o processo e o produto da escrita, e possíveis correlações entre desempenho em escrita e aspectos atencionais. Participam da pesquisa 15 professores de língua inglesa (10 mulheres e 5 homens), idade média de 43,5 anos (DP 13,25), nativos do Português brasileiro. No estudo, foram empregadas ferramentas computacionais que possibilitam o registro das ações de escrita no curso da produção textual de textos argumentativos (programa Inputlog), a análise automática de características linguísticas do texto final (Nilc-Metrix (L1) e Coh-Metrix (L2)) e a verificação de padrões de conectividade no texto final, por meio de atributos de grafos (SpeechGraphs). Adotou-se também o teste ANT - Attention Network Test com o intuito de ampliar a reflexão a respeito de fatores cognitivos e possíveis influências na produção textual. Na análise do processo de escrita, foram examinados tanto padrões de pausa como operações de escrita ativa e ações de revisão (inserções e apagamentos). Na análise do produto, consideraram-se parâmetros ligados a aspectos vocabulares, semânticos, sintáticos e índices de legibilidade, e informações sobre recorrência lexical e conectividade entre palavras. No que tange ao processo, os resultados do estudo revelaram diferenças entre as duas línguas, com valores mais altos associados à escrita em Inglês, para (i) pausas no interior de palavras - possivelmente sinalizando uma demanda de ordem ortográfica - e (ii) percentual de escrita ininterrupta, indicando uma escrita com menos interrupções, com menor número de alterações/revisões. O estudo de correlação revelou que os participantes apresentam o mesmo perfil de escrita na L1 e na L2. 
Na análise do produto por meio do Coh-Metrix (Inglês) e Nilc-Metrix (Português), verificou-se, por meio de índice de legibilidade, que os textos apresentam complexidade moderada nas duas línguas. A despeito de diferenças em como as métricas são definidas em cada Programa, os resultados sugerem que os textos em Português apresentam graus de complexidade que se correlacionam com aspectos sintáticos (como número de palavras antes do verbo principal e índice de Flesch) e semânticos (grau de concretude). Na L2, destaca-se que a diversidade lexical permanece sendo um dos indicadores mais confiáveis de proficiência e graus de complexidade, correlacionando-se com comportamentos de pausas (antes de palavras) e revisão (normal production). Em relação ao SpeechGraphs, foram observadas diferenças significativas entre os textos na L1 e na L2 para quase todos os atributos de grafos analisados, o que é interpretado como um reflexo da forma como o programa lida com características morfológicas das duas línguas. Não foram observadas correlações entre o comportamento dos falantes na L1 e na L2. Foram ainda conduzidos estudos de correlação entre os dados do Inputlog e os das ferramentas Coh-Metrix e Nilc-Metrix e entre estas e os dados do SpeechGraphs. Nos dois estudos, observou-se uma correspondência entre parâmetros indicativos de complexidade das ferramentas utilizadas, sugerindo um caminho relevante de exploração de análise integrada processo-produto para trabalhos futuros. Em relação ao estudo de correlação entre dados do Inputlog e do ANT, destacaram-se as correlações entre acurácia e tempo de reação nas condições experimentais e os percentuais de apagamentos. Os presentes achados abrem caminho e trazem contribuições significativas para o campo da psicolinguística no âmbito da pesquisa entre L1 e L2. 
/ [en] This dissertation addresses bilingual writing – Portuguese as L1 and English as L2 – from a cognitive perspective, aiming to characterize both the writing process and the final product in an integrated manner and to explore correlations between writing performance and attentional aspects. The research involves 15 English language teachers (10 women and 5 men) with an average age of 43.5 years (SD 13.25), native speakers of Brazilian Portuguese. The study utilized computational tools to record writing actions during the production of argumentative texts (the Inputlog program), to automatically analyze linguistic features of the final text (the Nilc-Metrix program for Portuguese and Coh-Metrix for English), and to verify connectivity patterns in the final text using graph attributes (the SpeechGraphs program). The Attention Network Test (ANT) was also adopted. In the analysis of the writing process, patterns of pauses, active writing operations, and revision actions (insertions and deletions) were examined. In the product analysis, parameters related to vocabulary, semantics, syntax and readability indices, as well as information on lexical recurrence and word connectivity, were considered. Regarding the writing process, the results of the study revealed differences between the two languages, with higher values associated with writing in English, particularly in terms of (i) pauses within words, indicating orthographic demands, and (ii) the percentage of uninterrupted writing, suggesting fewer interruptions and fewer alterations/revisions. Correlation analysis indicated that participants exhibited a similar writing profile in both L1 and L2. In the product analysis using Coh-Metrix (English) and Nilc-Metrix (Portuguese), it was found, through readability indices, that the texts exhibited moderate complexity in both languages. 
Despite differences in how the metrics are defined in each program, the results suggest that texts in Portuguese show a higher level of complexity when considering syntactic aspects (such as the number of words before the main verb) and semantic aspects (degree of concreteness). For L2, lexical diversity remains one of the most reliable proficiency indicators, correlating with pause behavior (before words) and revision (normal production). Regarding SpeechGraphs, significant differences were observed between texts in L1 and L2 for almost all analyzed graph attributes, reflecting how the program deals with the morphological characteristics of the two languages. No correlations were observed between the behavior of speakers in L1 and L2. Additionally, correlation studies were conducted between Inputlog data and the Coh-Metrix and Nilc-Metrix tools, as well as between these tools and SpeechGraphs data. In both studies, a correspondence was observed between parameters indicative of complexity in the tools used, suggesting a relevant path for exploring integrated process-product analysis in future research. Regarding the correlation study between Inputlog and ANT data, notable correlations emerged between accuracy and reaction time in the experimental conditions and the percentages of deletions. These findings pave the way for significant contributions to the field of psycholinguistics in the context of research between L1 and L2.
26

Įvesties duomenų analizė tapatybės vagysčių prevencijai / Keystroke analysis for identity theft prevention

Ruškys, Vaidas 17 June 2010 (has links)
Šiame darbe aptariamos vartotojų internete tykančios grėsmės, susijusios su tapatybės vagystėmis. Aptariamos slaptažodžių žvejybos bei MITM atakos ir jų veikimo principai. Problemos sprendimui siūloma naudoti vieną iš biometrijos dalių - klavišų paspaudimo analizę. Pagrindinis darbo tikslas - atlikus tyrimą nustatyti, ar galima naudojant klavišų paspaudimo analizės metodą sėkmingai sumažinti tapatybės vagystės tikimybę. Pateikiami tyrimo rezultatai naudojant skirtingai veikiančias programas, naudojančias klavišų paspaudimo analizės metodą. Analizuojama, kaip klavišų paspaudimo analizės panaudojimo galimybė kinta keičiant tam tikras analizės sudedamąsias dalis. / This paper analyzes threats related to identity theft that users face on the internet. Phishing and man-in-the-middle (MITM) attacks and their principles of operation are described. To address this problem, the use of one branch of biometrics, keystroke analysis, is suggested. The goal of this paper is to determine, through experiments, whether the keystroke analysis method can successfully reduce the probability of identity theft. Results obtained with differently designed programs that use keystroke analysis are presented, and it is analyzed how the applicability of keystroke analysis changes when certain components of the analysis are varied.
27

Autenticação biométrica via teclado numérico baseada na dinâmica da digitação : experimentos e resultados / Biometric authentication through numerical keyboard based on keystroke dynamics : experiments and results

Costa, Carlos Roberto do Nascimento 26 January 2006 (has links)
Orientadores: João Baptista Tadanobu Yabu-uti, Lee Luan Ling / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-06T06:55:05Z (GMT). No. of bitstreams: 1 Costa_CarlosRobertodoNascimento_M.pdf: 1033726 bytes, checksum: 1f87381d74a3e8cd3f4aec2d731d2044 (MD5) Previous issue date: 2006 / Resumo: Este trabalho apresenta uma nova abordagem para autenticação biométrica de usuários baseada em seu ritmo de digitação em teclados numéricos. A metodologia proposta é de baixo custo, não intrusiva e pode ser aplicada tanto a um mecanismo de login em controle de acesso a áreas restritas como na melhoria do nível de segurança em transações bancárias. Inicialmente, o usuário indica a conta a ser acessada por meio de uma cadeia de caracteres digitada que é monitorada em tempo real pelo sistema. Simultaneamente, são capturados os tempos de pressionamento e soltura das teclas. Quatro características são extraídas do sinal: Código ASCII (American Standard Code for Information Interchange) da tecla, duas latências e uma duração associada com a tecla. Alguns experimentos foram feitos usando amostras reais de usuários autênticos e impostores e um classificador de padrões baseado na estimação da máxima verossimilhança. Alguns aspectos experimentais foram analisados para verificar os seus impactos nos resultados. Estes aspectos são as características extraídas do sinal, a informação alvo, o conjunto de treinamento usado na obtenção dos modelos dos usuários, a precisão do tempo de captura das entradas, o mecanismo de adaptação do modelo e, finalmente, a técnica de obtenção do limiar ótimo para cada usuário. 
Esta nova abordagem traz melhorias ao processo de autenticação pois permite que a senha não seja mais segredo, assim como oferece uma opção para autenticação biométrica em dispositivos móveis, como celulares / Abstract: This work presents a new approach for biometric user authentication based on keystroke dynamics on numerical keyboards. The proposed methodology is low-cost, unintrusive, and could be applied in a login mechanism for access control to restricted areas and/or to improve the security level of Automatic Teller Machines (ATMs). Initially, the user indicates the account to be accessed by typing the target string, which is monitored in real time by the system. Simultaneously, the key-press and key-release times are captured. Four features are extracted from this input: the key's ASCII code, two associated latencies, and the key duration. Experiments with samples from genuine users and impostors were performed using a pattern classification technique based on maximum likelihood estimation. Several experimental aspects were analyzed to verify their impact on the results: the set of features extracted from the signal, the set of training samples used to obtain the models, the timing precision with which the inputs are captured, the adaptation mechanism of the model and, finally, the technique for obtaining the optimal threshold for each user. This new approach improves the authentication process, since it no longer requires the password to be a secret, and it also makes biometric authentication possible on mobile devices such as cell phones / Mestrado / Telecomunicações e Telemática / Mestre em Engenharia Elétrica
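The record above classifies timing feature vectors by maximum likelihood estimation. A minimal sketch of that idea, assuming per-feature independent Gaussians fitted to enrollment samples; the feature values are hypothetical, and the independence assumption is an illustrative simplification, not necessarily the thesis's exact model:

```python
import math

# Maximum-likelihood verification sketch: model each timing feature of a
# user as an independent Gaussian estimated from enrollment samples, then
# score a login attempt by its log-likelihood. All numbers are hypothetical.
def fit_gaussians(samples):
    n = len(samples)
    means = [sum(col) / n for col in zip(*samples)]
    variances = [max(sum((x - m) ** 2 for x in col) / n, 1e-6)
                 for col, m in zip(zip(*samples), means)]
    return means, variances

def log_likelihood(x, model):
    means, variances = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, variances))

enrollment = [[120, 80, 95], [118, 83, 99], [122, 79, 93]]  # genuine user (ms)
model = fit_gaussians(enrollment)
genuine = [119, 81, 96]
impostor = [180, 40, 150]
print(log_likelihood(genuine, model) > log_likelihood(impostor, model))  # → True
```

A decision threshold on the log-likelihood, tuned per user as the abstract describes, turns this score into an accept/reject verdict.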
28

Password protection by analyzed keystrokes : Using Artificial Intelligence to find the impostor

Danilovic, Robert, Svensson, Måns January 2021 (has links)
A literature review was done to find that there are still issues with writing passwords. From the information gathered, it is stated that using keystroke characteristics could have the potential to add another layer of security to compromised user accounts. The world has become more and more connected and the amount of people who store personal information online or on their phones has steadily increased. In this thesis, a solution is proposed and evaluated to make authentication safer and less intrusive. Less intrusive in this case means that it does not require cooperation from the user, it just needs to capture data from the user in the background. As authentication methods such as fingerprint scanning and facial recognition are becoming more popular this work is investigating if there are any other biometric features for user authentication.Employing Artificial Intelligence, extra sensor metrics and Machine Learning models with the user's typing characteristics could be used to uniquely identify users. In this context the Neural Network and Support Vector Machine algorithms have been examined, alongside the gyroscope and the touchscreen sensors. To test the proposed method, an application has been built to capture typing characteristics for the models to train on. In this thesis, 10 test subjects were chosen to type a password multiple times so that they would generate the data. After the data was gathered and pre-processed an analysis was conducted and sent to train the Machine Learning models. This work's proposed solution and presented data serve as a proof of concept that there are additional sensors that could be used to authenticate users, namely the gyroscope. Capturing typing characteristics of users, our solution managed to achieve a 97.7% accuracy using Support Vector Machines in authenticating users.
29

Dynamic Template Adjustment in Continuous Keystroke Dynamics

Kulich, Martin January 2015 (has links)
Keystroke dynamics is one of the behavioral biometric characteristics that can be used for continuous user authentication. Since a person's typing style changes over time, the biometric template must be adjusted as well. To the author's knowledge, no study has addressed this problem so far, and this thesis attempts to fill that gap. Using keystroke timing data from 22 volunteers, several classification techniques were tested to determine whether they can be turned into online classifiers that improve without supervision. A marked improvement in impostor detection was observed for a one-class statistical classifier based on the normalized Euclidean distance, by 23.7% on average over the original non-adaptive version, and an improvement was observed for all test sets. The change in the recognition rate of the legitimate user varied, but remained at acceptable values.
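The abstract above combines a normalized-Euclidean-distance one-class classifier with unsupervised template adaptation. A sketch of that combination; the exponential moving-average update rule and all numbers are assumptions for illustration, not the thesis's exact scheme:

```python
import math

# One-class keystroke verifier sketch: score a sample by its Euclidean
# distance to a template of mean timings, normalized per feature by the
# template's standard deviation, and adapt the template with each accepted
# sample. The moving-average update rule is an illustrative assumption.
class AdaptiveTemplate:
    def __init__(self, enrollment, alpha=0.1):
        n = len(enrollment)
        self.mean = [sum(c) / n for c in zip(*enrollment)]
        self.std = [max(math.sqrt(sum((x - m) ** 2 for x in c) / n), 1e-6)
                    for c, m in zip(zip(*enrollment), self.mean)]
        self.alpha = alpha  # adaptation rate

    def distance(self, sample):
        return math.sqrt(sum(((x - m) / s) ** 2
                             for x, m, s in zip(sample, self.mean, self.std)))

    def verify(self, sample, threshold=3.0):
        accepted = self.distance(sample) < threshold
        if accepted:  # unsupervised online adaptation on accepted samples only
            self.mean = [(1 - self.alpha) * m + self.alpha * x
                         for m, x in zip(self.mean, sample)]
        return accepted

tpl = AdaptiveTemplate([[100, 200], [110, 190], [105, 210]])
print(tpl.verify([104, 198]))  # → True  (close to the template, adapts)
print(tpl.verify([300, 50]))   # → False (far from it, no adaptation)
```

Adapting only on accepted samples is what makes the classifier "improve without supervision", at the cost of slow template drift if an impostor is ever accepted.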
30

Identifying the role of remote display Protocol in behavioral biometric systems based on free-text keystroke dynamics, an experiment

Silonosov, Alexandr January 2020 (has links)
The ubiquity and speed of Internet access have led over the past decade to an exponential increase in the use of thin clients and cloud computing, both taking advantage of the ability to provide computing resources remotely. This work investigates the role of the remote display protocol in behavioral biometric systems based on free-text keystroke dynamics. Authentication based on keystroke dynamics is easy to use, cheap, invisible to the user and does not require any additional sensor. In this project I investigate how network characteristics affect the keystroke dynamics pattern in a remote desktop scenario. Objectives: The aim of this project is to investigate the role of the remote display protocol in a behavioral biometric system based on free-text keystroke dynamics, by measuring how network characteristics influence the computation of the keystroke pattern in a Virtual Desktop Infrastructure (VDI). Method: This thesis answers its research questions with the help of a Systematic Literature Review (SLR) and an experiment. The literature review was conducted to gather information about keystroke dynamics analysis, the applied algorithms and their performance, and to clarify the controlled changes of networking performance in a VDI-based scenario. Using the acquired knowledge, a keystroke dynamics pattern algorithm based on the Euclidean distance statistical method was implemented, an experiment was designed, and a series of tests was performed in order to identify the influence of the remote display protocol on the keystroke pattern. Results: Through the SLR, the working structure of keystroke dynamics analysis is identified and illustrated, essential elements are summarized, and a statistical approach based on Euclidean distance is described; a technique to simulate and measure network latency in a VDI scenario is described, including the essential elements and parameters of a VDI testbed. The keystroke analysis algorithm, dataset replication code and VDI testbed are implemented. 
The controlled experiment provided measurements of the metrics of the algorithm and network performance mentioned in the objectives. Conclusions: During experimentation, I found that the timing pattern in the keystroke dynamics data is affected by VDI under normal network conditions by 12% on average. Higher latency standard deviation, jitter and packet loss, as well as remote display protocol overheads, have a significant combined impact on the keystroke pattern. Moreover, I found the maximum delay values that do not affect the keystroke pattern to a larger extent.
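The experiment above measures how remote-display transport distorts keystroke timings. A toy illustration of that measurement: each keystroke timestamp is shifted by a per-event network delay, and the distortion of inter-key latencies is reported as a mean relative change. The delay values are hypothetical, not measured from any real VDI.

```python
# Toy version of the experiment above: apply per-event network delays to
# keystroke timestamps, recompute inter-key latencies, and report the mean
# relative distortion. Delay values are illustrative, not measured.
def latencies(timestamps):
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def mean_distortion(timestamps, delays):
    original = latencies(timestamps)
    delayed = latencies([t + d for t, d in zip(timestamps, delays)])
    return sum(abs(d - o) / o for o, d in zip(original, delayed)) / len(original)

presses = [0, 100, 220, 300]  # local key-press times (ms)
jitter = [5, 20, 8, 30]       # hypothetical per-packet one-way delay (ms)
print(round(mean_distortion(presses, jitter) * 100, 1))  # → 17.5 (mean % change)
```

Note that a constant delay leaves all latencies unchanged; only delay *variation* (jitter) distorts the pattern, which matches the abstract's conclusion that jitter and latency standard deviation matter most.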