81

Creation and statistical evaluation of a new family of chaotic pseudo-random generators

Wang, Qianxue 27 March 2012
In this thesis, a new way to generate pseudorandom numbers is presented. The proposition is to mix two existing generators with discrete chaotic iterations that satisfy Devaney's definition of chaos. A rigorous framework is introduced, in which topological properties of the resulting generator are given, and two practical designs are presented and evaluated. It is shown that the statistical quality of the inputted generators can be greatly improved in this way, thus fulfilling up-to-date standards. Comparisons between these two designs and existing generators are investigated in detail. Among other things, it is established that the second designed technique outperforms the first one, both in terms of performance and speed. In the first part of this manuscript, the iteration function embedded into the chaotic iterations is the vectorial Boolean negation. In the second part, we propose a method using graphs having strongly connected components as a selection criterion. We are thus able to modify the iteration function without degrading the good properties of the associated generator. Simulation results and a basic security analysis are then presented to evaluate the randomness of this new family of pseudorandom generators. Finally, an illustration in the field of information hiding is presented, and the robustness of the obtained data hiding algorithm against attacks is evaluated.
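As a loose illustration of the mixing idea (not the thesis's exact construction; `ci_mix` and all parameters below are invented for the sketch), one input generator can pick which state bit the vectorial Boolean negation flips while the other decides how many flips occur per output:

```python
import random

def ci_mix(gen1, gen2, n_bits=32, max_flips=8, state=0):
    """One chaotic-iterations step, loosely sketched: gen1 picks which
    state bit to negate (vectorial Boolean negation restricted to one
    component), gen2 decides how many negations happen per output."""
    for _ in range(gen2() % max_flips + 1):
        state ^= 1 << (gen1() % n_bits)   # flip one chosen bit
    return state

# Two ordinary PRNGs stand in for the "inputted" generators.
r1, r2 = random.Random(1), random.Random(2)
g1 = lambda: r1.getrandbits(32)
g2 = lambda: r2.getrandbits(32)

x, stream = 0, []
for _ in range(4):
    x = ci_mix(g1, g2, state=x)
    stream.append(x)
print([hex(v) for v in stream])
```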
82

DNS traffic based classifiers for the automatic classification of botnet domains

Stalmans, Etienne Raymond January 2014
Networks of maliciously compromised computers, known as botnets, consisting of thousands of hosts, have emerged as a serious threat to Internet security in recent years. These compromised systems, under the control of an operator, are used to steal data, distribute malware and spam, launch phishing attacks, and mount Distributed Denial-of-Service (DDoS) attacks. The operators of these botnets use Command and Control (C2) servers to communicate with the members of the botnet and send commands. The communication channels between the C2 nodes and endpoints employ numerous detection avoidance mechanisms to prevent the shutdown of the C2 servers. Two prevalent detection avoidance techniques used by current botnets are algorithmically generated domain names and DNS Fast-Flux. The use of these mechanisms can, however, be observed and used to create distinct signatures that in turn can be used to detect DNS domains being used for C2 operation. This report details research conducted into the implementation of three classes of classification techniques that exploit these signatures in order to accurately detect botnet traffic. The techniques described make use of the traffic from DNS query responses created when members of a botnet try to contact the C2 servers. Traffic observation and categorisation is passive from the perspective of the communicating nodes. The first set of classifiers explored employs frequency analysis to detect the algorithmically generated domain names used by botnets. These were found to have a high degree of accuracy with a low false positive rate. The characteristics of Fast-Flux domains are used in the second set of classifiers. It is shown that, using these characteristics, Fast-Flux domains can be accurately identified and differentiated from legitimate domains (such as Content Distribution Networks, which exhibit similar behaviour). The final set of classifiers uses spatial autocorrelation to detect Fast-Flux domains based on the geographic distribution of the botnet C2 servers to which the detected domains resolve. It is shown that botnet C2 servers can be detected solely based on their geographic location. This technique is shown to clearly distinguish between malicious and legitimate domains. The implemented classifiers are lightweight and use existing network traffic to detect botnets, and thus do not require major architectural changes to the network. The performance impact of implementing classification of DNS traffic is examined and shown to be at an acceptable level.
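As a hedged illustration of the first class of classifiers, the sketch below scores a domain by how unusual its character bigrams are relative to a reference set of benign names; the ten-name reference list and the scoring function are placeholders, not the thesis's trained model:

```python
import math
from collections import Counter

# Toy reference corpus of benign labels; a real classifier would train
# on something like a large list of popular legitimate domains.
BENIGN = ["google", "facebook", "wikipedia", "amazon", "twitter",
          "youtube", "instagram", "linkedin", "netflix", "microsoft"]

def bigrams(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

ref = Counter(bg for name in BENIGN for bg in bigrams(name))
total = sum(ref.values())

def dga_score(domain):
    """Mean negative log-probability of the label's character bigrams
    under the benign reference (Laplace-smoothed); algorithmically
    generated names tend to use rare bigrams and so score higher."""
    label = domain.split(".")[0].lower()
    bgs = bigrams(label)
    if not bgs:
        return 0.0
    return sum(-math.log((ref[bg] + 1) / (total + len(ref)))
               for bg in bgs) / len(bgs)

for d in ["facebook.com", "xjw3qpz7kd.com"]:
    print(d, round(dga_score(d), 2))
```

A dictionary-like label draws on common bigrams and scores low, while an algorithmically generated label such as xjw3qpz7kd scores markedly higher.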
83

A framework for high speed lexical classification of malicious URLs

Egan, Shaun Peter January 2014
Phishing attacks employ social engineering to target end-users, with the goal of stealing identifying or sensitive information. This information is used in activities such as identity theft or financial fraud. During a phishing campaign, attackers distribute URLs which, along with false information, point to fraudulent resources in an attempt to deceive users into requesting the resource. These URLs are made obscure through the use of several techniques, which makes automated detection difficult. Current methods used to detect malicious URLs face multiple problems which attackers use to their advantage. These problems include the time required to react to new attacks, shifts in trends in URL obfuscation, and usability problems caused by the latency of the lookups these approaches require. A new method of identifying malicious URLs using Artificial Neural Networks (ANNs) has been shown to be effective by several authors. The simple method of classification performed by ANNs results in very high classification speeds with little impact on usability. Samples used for the training, validation, and testing of these ANNs are gathered from Phishtank and Open Directory. Words selected from the different sections of the samples are used to create a Bag-of-Words (BOW), which is used as a binary input vector indicating the presence of a word in a given sample. Twenty additional features which measure lexical attributes of the sample are used to increase classification accuracy. A framework that is capable of generating these classifiers in an automated fashion is implemented. These classifiers are automatically stored on a remote update distribution service which has been built to supply updates to classifier implementations. An example browser plugin is created that uses ANNs provided by this service; it is capable both of classifying URLs requested by a user in real time and of blocking these requests. The framework is tested in terms of training time and classification accuracy. Classification speed and the effectiveness of compression algorithms on the data required to distribute updates are tested. It is concluded that it is possible to generate these ANNs frequently, and in a form small enough to distribute easily. It is also shown that classifications are made at high speed with high accuracy, resulting in little impact on usability.
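The feature construction described above can be sketched roughly as follows; the ten-word vocabulary and the three lexical features are invented stand-ins (the thesis derives the BOW from Phishtank and Open Directory samples and uses twenty lexical features):

```python
import re

# Hypothetical vocabulary; the thesis builds its BOW from words found
# in the Phishtank and Open Directory samples.
VOCAB = ["login", "secure", "account", "update", "verify",
         "paypal", "signin", "confirm", "banking", "webscr"]

def url_features(url):
    """Binary bag-of-words vector over VOCAB, followed by a few lexical
    features of the kind described above (twenty in the thesis; three
    illustrative ones here)."""
    tokens = set(re.split(r"[/.?=&_\-:]+", url.lower()))
    bow = [1.0 if w in tokens else 0.0 for w in VOCAB]
    lexical = [
        float(len(url)),                       # overall URL length
        float(url.count(".")),                 # dot count (subdomain depth)
        float(sum(c.isdigit() for c in url)),  # digit count
    ]
    return bow + lexical  # input vector for the ANN

print(url_features("http://secure-login.example.com/paypal/verify?acct=1"))
```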
84

Proposal of an optimal solution for introducing electronic commerce in a company

Polívka, Antonín January 2008
This diploma thesis analyses the problems involved in introducing internet commerce in a firm and, in particular, proposes an optimal solution for launching internet commerce in a company.
85

The Security Risks of E-commerce

Bauer, Oldřich January 2009
This thesis is concerned with security risks in a company, focusing on risks connected with electronic commerce. It describes the relevant technologies and gives a comprehensive view of the security risks, applied to a concrete firm in practice.
86

Company Information Security System

Hála, Jaroslav January 2011
This work deals with the introduction of an information security system in a company that provides internet access. It covers the hardware and software solutions needed to monitor and manage networks at a professional level. The chosen solutions are versatile with regard to the diversity of the market and the speed of technological development.
87

Secure web applications against off-line password guessing attack: a two-way password protocol with challenge response using arbitrary images

Lu, Zebin 14 August 2013
Indiana University-Purdue University Indianapolis (IUPUI) / Web applications are now being used in many security-oriented areas, including online shopping and e-commerce, which require users to transmit sensitive information over the Internet. Successfully authenticating each party of a web application is therefore very important. A popular deployed technique for web authentication is the Hypertext Transfer Protocol Secure (HTTPS) protocol. However, the protocol does not protect careless users who connect to fraudulent websites from being tricked. For example, in a phishing attack, a web user who connects to an attacker may provide a password to the attacker, who can use it afterwards to log in to the target website and obtain the victim's credentials. To prevent phishing attacks, the Two-Way Password Protocol (TPP) and Dynamic Two-Way Password Protocol (DTPP) were developed. However, there still exist potential security threats in those protocols. For example, an attacker who makes a fake website may obtain the hash of users' passwords and use that information to mount offline password guessing attacks. Based on TPP, we incorporated challenge responses with arbitrary images to prevent offline password guessing attacks in our new protocol, TPP with Challenge response using Arbitrary images (TPPCA). Besides TPPCA, we developed another scheme called Rain to solve the same problem by dividing shared secrets into several rounds of negotiations. We discuss various aspects of our protocols, their implementation, and experimental results.
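TPPCA's image-based challenges are specific to the thesis, but the generic challenge-response pattern it builds on can be sketched as below: the password never crosses the wire, fresh challenges prevent replay, and a deliberately slow key derivation raises the cost of any offline guessing. All names and parameters are illustrative:

```python
import hashlib
import hmac
import os

def derive_key(password, salt):
    # Deliberately slow KDF so every offline guess is expensive;
    # the iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Enrollment: the server stores only a salt and the derived key.
salt = os.urandom(16)
server_key = derive_key("correct horse battery staple", salt)

# Login: the server issues a fresh random challenge ...
challenge = os.urandom(16)

# ... and the client answers with an HMAC keyed by its own derived key,
# so neither the password nor a reusable hash crosses the wire.
client_key = derive_key("correct horse battery staple", salt)
response = hmac.new(client_key, challenge, hashlib.sha256).digest()

expected = hmac.new(server_key, challenge, hashlib.sha256).digest()
print("authenticated:", hmac.compare_digest(response, expected))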
88

Longitudinal analysis of the certificate chains of big tech company domains

Klasson, Sebastian, Lindström, Nina January 2021
The internet is one of the most widely used media for communication in modern society and has become an everyday necessity for many. It is therefore of utmost importance that it remains as secure as possible. SSL and TLS are the backbone of internet security, and an integral part of these technologies is the certificates they use. Certificate authorities (CAs) can issue certificates that validate that domains are who they claim to be. If a user trusts a CA, they can in turn also trust the domains that CA has validated. CAs can themselves trust other CAs, and this creates a chain of trust called a certificate chain. In this thesis, the structure of these certificate chains is analysed and a longitudinal dataset is created. The analysis looks at how the certificate chains have changed over time, with extra focus on the domains of big tech companies. The dataset created can also be used for further analysis in the future and will be a useful tool in the examination of historical certificate chains. Our findings show that the certificate chains of the domains studied do change over time; both their structure and their lengths vary noticeably. Most of the observed domains show a decrease in average chain length between 2013 and 2020, and the structure of the chains varies significantly over the years.
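As a minimal sketch of how a single observation in such a dataset might be collected, assuming the third-party pyOpenSSL package (the thesis's longitudinal pipeline is not reproduced here):

```python
import socket
from OpenSSL import SSL  # third-party pyOpenSSL package

def fetch_chain(hostname, port=443):
    """TLS-handshake with the host and return the certificate chain it
    presents, leaf certificate first."""
    ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
    sock = socket.create_connection((hostname, port), timeout=10)
    conn = SSL.Connection(ctx, sock)
    conn.set_tlsext_host_name(hostname.encode())  # SNI
    conn.set_connect_state()
    conn.do_handshake()
    chain = conn.get_peer_cert_chain()
    conn.close()
    sock.close()
    return chain

chain = fetch_chain("www.example.com")
print("chain length:", len(chain))
for cert in chain:
    print(cert.get_subject().CN, "<- issued by:", cert.get_issuer().CN)
```

Repeating such a probe over time (or over archived handshake data) yields the kind of longitudinal chain-length and structure measurements the thesis analyses.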
89

Estimating human resilience to social engineering attacks through computer configuration data: A literature study on the state of social engineering vulnerabilities

Carlander-Reuterfelt Gallo, Matias January 2020
Social engineering as a method of attack is increasingly becoming a problem for both corporations and individuals. From identity theft to enormous financial losses, this form of attack is notorious for affecting complex structures, yet it is often very simple in form. Whereas for other forms of cyber-attack, tools like antivirus and antimalware are now industry standard and have proven to be reliable ways to keep private and confidential data safe, there is no such equivalent for social engineering attacks. There is not, as of this day, a trustworthy and precise way of estimating resilience to these attacks while still keeping private data private. The purpose of this report is to compile the different aspects of a user's computer data that have been shown to be significantly indicative of their susceptibility to these kinds of attacks, and with them devise a system that can, with some degree of precision, estimate the user's resilience to social engineering. This report is a literature study on the topic of social engineering and how it relates to computer program data, configuration, and personality. Each phase of the research led to a more comprehensive way of linking the different pieces of data together and to a rudimentary way of estimating human resilience to social engineering through the observation of a few configuration aspects. For the purposes of this report, the data had to be reasonably accessible, respect privacy, and be something that can easily be extrapolated from one user to another. Based on findings ranging from psychological data and behavioural patterns to network configurations, we conclude that, even though there is data that supports the possibility of estimating resilience, there is, as of this day, no empirically proven way of doing so in a precise manner. An estimation model is provided at the end of the report, but the limitations of this project did not allow for an experiment to prove its validity beyond the theories it is based upon.
90

Bootstrapping Secure Sensor Networks in the Internet of Things

Edman, Johan January 2022
The Internet of Things has become an integral part of modern society and continues to grow and evolve. The devices are expected to operate in various conditions and environments while securely transmitting sensor data and keeping manufacturing costs low. Security for the Internet of Things is still in its infancy and a serious concern. Although there are several schemes and protocols for securing communication over insecure channels, they are deemed too costly to perform on these constrained devices. As a result, substantial effort has been committed to developing secure protocols and adapting existing ones to be more lightweight. What remains seemingly absent in protocol specifications and key management schemes, however, is how to bootstrap and secure the initial communication. While it is possible to use pre-shared keys, such solutions are problematic with security and administrative overhead in mind. When sensor networks grow in scale, with an increasing number of devices, this becomes especially problematic, as autonomous deployment becomes necessary. By reviewing proposed bootstrapping techniques and evaluating suitable candidates, this work aims to provide an overview of the approaches, their trade-offs, and their feasibility. Results of the study show that advancements in high-speed, lightweight, elliptic-curve implementations have made public-key cryptography a viable option even on very constrained platforms, with session keys established within a minute. When the node's capability to generate randomness, a cornerstone of cryptographic security, is analysed, initial findings indicate that it is not well equipped for the task: the A/D converter evaluated as an entropy source yields low entropy, with poorly distributed and highly repetitive output. Consequently, sources of entropy must be evaluated thoroughly in resource-constrained devices before use, and dedicated hardware for randomness might be necessary for the most constrained nodes if any security is to be guaranteed.
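A first-pass screening of a candidate entropy source can be sketched as below; the estimator and the stand-in samples are illustrative, and a serious evaluation would follow a methodology such as NIST SP 800-90B rather than rely on Shannon entropy alone:

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Empirical Shannon entropy of the sample values, in bits."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Stand-in for raw readings from a candidate source such as an ADC;
# this toy sequence is highly repetitive, much like the poorly
# distributed output the thesis reports.
adc_like = [512 + (i % 3) for i in range(10_000)]
print("bits/sample:", round(shannon_entropy(adc_like), 3))  # ~1.585 bits,
# far below the sample width
```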
