  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Assessing the reliability of digital evidence from live investigations involving encryption

Hargreaves, Christopher James January 2009 (has links)
The traditional approach to a digital investigation when a computer system is encountered in a running state is to remove the power, image the machine using a write blocker and then analyse the acquired image. This has the advantage of preserving the contents of the computer’s hard disk at that point in time. However, the disadvantage of this approach is that the preservation of the disk is at the expense of volatile data such as that stored in memory, which does not remain once the power is disconnected. There are an increasing number of situations where this traditional approach of ‘pulling the plug’ is not ideal since volatile data is relevant to the investigation; one of these situations is when the machine under investigation is using encryption. If encrypted data is encountered on a live machine, a live investigation can be performed to preserve this evidence in a form that can be later analysed. However, there are a number of difficulties with using evidence obtained from live investigations that may cause the reliability of such evidence to be questioned. This research investigates whether digital evidence obtained from live investigations involving encryption can be considered to be reliable. To determine this, a means of assessing reliability is established, which involves evaluating digital evidence against a set of criteria; evidence should be authentic, accurate and complete. This research considers how traditional digital investigations satisfy these requirements and then determines the extent to which evidence from live investigations involving encryption can satisfy the same criteria. This research concludes that it is possible for live digital evidence to be considered to be reliable, but that reliability of digital evidence ultimately depends on the specific investigation and the importance of the decision being made. 
However, the research provides structured criteria that allow the reliability of digital evidence to be assessed, demonstrates the use of these criteria in the context of live digital investigations involving encryption, and shows the extent to which each can currently be met.
233

Criptografia visual : método de alinhamento automático de parcelas utilizando dispositivos móveis / Visual cryptography : automatic alignment method using mobile devices

Pietz, Franz, 1983- 12 November 2014 (has links)
Advisor: Julio Cesar López Hernández / Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Visual cryptography is a secret sharing method proposed by Naor and Shamir in the 1994 paper "Visual Cryptography". It splits a secret image into a set of shares such that stacking a minimum number of shares decodes the secret visually, without any device or cryptographic computation, while analysing a share in isolation reveals no information about the original secret image. The scheme is considered secure and can be compared to one-time-pad ciphers, also called perfect ciphers, because of the difficulty an attacker faces in obtaining the secret or any part of it. There are proposals to use visual cryptography in authentication protocols, such as authenticating bank transactions and verifying the legitimacy of products. However, the method has problems such as poor definition of the recovered secret, low contrast and misalignment of the shares, the last being the most sensitive. Our proposal shows how to use a mobile device, such as a smartphone or tablet, to align the shares automatically and assist the user in recovering secrets encrypted with visual cryptography. To do this, we use the mobile device's camera to turn it into a "transparency" and apply image-analysis techniques to locate a share displayed on a monitor or printed on a product's packaging, and overlay it with a second share stored on the mobile device, allowing the recovered secret to be viewed on the device's screen. Using a mobile device brings immediate advantages, such as easy delivery of shares at transaction time, with no need to store information in advance. / Master's / Computer Science / Master in Computer Science
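The 2-out-of-2 stacking scheme the abstract describes can be sketched as follows; this is a generic Naor-Shamir-style illustration (the pixel-expansion patterns and function names are my own, not taken from the dissertation):

```python
import random

# 2-out-of-2 visual cryptography for a binary image.
# Each secret pixel expands to a pair of subpixels in each share:
# a white pixel gets the SAME pattern in both shares (stack -> half black),
# a black pixel gets COMPLEMENTARY patterns (stack -> fully black).
PATTERNS = [(0, 1), (1, 0)]  # 1 = black subpixel

def make_shares(secret, rng=random):
    share1, share2 = [], []
    for pixel in secret:  # secret: list of 0 (white) / 1 (black)
        p = rng.choice(PATTERNS)  # each share alone is uniformly random
        share1.append(p)
        # complement the pattern for a black pixel, copy it for white
        share2.append(tuple(1 - b for b in p) if pixel else p)
    return share1, share2

def stack(share1, share2):
    # Physically stacking transparencies acts as a per-subpixel OR.
    return [tuple(a | b for a, b in zip(p1, p2))
            for p1, p2 in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
# Black pixels stack to (1, 1); white pixels keep one white subpixel.
decoded = [1 if sum(p) == 2 else 0 for p in stack(s1, s2)]
assert decoded == secret
```

Because the pattern choice is fresh and random per pixel, either share in isolation carries no information about the secret, which is the property the abstract compares to a one-time pad.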
234

An analysis and a comparative study of cryptographic algorithms used on the internet of things (IoT) based on avalanche effect

Muthavhine, Khumbelo Difference 07 1900 (has links)
Ubiquitous computing is already weaving itself around us, connecting everything to the network of networks. This interconnection of objects to the internet is a new computing paradigm called the Internet of Things (IoT). Many capacity- and non-capacity-constrained devices, such as sensors, are connecting to the Internet. These devices interact with each other through the network and provide a new experience to their users. In order to make full use of this ubiquitous paradigm, security on the IoT is important. There are privacy concerns regarding certain algorithms used on the IoT, particularly regarding their avalanche effect: a small change in the plaintext or key should create a significant change in the ciphertext. The more significant the change, the higher the security of that algorithm. If the avalanche effect of an algorithm is less than 50%, that algorithm is weak and can create security problems in any network, in this case the IoT. In this study, we propose to do the following: (1) search for and select existing block cryptographic algorithms (maximum of ten) used for authentication and encryption on different IoT devices; (2) analyse the avalanche effect of the selected cryptographic algorithms and determine whether they provide efficient authentication on the IoT; (3) improve their avalanche effect by designing a mathematical model that improves their robustness against attacks, done by XORing an initial vector with the plaintext and a final vector with the ciphertext; (4) test the new mathematical model for any enhancement of the avalanche effect of each algorithm; (5) propose future work on how to enhance security on the IoT. Results show that, when using the proposed method with variation of the key, the avalanche effect improved significantly for seven of the ten algorithms. This means we improved 70% of the algorithms tested, indicating a substantial success rate for the proposed method as far as the avalanche effect is concerned. We propose that the seven algorithms be replaced by our improved versions in each of their IoT implementations whenever the plaintext is varied. / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
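The avalanche measurement described above can be sketched as follows; this is a generic illustration using SHA-256 as a stand-in primitive, not the thesis's selected algorithms or its XOR-vector model:

```python
import hashlib

def avalanche_effect(cipher_fn, data: bytes, bit: int) -> float:
    """Fraction of output bits that flip when one input bit is flipped."""
    flipped = bytearray(data)
    flipped[bit // 8] ^= 1 << (bit % 8)        # flip a single input bit
    out1 = cipher_fn(data)
    out2 = cipher_fn(bytes(flipped))
    # XOR the outputs and count the differing (set) bits.
    diff = int.from_bytes(out1, "big") ^ int.from_bytes(out2, "big")
    return bin(diff).count("1") / (len(out1) * 8)

sha = lambda b: hashlib.sha256(b).digest()
effect = avalanche_effect(sha, b"example plaintext", bit=3)
# A strong primitive should score close to 0.5, i.e. the 50% threshold
# the abstract uses to separate weak algorithms from strong ones.
print(f"{effect:.3f}")
```

In practice one would average this over many inputs and bit positions, and repeat the measurement with key bits flipped instead of plaintext bits.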
235

Universal homophonic coding

Stevens, Charles Cater 11 1900 (has links)
Redundancy in plaintext is a fertile source of attack in any encryption system. Compression before encryption reduces the redundancy in the plaintext, but this does not make a cipher more secure: the ciphertext is still susceptible to known-plaintext and chosen-plaintext attacks. The aim of homophonic coding is to convert a plaintext source into a random sequence by randomly mapping each source symbol into one of a set of homophones. Each homophone is then encoded by a source coder, after which it can be encrypted with a cryptographic system. The security of homophonic coding falls into the class of unconditionally secure ciphers. The main advantage of homophonic coding over pure source coding is that it provides security against both known-plaintext and chosen-plaintext attacks, whereas source coding merely protects against a ciphertext-only attack. The aim of this dissertation is to investigate the implementation of an adaptive homophonic coder based on an arithmetic coder. This type of homophonic coding is termed universal, as it does not depend on the source statistics. / Computer Science / M.Sc. (Computer Science)
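A minimal sketch of the homophone mapping idea follows; the alphabet and homophone counts are illustrative assumptions, not taken from the dissertation (which builds an adaptive coder on arithmetic coding rather than a fixed table):

```python
import random

# Homophonic substitution: each source symbol maps to several homophones,
# with more homophones allocated to more frequent symbols so that the
# output distribution flattens toward uniform.
HOMOPHONES = {
    "e": ["07", "12", "33", "48"],  # frequent symbol -> many homophones
    "t": ["05", "21", "39"],
    "a": ["11", "27"],
    "q": ["44"],                    # rare symbol -> a single homophone
}
# Invert the table for decoding: every homophone names exactly one symbol.
DECODE = {h: s for s, hs in HOMOPHONES.items() for h in hs}

def encode(text, rng=random):
    # Randomly pick one homophone per symbol; repeated plaintext symbols
    # need not produce repeated code values, hiding plaintext statistics.
    return [rng.choice(HOMOPHONES[c]) for c in text]

def decode(codes):
    return "".join(DECODE[h] for h in codes)

msg = "teatea"
assert decode(encode(msg)) == msg
```

Each homophone would then be fed to the source coder before encryption, as the abstract describes.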
236

Session hijacking attacks in wireless local area networks

Onder, Hulusi 03 1900 (has links)
Approved for public release, distribution is unlimited / Wireless Local Area Network (WLAN) technologies are becoming widely used since they provide more flexibility and availability. Unfortunately, it is possible for WLANs to be implemented with security flaws which are not addressed in the original 802.11 specification. The IEEE formed a working group (TGi) to provide a complete solution (code-named the 802.11i standard) to all the security problems of WLANs. The group proposed using 802.1X as an interim solution to the deficiencies in WLAN authentication and key management. The full 802.11i standard is expected to be finalized by the end of 2004. Although 802.1X provides a better authentication scheme than the original 802.11 security solution, it is still vulnerable to denial-of-service, session hijacking, and man-in-the-middle attacks. Using an open-source 802.1X test-bed, this thesis evaluates various session hijacking mechanisms through experimentation. The main conclusion is that the risk of session hijacking attacks is significantly reduced with the new security standard (802.11i); however, the new standard will not resolve all of the problems. An attempt to launch a session hijacking attack against the new security standard will not succeed, although it will result in a denial-of-service attack against the user. / Lieutenant Junior Grade, Turkish Navy
237

Uma API criptográfica para aplicações embarcadas / A cryptographic API for embedded applications

Fontoura, Felipe Michels 31 August 2016 (has links)
This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open-source library for embedded systems; the other was built over OpenSSL, which is open source and a de facto standard, but does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running Windows 10. Test results show GEmSysC to be simpler than other libraries in some aspects, and that both implementations incur little computation-time overhead compared to the underlying cryptographic libraries themselves: between around 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the program-memory and RAM costs of each implementation.
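The core-plus-modules idea can be sketched as follows; the names (`CryptoCore`, `register`, `digest`) are hypothetical illustrations of the pattern, not GEmSysC's actual interface, and the standard library stands in for wolfSSL or OpenSSL:

```python
import hashlib

# A toy "unified API" layer: a generic core dispatches to pluggable
# algorithm modules, so callers never touch the backing library directly.
class CryptoCore:
    def __init__(self):
        self._modules = {}

    def register(self, name, module):
        # Attach a module (here, a backend function) under an algorithm name.
        self._modules[name] = module

    def digest(self, name, data: bytes) -> bytes:
        # Dispatch to whichever backend is registered for this algorithm.
        return self._modules[name](data)

# Backend module built over the standard library; swapping in a different
# backend would leave caller code unchanged, which is the point of the API.
core = CryptoCore()
core.register("SHA-256", lambda d: hashlib.sha256(d).digest())

out = core.digest("SHA-256", b"hello")
assert out == hashlib.sha256(b"hello").digest()
```

The thin dispatch layer is also why the measured overhead of such an abstraction can stay small relative to the cryptographic work itself.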
239

Automated image classification via unsupervised feature learning by K-means

Karimy Dehkordy, Hossein 09 July 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Research on image classification has grown rapidly in the field of machine learning. Many methods have already been implemented for image classification. Among all these methods, the best results have been reported by neural network-based techniques. One of the most important steps in automated image classification is feature extraction, which includes two parts: feature construction and feature selection. Many methods for feature extraction exist, but the best ones are related to deep-learning approaches such as network-in-network or deep convolutional network algorithms. Deep learning builds progressively higher levels of abstraction by stacking multiple hidden layers. The two main problems with deep-learning approaches are their speed and the number of parameters that must be configured: small changes or poor selection of parameters can alter the results completely or even make them worse. Tuning these parameters is usually impractical for ordinary users without access to supercomputers, because one must run the algorithm and adjust the parameters according to the results obtained, a process that can be very time consuming. This thesis attempts to address the speed and configuration issues found with traditional deep-network approaches. Some traditional methods of unsupervised learning are used to build an automated image-classification approach that takes less time both to configure and to run.
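The unsupervised feature-learning step can be sketched as follows; the toy data and deterministic initialization are illustrative simplifications, not the thesis's pipeline:

```python
import math

def kmeans(points, centroids, iters=10):
    # Lloyd's algorithm: alternate nearest-centroid assignment and
    # centroid update, starting from the given initial centroids.
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

def encode(point, centroids):
    # Feature vector for a sample: its distance to each learned centroid.
    return [math.dist(point, c) for c in centroids]

# Two small blobs standing in for unlabeled image patches; we seed one
# centroid in each blob (good initialization, e.g. k-means++, matters).
pts = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (5.0, 5.1), (5.1, 5.0), (5.0, 5.0)]
cents = kmeans(pts, centroids=[pts[0], pts[3]])
features = encode((0.05, 0.05), cents)
assert min(features) < 1.0 < max(features)  # near one blob, far from the other
```

The learned distance features would then feed a simple classifier, replacing the many-parameter deep feature extractor with one that needs essentially a single knob, k.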
240

A comparative review of legislative reform of electronic contract formation in South Africa

Mtuze, Sizwe Lindelo Snail ka 02 1900 (has links)
Electronic contracts and electronic commerce in the new technological age have brought about worldwide legal uncertainty. When compared with the traditional paper-based method of writing and signing, the question has arisen whether contracts concluded by electronic means should be recognised as valid and enforceable agreements in terms of the functional-equivalence approach. This study examines the law regulating e-commerce from a South African perspective, in contrast to international trends and e-commerce law from the perspective of the United States. The research investigates various aspects of contract formation, such as time and place, validity of electronic agreements, electronic signatures, attribution of electronic data messages and signatures, and automated transactions, as well as selected aspects of e-jurisdiction from a South African and United States viewpoint. / Mercantile Law / LLM
