341

Utilizing graphics processing units in cryptographic applications.

January 2006
Fleissner Sebastian. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 91-95). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Legend of Hercules --- p.1 / Chapter 1.2 --- Background --- p.2 / Chapter 1.3 --- Research Purpose --- p.2 / Chapter 1.4 --- Research Overview --- p.3 / Chapter 1.5 --- Thesis Organization --- p.4 / Chapter 2 --- Background and Definitions --- p.6 / Chapter 2.1 --- General Purpose GPU Computing --- p.6 / Chapter 2.1.1 --- Four Generations of GPU Hardware --- p.6 / Chapter 2.1.2 --- GPU Architecture & Terms --- p.7 / Chapter 2.1.3 --- General Purpose GPU Programming --- p.9 / Chapter 2.1.4 --- Shader Programming Languages --- p.12 / Chapter 2.2 --- Cryptography Overview --- p.13 / Chapter 2.2.1 --- "Alice, Bob, and Friends" --- p.14 / Chapter 2.2.2 --- Cryptographic Hash Functions --- p.14 / Chapter 2.2.3 --- Secret Key Ciphers --- p.15 / Chapter 2.2.4 --- Public Key Encryption --- p.16 / Chapter 2.2.5 --- Digital Signatures --- p.17 / Chapter 2.3 --- The Montgomery Method --- p.18 / Chapter 2.3.1 --- Pre-computation Step --- p.19 / Chapter 2.3.2 --- Obtaining the Montgomery Representation --- p.19 / Chapter 2.3.3 --- Calculating the Montgomery Product(s) --- p.19 / Chapter 2.3.4 --- Calculating Final Result --- p.20 / Chapter 2.3.5 --- The Montgomery Exponentiation Algorithm --- p.20 / Chapter 2.4 --- Elliptic Curve Cryptography --- p.21 / Chapter 2.4.1 --- Introduction --- p.21 / Chapter 2.4.2 --- Recommended Elliptic Curves --- p.22 / Chapter 2.4.3 --- Coordinate Systems --- p.23 / Chapter 2.4.4 --- Point Doubling --- p.23 / Chapter 2.4.5 --- Point Addition --- p.24 / Chapter 2.4.6 --- Double and Add --- p.25 / Chapter 2.4.7 --- Elliptic Curve Encryption --- p.26 / Chapter 2.5 --- Related Research --- p.28 / Chapter 2.5.1 --- Secret Key Cryptography on GPUs --- p.28 / Chapter 2.5.2 --- Remotely Keyed Cryptographics --- p.29 / Chapter 3 --- Proposed Algorithms --- p.30 / Chapter 3.1 --- Introduction --- p.30 / Chapter 3.2 --- Chapter Organization --- p.31 / Chapter 3.3 --- Algorithm Design Issues --- p.31 / Chapter 3.3.1 --- Arithmetic Density and GPU Memory Access --- p.31 / Chapter 3.3.2 --- Encoding Large Integers with Floating Point Numbers --- p.33 / Chapter 3.4 --- GPU Montgomery Algorithms --- p.34 / Chapter 3.4.1 --- Introduction --- p.34 / Chapter 3.4.2 --- GPU-FlexM-Prod Specification --- p.37 / Chapter 3.4.3 --- GPU-FlexM-Mul Specification --- p.43 / Chapter 3.4.4 --- GPU-FlexM-Exp Specification --- p.45 / Chapter 3.4.5 --- GPU-FixM-Prod Specification --- p.46 / Chapter 3.4.6 --- GPU-FixM-Mul Specification --- p.50 / Chapter 3.4.7 --- GPU-FixM-Exp Specification --- p.52 / Chapter 3.5 --- GPU Elliptic Curve Algorithms --- p.54 / Chapter 3.5.1 --- GPU-EC-Double Specification --- p.55 / Chapter 3.5.2 --- GPU-EC-Add Specification --- p.59 / Chapter 3.5.3 --- GPU-EC-DoubleAdd Specification --- p.64 / Chapter 4 --- Analysis of Proposed Algorithms --- p.67 / Chapter 4.1 --- Performance Analysis --- p.67 / Chapter 4.1.1 --- GPU-FlexM Algorithms --- p.69 / Chapter 4.1.2 --- GPU-FixM Algorithms --- p.72 / Chapter 4.1.3 --- GPU-EC Algorithms --- p.77 / Chapter 4.1.4 --- Summary --- p.82 / Chapter 4.2 --- Usability of Proposed Algorithms --- p.83 / Chapter 4.2.1 --- Signcryption --- p.84 / Chapter 4.2.2 --- Pure Asymmetric Encryption and Decryption --- p.85 / Chapter 4.2.3 --- Simultaneous Signing of Multiple Messages --- p.86 / Chapter 4.2.4 --- Relieving the Main Processor --- p.87 / Chapter 5 --- Conclusions --- p.88 / Chapter 5.1 --- Research Results --- p.88 / Chapter 5.2 --- Future Research --- p.89 / Bibliography --- p.91
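For readers unfamiliar with the Montgomery method that Chapter 2.3 outlines (pre-computation, conversion to Montgomery representation, Montgomery products, and conversion back), the following is a minimal sketch in plain Python. It assumes ordinary arbitrary-precision integers rather than the floating-point encodings and GPU shader algorithms the thesis itself develops.

```python
# Illustrative sketch of the Montgomery method outlined in Sections 2.3.1-2.3.5.
# Plain Python integers stand in for the thesis's GPU floating-point encoding,
# so this shows the algorithmic flow only, not the proposed GPU algorithms.

def montgomery_setup(n, r_bits):
    """Pre-computation step: R = 2^r_bits with gcd(R, n) = 1, and n' = -n^-1 mod R."""
    R = 1 << r_bits
    n_prime = (-pow(n, -1, R)) % R
    return R, n_prime

def mont_product(a_bar, b_bar, n, R, n_prime):
    """Montgomery product: returns a_bar * b_bar * R^-1 mod n."""
    t = a_bar * b_bar
    m = (t * n_prime) % R
    u = (t + m * n) // R
    return u - n if u >= n else u

def mont_exp(x, e, n, r_bits=None):
    """Montgomery exponentiation: x^e mod n via repeated Montgomery products."""
    r_bits = r_bits or n.bit_length()
    R, n_prime = montgomery_setup(n, r_bits)
    x_bar = (x * R) % n            # Montgomery representation of x
    acc = R % n                    # Montgomery representation of 1
    for bit in bin(e)[2:]:         # left-to-right square-and-multiply
        acc = mont_product(acc, acc, n, R, n_prime)
        if bit == "1":
            acc = mont_product(acc, x_bar, n, R, n_prime)
    return mont_product(acc, 1, n, R, n_prime)   # convert back to normal form

assert mont_exp(7, 560, 561) == pow(7, 560, 561)
```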
342

Data Sharing on Untrusted Storage with Attribute-Based Encryption

Yu, Shucheng 13 July 2010
"Storing data on untrusted storage makes secure data sharing a challenge issue. On one hand, data access policies should be enforced on these storage servers; on the other hand, confidentiality of sensitive data should be well protected against them. Cryptographic methods are usually applied to address this issue -- only encrypted data are stored on storage servers while retaining secret key(s) to the data owner herself; user access is granted by issuing the corresponding data decryption keys. The main challenges for cryptographic methods include simultaneously achieving system scalability and fine-grained data access control, efficient key/user management, user accountability and etc. To address these challenge issues, this dissertation studies and enhances a novel public-key cryptography -- attribute-based encryption (ABE), and applies it for fine-grained data access control on untrusted storage. The first part of this dissertation discusses the necessity of applying ABE to secure data sharing on untrusted storage and addresses several security issues for ABE. More specifically, we propose three enhancement schemes for ABE: In the first enhancement scheme, we focus on how to revoke users in ABE with the help of untrusted servers. In this work, we enable the data owner to delegate most computation-intensive tasks pertained to user revocation to untrusted servers without disclosing data content to them. In the second enhancement scheme, we address key abuse attacks in ABE, in which authorized but malicious users abuse their access privileges by sharing their decryption keys with unauthorized users. Our proposed scheme makes it possible for the data owner to efficiently disclose the original key owner's identity merely by checking the input and output of a suspicious user's decryption device. Our third enhancement schemes study the issue of privacy preservation in ABE. Specifically, our proposed schemes hide the data owner's access policy not only to the untrusted servers but also to all the users. The second part presents our ABE-based secure data sharing solutions for two specific applications -- Cloud Computing and Wireless Sensor Networks (WSNs). In Cloud Computing cloud servers are usually operated by third-party providers, which are almost certain to be outside the trust domain of cloud users. To secure data storage and sharing for cloud users, our proposed scheme lets the data owner (also a cloud user) generate her own ABE keys for data encryption and take the full control on key distribution/revocation. The main challenge in this work is to make the computation load affordable to the data owner and data consumers (both are cloud users). We address this challenge by uniquely combining various computation delegation techniques with ABE and allow both the data owner and data consumers to securely mitigate most computation-intensive tasks to cloud servers which are envisaged to have unlimited resources. In WSNs, wireless sensor nodes are often unattendedly deployed in the field and vulnerable to strong attacks such as memory breach. For securing storage and sharing of data on distributed storage sensor nodes while retaining data confidentiality, sensor nodes encrypt their collected data using ABE public keys and store encrypted data on storage nodes. Authorized users are given corresponding decryption keys to read data. The main challenge in this case is that sensor nodes are extremely resource-constrained and can just afford limited computation/communication load. 
Taking this into account we divide the lifetime of sensor nodes into phases and distribute the computation tasks into each phase. We also revised the original ABE scheme to make the overhead pertained to user revocation minimal for sensor nodes. Feasibility of the scheme is demonstrated by experiments on real sensor platforms. "
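The access-control idea the abstract builds on, namely that a ciphertext carries an access policy over attributes and decryption succeeds only when a user's attribute set satisfies it, can be sketched without any real cryptography. The toy model below is purely illustrative: the policy grammar and attribute names are invented, and none of the dissertation's actual ABE constructions or revocation mechanisms appear here.

```python
# Toy model of ABE-style access control: a ciphertext is bound to an access
# policy over attributes, and "decryption" succeeds only when the user's
# attribute set satisfies that policy. Illustrative only -- no pairings, no
# actual cryptography, and the attribute names are made up for the example.

from dataclasses import dataclass

@dataclass
class Policy:
    op: str                      # "ATTR", "AND", or "OR"
    attr: str = ""               # used when op == "ATTR"
    children: tuple = ()         # used when op is "AND"/"OR"

    def satisfied_by(self, attrs: set) -> bool:
        if self.op == "ATTR":
            return self.attr in attrs
        results = (c.satisfied_by(attrs) for c in self.children)
        return all(results) if self.op == "AND" else any(results)

def attr(name): return Policy("ATTR", attr=name)
def AND(*ps):   return Policy("AND", children=ps)
def OR(*ps):    return Policy("OR", children=ps)

def encrypt(data: bytes, policy: Policy):
    # In a real scheme the payload would be ciphertext tied to the policy.
    return {"policy": policy, "payload": data}

def decrypt(ct, user_attrs: set):
    if not ct["policy"].satisfied_by(user_attrs):
        raise PermissionError("attributes do not satisfy the access policy")
    return ct["payload"]

policy = OR(AND(attr("doctor"), attr("cardiology")), attr("data-owner"))
ct = encrypt(b"lab results", policy)
print(decrypt(ct, {"doctor", "cardiology"}))   # authorized
# decrypt(ct, {"nurse"}) would raise PermissionError
```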
343

Efficient Elliptic Curve Processor Architectures for Field Programmable Logic

Orlando, Gerardo 27 March 2002
Elliptic curve cryptosystems offer security comparable to that of traditional asymmetric cryptosystems, such as those based on the RSA encryption and digital signature algorithms, with smaller keys and computationally more efficient algorithms. These smaller keys and more efficient algorithms are two of the main reasons why elliptic curve cryptography has become popular. As its popularity increases, so does the need for efficient hardware solutions that accelerate the computation of elliptic curve point multiplications. This dissertation introduces elliptic curve processor architectures suitable for the computation of point multiplications for curves defined over fields GF(2^m) and curves defined over fields GF(p). Each of the processor architectures presented here allows designers to tailor the performance and hardware requirements according to their performance and cost goals. Moreover, these architectures are well suited for implementation in modern field programmable gate arrays (FPGAs), a point proved with prototyped implementations. The fastest prototyped GF(2^m) processor can compute an arbitrary point multiplication for curves defined over GF(2^167) in 0.21 milliseconds, and the prototyped processor for the field GF(2^192 - 2^64 - 1) is capable of computing a point multiplication in about 3.6 milliseconds. The most critical component of an elliptic curve processor is its arithmetic unit, which typically includes an adder/subtractor, a multiplier, and possibly a squarer. Some of the architectures presented in this work are based on multiplier and squarer architectures developed as part of this dissertation: the GF(2^m) least significant bit super-serial multiplier architecture, the GF(2^m) most significant bit super-serial multiplier architecture, and a new GF(p) Montgomery multiplier architecture, together with a new squaring architecture for GF(2^m).
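The point multiplication these processors accelerate is, at its core, the double-and-add loop sketched below over a toy affine prime-field curve. The curve parameters are assumed for illustration only and are not the GF(2^167) or GF(2^192 - 2^64 - 1) fields used in the prototypes.

```python
# Minimal double-and-add point multiplication over a toy prime-field curve
# y^2 = x^3 + ax + b (mod p), in affine coordinates. This is the operation an
# elliptic curve processor accelerates; the curve below is a small example,
# not one of the curves prototyped in the dissertation.

p, a, b = 97, 2, 3          # toy curve parameters (assumed for illustration)
O = None                    # point at infinity

def ec_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                  # P + (-P) = O
    if P == Q:                                    # point doubling
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                         # point addition
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def point_mul(k, P):
    """Left-to-right double-and-add: computes k*P."""
    R = O
    for bit in bin(k)[2:]:
        R = ec_add(R, R)                          # double
        if bit == "1":
            R = ec_add(R, P)                      # add
    return R

P = (3, 6)                  # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
print(point_mul(20, P))
```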
344

Secure Computation Towards Practical Applications

Krell Loy, Fernando January 2016
Secure multi-party computation (MPC) is a central area of research in cryptography. Its goal is to allow a set of players to jointly compute a function on their inputs while protecting and preserving the privacy of each player's input. Motivated by the enormous growth of available data and the rising global privacy concerns around entities using this data, we study the feasibility of using secure computation techniques on large-scale data sets to address these concerns. An important limitation of generic secure computation protocols is that they require at least linear time complexity, which seems to rule out applications involving large amounts of data. On the other hand, specific applications may have particular properties that allow for ad-hoc secure protocols overcoming the linear-time barrier. In addition, in some settings the full level of security guaranteed by MPC protocols may not be required, and some controlled amount of privacy leakage can be acceptable. Towards this end, we first take a theoretical point of view and study whether sublinear-time RAM programs can be computed securely with sublinear time complexity in the two-party setting. We then take a more practical approach and study the specific scenario of private database querying, where both the server's data and the client's query need to be protected. In this last setting we provide two private database management systems achieving different levels of efficiency, functionality, and security. These three results provide an overview of this three-dimensional trade-off space. For the above systems, we describe formal security definitions and establish mathematical proofs of security. We also take a practical approach, providing an implementation of the systems and an experimental analysis of their efficiency.
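As a flavour of the MPC goal described above (jointly computing a function while keeping each input private), the following toy additive secret-sharing sum is a minimal sketch. It is not one of the dissertation's sublinear-time or private database protocols, and the modulus and inputs are assumptions chosen for the example.

```python
# Toy illustration of the MPC goal: three players learn the sum of their inputs
# without revealing the inputs themselves, via additive secret sharing modulo a
# public prime. This sketches the flavour of secure computation only.

import secrets

P = 2**61 - 1                           # public modulus (assumed for the example)

def share(x, n):
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

inputs = [12, 7, 30]                    # each player's private input
n = len(inputs)

# Each player i shares its input; player j keeps the j-th share of every input.
dealt = [share(x, n) for x in inputs]
held = [[dealt[i][j] for i in range(n)] for j in range(n)]

# Each player locally sums the shares it holds, then the partial sums are combined.
partial = [sum(col) % P for col in held]
print(sum(partial) % P)                 # 49, yet no single player saw another's input
```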
345

Understanding Flaws in the Deployment and Implementation of Web Encryption

Sivakorn, Suphannee January 2018
In recent years, the web has switched from using the unencrypted HTTP protocol to using encrypted communications. Primarily, this resulted in increasing deployment of TLS to mitigate information leakage over the network. This development has led many web service operators to mistakenly think that migrating from HTTP to HTTPS will magically protect them from information leakage without any additional effort on their end to guarantee the desired security properties. In reality, despite the fact that there exists enough infrastructure in place and the protocols have been “tested” (by virtue of being in wide, but not ubiquitous, use for many years), deploying HTTPS is a highly challenging task due to the technical complexity of its underlying protocols (i.e., HTTP, TLS) as well as the complexity of the TLS certificate ecosystem and that of popular client applications such as web browsers. For example, we found that many websites still avoid ubiquitous encryption and force only critical functionality and sensitive data access over encrypted connections, while allowing more innocuous functionality to be accessed over HTTP. In practice, this approach is prone to flaws that can expose sensitive information or functionality to third parties. Thus, it is crucial for developers to verify the correctness of their deployments and implementations. In this dissertation, in an effort to improve users’ privacy, we highlight semantic flaws in the implementations of both web servers and clients, caused by the improper deployment of web encryption protocols. First, we conduct an in-depth assessment of major websites and explore what functionality and information is exposed to attackers that have hijacked a user’s HTTP cookies. We identify a recurring pattern across websites with partially deployed HTTPS, namely, that service personalization inadvertently results in the exposure of private information. The separation of functionality across multiple cookies with different scopes and inter-dependencies further complicates matters, as imprecise access control renders restricted account functionality accessible to non-secure cookies. Our cookie hijacking study reveals a number of severe flaws; for example, attackers can obtain the user’s saved address and visited websites from services such as Google, Bing, and Yahoo, and in some cases can extract the contact list and send emails from the user’s account. To estimate the extent of the threat, we run measurements on a university’s public wireless network for a period of 30 days and detect over 282K accounts exposing the cookies required for our hijacking attacks. Next, we explore and study security mechanisms proposed to eliminate this problem by enforcing encryption, such as HSTS and HTTPS Everywhere. We evaluate each mechanism in terms of its adoption and effectiveness. We find that all mechanisms suffer from implementation flaws or deployment issues and argue that, as long as servers continue to not support ubiquitous encryption across their entire domain, no mechanism can effectively protect users from cookie hijacking and information leakage. Finally, as the security guarantees of TLS (and in turn HTTPS) are critically dependent on the correct validation of X.509 server certificates, we study hostname verification, a critical component in the certificate validation process. We develop HVLearn, a novel testing framework to verify the correctness of hostname verification implementations, and use HVLearn to analyze a number of popular TLS libraries and applications. With HVLearn, we found 8 unique violations of the RFC specifications; several of these violations are critical and can render the affected implementations vulnerable to man-in-the-middle attacks.
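Since hostname verification is central to the last part of the abstract, the sketch below illustrates the kind of check HVLearn exercises: matching a hostname against a certificate's DNS identifiers with a single left-most wildcard label. It is a simplified illustration under assumed rules, not HVLearn itself and not a complete RFC 6125 implementation.

```python
# Simplified sketch of hostname verification: match a presented hostname
# against a certificate's DNS identifiers, allowing only a whole-label
# wildcard in the left-most position. Illustration only, not production code.

def label_matches(pattern_label: str, host_label: str) -> bool:
    return pattern_label == "*" or pattern_label.lower() == host_label.lower()

def hostname_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.rstrip(".").split(".")
    h_labels = hostname.rstrip(".").split(".")
    if len(p_labels) != len(h_labels):
        return False
    if "*" in p_labels[1:]:
        return False                     # wildcard only allowed in the left-most label
    if p_labels[0] == "*" and len(p_labels) < 3:
        return False                     # reject "*.com"-style wildcards
    return all(label_matches(p, h) for p, h in zip(p_labels, h_labels))

def verify_hostname(cert_dns_names: list, hostname: str) -> bool:
    return any(hostname_matches(name, hostname) for name in cert_dns_names)

print(verify_hostname(["*.example.com", "example.com"], "www.example.com"))  # True
print(verify_hostname(["*.example.com"], "a.b.example.com"))                 # False
```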
346

Achieving secure and efficient access control of personal health records in a storage cloud

Binbusayyis, Adel January 2017
A personal health record (PHR) contains health data about a patient, which is maintained by the patient. Patients may share their PHR data with a wide range of users, such as healthcare providers and researchers, through the use of a third party such as a cloud service provider. To protect the confidentiality of the data and to facilitate access by authorized users, patients use Attribute-Based Encryption (ABE) to encrypt the data before uploading it onto the cloud servers. With ABE, an access policy is defined based on users' attributes, such as a doctor in a particular hospital or a researcher in a particular university, and the encrypted data can be decrypted if and only if a user's attributes comply with the access policy attached to a data object. Our critical analysis of the related work in the literature shows that existing ABE-based access control frameworks used for sharing PHRs in a storage cloud can be enhanced in terms of scalability and security. With regard to scalability, most existing ABE-based access control frameworks rely on the use of a single attribute authority to manage all users, making the attribute authority a potential performance and security bottleneck. With regard to security, the existing ABE-based access control frameworks assume that all users have the same level of trust (i.e. they are equally trustworthy) and that all PHR data files have the same sensitivity level, so the same protection level is provided. However, in our analysis of the problem context, we have observed that this assumption may not always be valid. Some data, such as patients' personal details and certain diseases, is more sensitive than other data, such as anonymised data, and access to more sensitive data should be governed by more stringent access control measures. This thesis presents our work in rectifying the two limitations highlighted above. In doing so, we have made two novel contributions. The first is the design and evaluation of a Hierarchical Attribute-Based Encryption (HABE) framework for sharing PHRs in a storage cloud. The HABE framework can spread the key management overheads imposed on a single attribute authority tasked with the management of all the users across multiple attribute authorities. This is achieved by (1) classifying users into different groups (called domains) such as healthcare, education, etc., (2) making use of multiple attribute authorities in each domain, (3) structuring the multiple attribute authorities in each domain in a hierarchical manner, and (4) allowing each attribute authority to be responsible for managing particular users in a specific domain, e.g. a hospital or a university. The HABE framework has been analyzed and evaluated in terms of security and performance. The security analysis demonstrates that the HABE framework is resistant to a host of security attacks, including user collusion. The performance has been analyzed in terms of computational and communication overheads, and the results show that the HABE framework is more efficient and scalable than the most relevant comparable work. The second novel contribution is the design and evaluation of a Trust-Aware HABE (Trust+HABE) framework, an extension of the HABE framework that is also intended for sharing PHRs in a storage cloud. The Trust+HABE framework is designed to enhance security in terms of protecting access to sensitive PHR data while keeping the overhead costs as low as possible. The idea used here is that we classify PHR data into different groups, each with a distinctive sensitivity level. A user requesting data from a particular group (with a given sensitivity level) must demonstrate that his/her trust level is not lower than the data sensitivity level (i.e. trust value vs data sensitivity verification). A user's trust level is derived from a number of trust-affecting factors, such as his/her behaviour history and the type of authentication token used for identification. For accessing data at the highest sensitivity level, users are required to obtain special permission from the data owners (i.e. the patients who own the data), in addition to the trust value vs data sensitivity verification. In this way, the framework not only adapts its protection level (in imposing access control) in response to the data sensitivity levels, but also provides patients with more fine-grained access control over their PHR data. The Trust+HABE framework is also analysed and evaluated in terms of security and performance, and its performance results are compared against those of the HABE framework. The comparison shows that the additional computational, communication, and access-delay costs introduced by the trust-aware approach to access control in this context are not significant compared with those of the HABE framework.
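The trust value vs data sensitivity verification described above can be pictured with a short sketch. The sensitivity groups, trust-affecting factors, and thresholds below are illustrative assumptions rather than the Trust+HABE framework's actual parameters.

```python
# Toy sketch of the trust-vs-sensitivity gate: a request for a PHR data group
# succeeds only if the requester's trust level is at least the group's
# sensitivity level, and the most sensitive group additionally requires the
# data owner's explicit permission. Levels and factors are assumptions.

SENSITIVITY = {"anonymised": 1, "clinical-notes": 2, "identity-and-diagnosis": 3}
MAX_LEVEL = max(SENSITIVITY.values())

def trust_level(behaviour_score: float, strong_auth_token: bool) -> int:
    """Derive a coarse trust level (1-3) from illustrative trust-affecting factors."""
    level = 1
    if behaviour_score >= 0.7:
        level += 1
    if strong_auth_token:
        level += 1
    return min(level, MAX_LEVEL)

def access_allowed(group: str, behaviour_score: float,
                   strong_auth_token: bool, owner_permission: bool = False) -> bool:
    required = SENSITIVITY[group]
    if trust_level(behaviour_score, strong_auth_token) < required:
        return False
    if required == MAX_LEVEL and not owner_permission:
        return False                     # highest sensitivity also needs the patient's consent
    return True

print(access_allowed("clinical-notes", 0.8, strong_auth_token=False))              # True
print(access_allowed("identity-and-diagnosis", 0.9, True))                         # False
print(access_allowed("identity-and-diagnosis", 0.9, True, owner_permission=True))  # True
```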
347

An algebraic attack on block ciphers

Unknown Date
The aim of this work is to investigate an algebraic attack on block ciphers called Multiple Right Hand Sides (MRHS). MRHS models a block cipher as a system of n matrix equations S_i := A_i x = [L_i], where each L_i can be expressed as a set of its columns b_i1, ..., b_is_i. The set of solutions T_i of S_i is defined as the union of the solutions of A_i x = b_ij, and the set of solutions of the system S_1, ..., S_n is defined as the intersection of T_1, ..., T_n. Our main contribution is a hardware platform which implements a particular algorithm that solves MRHS systems (and hence block ciphers). The case is made that the platform performs several thousand orders of magnitude faster than software, that it costs less than US$1,000,000, and that actual times of block cipher breakage can be calculated once it is known how the corresponding software behaves. Options in MRHS are also explored with a view to increasing its efficiency. / by Kenneth Matheis. / Thesis (M.S.C.S.)--Florida Atlantic University, 2010. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2010. Mode of access: World Wide Web.
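The MRHS formulation in the abstract can be made concrete with a tiny brute-force example over GF(2). The matrices and right-hand sides below are invented, and exhaustive search stands in for the specialised solving algorithm that the hardware platform implements.

```python
# Toy brute-force illustration of MRHS: each equation S_i is A_i x = [L_i]
# over GF(2), its solution set T_i is the union of the solutions of
# A_i x = b_ij over the columns b_ij of L_i, and the system's solutions are
# the intersection of the T_i. Exhaustive search over GF(2)^n, illustration only.

from itertools import product

def mat_vec(A, x):
    """Matrix-vector product over GF(2)."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) % 2 for row in A)

def solutions(A, L, n):
    """T_i: all x in GF(2)^n with A x equal to some column of L."""
    rhs = {tuple(col) for col in L}
    return {x for x in product((0, 1), repeat=n) if mat_vec(A, x) in rhs}

n = 3
S1 = ([[1, 0, 1],
       [0, 1, 1]], [(1, 0), (0, 1)])     # A_1 and the columns of L_1
S2 = ([[1, 1, 0]], [(1,)])               # A_2 and the single column of L_2

T1 = solutions(*S1, n)
T2 = solutions(*S2, n)
print(sorted(T1 & T2))                   # solutions of the whole MRHS system
```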
348

Quantum cryptography and applications in the optical fiber network.

January 2005
Quantum cryptography, as part of quantum information and communications, can provide absolute security for information transmission because it is built on fundamental laws of quantum theory such as the uncertainty principle, the no-cloning theorem, and quantum entanglement. / In this thesis research, a novel scheme to implement quantum key distribution based on multiphoton entanglement with a new protocol is proposed. Its advantages are that a larger information capacity can be obtained over a longer transmission distance, and that the detection of multiple photons is easier than that of a single photon. The security of, and attacks on, such a system are also studied. / Next, quantum key distribution over wavelength division multiplexed (WDM) optical fiber networks is realized. Quantum key distribution in networks is a long-standing problem for practical applications. Here we combine quantum cryptography and WDM to solve this problem, because WDM technology is universally deployed in current and next-generation fiber networks; the ultimate target is to deploy quantum key distribution over commercial networks. The problems arising from the networks are also studied in this part. / Then quantum key distribution in multi-access networks using wavelength routing technology is investigated. For the first time, quantum cryptography for multiple individually targeted users has been successfully implemented, in sharp contrast to schemes using an indiscriminate broadcasting structure. This overcomes the shortcoming that every user in the network can acquire the quantum key signals intended to be exchanged between only two users. Furthermore, a more efficient scheme of quantum key distribution is adopted, resulting in a higher key rate. / Lastly, a quantum random number generator based on quantum optics has been experimentally demonstrated. This device is a key component for quantum key distribution, as it can create truly random numbers, an essential requirement for performing quantum key distribution. The new generator is composed of a single optical fiber coupler with fiber pigtails, which can be easily used in optical fiber communications. / Luo, Yuhui. / "January 2005." / Adviser: K. T. Chan. / Source: Dissertation Abstracts International, Volume: 67-01, Section: B, page: 0338. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references. / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
349

Asymmetric reversible parametric sequences approach to design a multi-key secure multimedia proxy: theory, design and implementation.

January 2003 (has links)
Yeung Siu Fung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 52-53). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Multi-Key Encryption Theory --- p.7 / Chapter 2.1 --- Reversible Parametric Sequence --- p.7 / Chapter 2.2 --- Implementation of ARPSf --- p.11 / Chapter 3 --- Multimedia Proxy: Architectures and Protocols --- p.16 / Chapter 3.1 --- Operations to Request and Cache Data from the Server --- p.16 / Chapter 3.2 --- Operations to Request Cached Data from the Multimedia Proxy --- p.18 / Chapter 3.3 --- Encryption Configuration Parameters (ECP) --- p.19 / Chapter 4 --- Extension to Multi-Level Proxy --- p.24 / Chapter 5 --- Secure Multimedia Library (SML) --- p.27 / Chapter 5.1 --- Proxy Pre-fetches and Caches Data --- p.27 / Chapter 5.2 --- Client Requests Cached Data From the Proxy --- p.29 / Chapter 6 --- Implementation Results --- p.31 / Chapter 7 --- Related Work --- p.40 / Chapter 8 --- Conclusion --- p.42 / Chapter A --- Function Prototypes of Secure Multimedia Library (SML) --- p.44 / Chapter A.1 --- CONNECTION AND AUTHENTICATION --- p.44 / Chapter A.1.1 --- Create SML Session --- p.44 / Chapter A.1.2 --- Public Key Manipulation --- p.44 / Chapter A.1.3 --- Authentication --- p.45 / Chapter A.1.4 --- Connect and Accept --- p.46 / Chapter A.1.5 --- Close Connection --- p.47 / Chapter A.2 --- SECURE DATA TRANSMISSION --- p.47 / Chapter A.2.1 --- Asymmetric Reversible Parametric Sequence and Encryption Configuration Parameters --- p.47 / Chapter A.2.2 --- Bulk Data Encryption and Decryption --- p.48 / Chapter A.2.3 --- Entire Data Encryption and Decryption --- p.49 / Chapter A.3 --- Secure Proxy Architecture --- p.49 / Chapter A.3.1 --- Proxy-Server Connection --- p.49 / Chapter A.3.2 --- ARPS and ECP --- p.49 / Chapter A.3.3 --- Initial Server Encryption --- p.50 / Chapter A.3.4 --- Proxy Re-Encryption --- p.51 / Chapter A.3.5 --- Client Decryption --- p.51 / Bibliography --- p.52
350

Criptografia RSA: a teoria dos números posta em prática / RSA encryption: number theory put into practice

Lana Priscila Souza 11 June 2015
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Since the advent of writing, sending secret messages has been an important way to maintain the confidentiality of sensitive information. The art of crafting messages with secret codes takes the form of cryptography, which over time has extended its services to commercial transactions carried out over the Internet. The main algorithm used on the Internet is called RSA. Thus, RSA encryption encodes credit card numbers, bank passwords, and account numbers, drawing on elements of an important area of mathematics: number theory.
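As a concrete reminder of the number theory the abstract refers to, here is textbook RSA with toy primes. Real deployments use primes hundreds of digits long together with padding, so this sketch is illustrative only.

```python
# Textbook RSA with toy primes, illustrating the number theory the abstract
# refers to (modular exponentiation, Euler's totient, modular inverses).
# Not secure: real RSA uses very large primes plus padding such as OAEP.

from math import gcd

p, q = 61, 53                  # two (toy) primes
n = p * q                      # modulus, part of both keys
phi = (p - 1) * (q - 1)        # Euler's totient of n

e = 17                         # public exponent, must satisfy gcd(e, phi) = 1
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: d = e^-1 mod phi

def encrypt(m: int) -> int:
    return pow(m, e, n)        # c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)        # m = c^d mod n

m = 65                         # a message encoded as an integer < n
c = encrypt(m)
print(c, decrypt(c))           # decrypt(c) recovers 65
```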
