91

An Investigation of Methods to Improve Area and Performance of Hardware Implementations of a Lattice Based Cryptosystem

Beckwith, Luke Parkhurst 05 November 2020 (has links)
With continuing research into quantum computing, current public key cryptographic algorithms such as RSA and ECC will become insecure. These algorithms are based on the difficulty of integer factorization or discrete logarithm problems, which are difficult to solve on classical computers but become easy with quantum computers. Because of this threat, government and industry are investigating new public key standards, based on mathematical assumptions that remain secure under quantum computing. This paper investigates methods of improving the area and performance of one of the proposed algorithms for key exchanges, "NewHope." We describe a pipelined FPGA implementation of NewHope512cpa, which dramatically increases the throughput for a similar design area. Our pipelined encryption implementation achieves 652.2 Mbps and a 0.088 Mbps/LUT throughput-to-area (TPA) ratio, which are the best known results to date, and achieves an energy efficiency of 0.94 nJ/bit. This represents TPA and energy efficiency improvements of 10.05× and 8.58×, respectively, over a non-pipelined approach. Additionally, we investigate replacing the large SHAKE XOF (hash) function with a lightweight Trivium-based PRNG, which reduces the area by 32% and improves energy efficiency by 30% for the pipelined encryption implementation, and which could be considered for future cipher specifications. / Master of Science / Cryptography is prevalent in almost every aspect of our lives. It is used to protect communication, banking information, and online transactions. Current cryptographic protections are built specifically upon public key encryption, which allows two people who have never communicated before to set up a secure communication channel. However, due to the nature of current cryptographic algorithms, the development of quantum computers will make it possible to break the algorithms that secure our communications. Because of this threat, new algorithms based on principles that stand up to quantum computing are being investigated to find a suitable alternative to secure our systems. These algorithms will need to be efficient in order to keep up with the demands of the ever-growing internet. This paper investigates four hardware implementations of a proposed quantum-secure algorithm to explore ways to make designs more efficient. The improvements are valuable for high-throughput applications, such as a server which must handle a large number of connections at once.
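To make the PRNG substitution concrete, the following is a minimal bit-level Trivium keystream generator in Python, following the published eSTREAM specification. It is only a software illustration of the kind of lightweight keystream source proposed as a SHAKE replacement, not the thesis's FPGA design, and the key/IV values are arbitrary placeholders.

```python
# Minimal bit-level Trivium keystream generator (eSTREAM specification).
# Illustrative software sketch only; the thesis targets an FPGA datapath.

def trivium_keystream(key_bits, iv_bits, nbits):
    """Yield `nbits` keystream bits from 80-bit key and IV bit lists."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    # 288-bit internal state, split into registers of 93, 84 and 111 bits.
    s = key_bits + [0] * 13 + iv_bits + [0] * 4 + [0] * 108 + [1, 1, 1]
    assert len(s) == 288

    def step():
        # Indices are 0-based versions of the specification's s1..s288.
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        z = t1 ^ t2 ^ t3
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift the three registers, each fed by another register's output.
        s[0:93] = [t3] + s[0:92]
        s[93:177] = [t1] + s[93:176]
        s[177:288] = [t2] + s[177:287]
        return z

    for _ in range(4 * 288):        # warm-up: 1152 blank rounds
        step()
    return [step() for _ in range(nbits)]

# Example: expand an arbitrary (placeholder) key/IV into 256 pseudorandom bits.
key = [1, 0] * 40
iv = [0, 1] * 40
bits = trivium_keystream(key, iv, 256)
print("".join(map(str, bits[:64])))
```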
92

Analyse d'accumulateurs d'entropie pour les générateurs aléatoires cryptographiques / Analysis of cryptographic random number generator and postprocessing

Julis, Guenaëlle de 18 December 2014 (has links)
Random numbers are used throughout cryptography (seeds, tokens, ...), and poor randomness generation can compromise the entire security of a protocol, as the news regularly demonstrates. Cryptographic random number generators are components made of three modules: a raw source that produces randomness (an algorithm or a physical phenomenon), a post-processing stage that corrects the defects of the source, and a cryptographic post-processing stage that produces the final output. This thesis focuses on the analysis of generators built on a physical source, with the aim of deriving post-processing schemes adapted to their properties and resistant to perturbations of their operating environment. Because the complexity of these devices often prevents the explicit formulation of a proven stochastic model, their evaluation relies mainly on statistical analysis. However, statistical tests, the main method recommended by government institutions (ANSSI, BSI, NIST) for certifying these components, can detect anomalies but cannot identify or characterize them. The work in this thesis structures the modelling of a randomness source, viewed as a sequence of random variables, refines the statistical tests, and adds a temporal analysis to detect and make explicit its anomalies at the global or local level. The results were implemented in a C library comprising a perturbation simulator, the statistical and temporal tools obtained, the recommended test batteries (FIPS, AIS31, TestU01, SP800), and post-processing schemes suited to certain anomalies. This framework made it possible to extract families of pattern anomalies whose properties leave some tests unable to distinguish the faulty source from an ideally random one. The analysis of the weaknesses inherent to statistical methods showed that interpreting a test through a rejection interval or a pass rate is not suited to detecting certain transition faults. It also allowed a study of entropy estimation methods, in particular the estimators proposed in the SP800-90 standard. Moreover, the specification parameters of certain generators, including one derived from the AES encryption standard, turned out to be distinguishable through the test statistics. The temporal tools developed evaluate the structure of the anomalies and their evolution over time, and analyse the deviant patterns in the neighbourhood of a given pattern. This made it possible, on the one hand, to apply the statistical tests with relevant parameters and, on the other hand, to propose post-processing schemes whose validity relies on the structure of the anomalies rather than on their amplitude. (Translated from the French abstract.)
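As a concrete illustration of the kind of statistical evaluation discussed above, the sketch below implements two elementary checks in Python: a monobit (frequency) test and a naive most-common-value min-entropy estimate in the spirit of the SP800-90 estimators. It is a simplified stand-in, not the thesis's C library, and the 1% significance level is the conventional choice.

```python
# Two elementary checks of the kind used to evaluate a raw randomness source:
# a monobit (frequency) test and a most-common-value min-entropy estimate.
# Simplified illustration only; real evaluations use full test batteries
# (FIPS, AIS31, TestU01, SP800) and proper confidence bounds.
import math
import random

def monobit_test(bits, alpha=0.01):
    """Return (p_value, passed) for the monobit frequency test."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    p_value = math.erfc(abs(s) / math.sqrt(2 * n))
    return p_value, p_value >= alpha

def min_entropy_mcv(symbols):
    """Naive most-common-value min-entropy estimate, in bits per symbol."""
    counts = {}
    for x in symbols:
        counts[x] = counts.get(x, 0) + 1
    p_max = max(counts.values()) / len(symbols)
    return -math.log2(p_max)

# Example on a software PRNG (a stand-in for a physical source under test).
bits = [random.getrandbits(1) for _ in range(100_000)]
print(monobit_test(bits))        # p-value well above 0.01 expected
print(min_entropy_mcv(bits))     # close to 1 bit per bit for a good source
```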
93

Believe it or not : examining the case for intuitive logic and effortful beliefs

Howarth, Stephanie January 2015 (has links)
The overall objective of this thesis was to test the Default Interventionist (DI) account of belief-bias in human reasoning using the novel methodology introduced by Handley, Newstead & Trippas (2011). DI accounts focus on how our prior beliefs are the intuitive output that biases our reasoning process (Evans, 2006), whilst judgments based on logical validity require effortful processing. However, recent research has suggested that reasoning on the basis of beliefs may not be as fast and automatic as previous accounts claim. In order to investigate whether belief-based judgments are resource-demanding, we instructed participants to reason on the basis of both the validity and believability of a conclusion whilst simultaneously engaging in a secondary task (Experiments 1–5). We used both a within- and a between-subjects design (Experiment 5), examining both simple and complex arguments (Experiments 4–9). We also analysed the effect of incorporating additional instructional conditions (Experiments 7–9) and tested the relationships between various individual differences (ID) measures under belief and logic instruction (Experiments 4, 5, 7, 8, and 9). In line with Handley et al.'s findings, we found that belief-based judgments were more prone to error and that the logical structure of a problem interfered with judging the believability of its conclusion, contrary to the DI account of reasoning. However, logical outputs sometimes took longer to complete and were more affected by random number generation (RNG) (Experiment 5). To reconcile these findings we examined the role of Working Memory (WM) and Inhibition in Experiments 7–9 and found, contrary to Experiment 5, that belief judgments were more demanding of executive resources and correlated with ID measures of WM and inhibition. Given that belief-based judgments resulted in more errors and were more strongly affected by the validity of an argument, the behavioural data do not fit with the DI account of reasoning. Consequently, we propose that there are two routes to a logical solution and present an alternative Parallel Competitive model to explain the data. We conjecture that when instructed to reason on the basis of belief, an automatic logical output completes and provides the reasoner with an intuitive logical cue that must be inhibited in order for the belief-based response to be generated. This creates a Type 1/Type 2 conflict, explaining the impact of logic on belief-based judgments. When explicitly instructed to reason logically, it takes deliberate Type 2 processing to arrive at the logical solution. The engagement in Type 2 processing in order to produce an explicit logical output is affected by demanding secondary tasks (RNG) and by any task that interferes with the integration of premise information (Experiments 8 and 9), leading to increased latencies. However, the relatively simple nature of the problems means that accuracy is less affected. We conclude that the type of instructions provided, the complexity of the problem, and the inhibitory demands of the task all play key roles in determining the difficulty and time course of logical and belief-based responses.
94

Turbo Code Performance Analysis Using Hardware Acceleration

Nordmark, Oskar January 2016 (has links)
The upcoming 5G mobile communications system promises to enable use cases requiring ultra-reliable and low-latency communications. Researchers therefore require more detailed information about aspects such as channel coding performance at very low block error rates. The simulations needed to obtain such results are very time-consuming, and this poses a challenge to studying the problem. This thesis investigates the use of hardware acceleration for performing fast simulations of turbo code performance. Special interest is taken in investigating different methods for generating normally distributed noise based on pseudorandom number generator algorithms executed on DSPs. A comparison is also made regarding how well different simulator program structures utilize the hardware. Results show that even a simple program for utilizing parallel DSPs can achieve good usage of hardware accelerators and enable fast simulations. It is also shown that, for the studied process, the bottleneck is the conversion of hard bits to soft bits with addition of normally distributed noise. It is indicated that methods for noise generation which do not adhere to a true normal distribution can further speed up this process and yet yield simulation quality comparable to methods adhering to a true Gaussian distribution. Overall, it is shown that the proposed use of hardware acceleration in combination with the DSP software simulator program can in a reasonable time frame generate results for turbo code performance at block error rates as low as 10⁻⁹.
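To illustrate the bottleneck step described above, the Python sketch below maps hard bits to BPSK symbols, adds Gaussian noise produced by the Box-Muller transform from a uniform pseudorandom generator, and forms log-likelihood ratios (soft bits). It is a plain software sketch under an assumed BPSK/AWGN model, not the DSP implementation studied in the thesis.

```python
# Hard bits -> BPSK symbols -> AWGN channel -> soft bits (LLRs).
# The Gaussian noise is produced with the Box-Muller transform driven by a
# uniform pseudorandom generator, mirroring the simulation step discussed above.
import math
import random

def gaussian_samples(n, rng=random.random):
    """Return n standard-normal samples via the Box-Muller transform."""
    out = []
    while len(out) < n:
        u1, u2 = rng(), rng()
        r = math.sqrt(-2.0 * math.log(u1 + 1e-300))  # guard against log(0)
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:n]

def hard_to_soft(bits, snr_db):
    """Map bits to BPSK, add AWGN for the given Es/N0 (dB), return LLRs."""
    n0 = 10.0 ** (-snr_db / 10.0)        # Es is normalised to 1
    sigma = math.sqrt(n0 / 2.0)
    noise = gaussian_samples(len(bits))
    llrs = []
    for b, z in zip(bits, noise):
        symbol = 1.0 - 2.0 * b           # 0 -> +1, 1 -> -1
        y = symbol + sigma * z
        llrs.append(2.0 * y / sigma**2)  # LLR for BPSK over AWGN
    return llrs

bits = [random.getrandbits(1) for _ in range(8)]
print(bits)
print([round(l, 2) for l in hard_to_soft(bits, snr_db=2.0)])
```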
95

Precificação de opções exóticas utilizando CUDA / Exotic options pricing using CUDA

Calderaro, Felipe Boteon 17 October 2017 (has links)
In the financial market, the pricing of complex contracts often relies on numerical simulation techniques. These pricing methods generally perform poorly due to the large computational cost involved, which makes analysis and decision-making difficult for the trader. The objective of this work is to present a high-performance tool for pricing financial instruments based on numerical simulations. The proposal is to build an efficient calculator for pricing multivariate options based on the Monte Carlo method, using the CUDA parallel programming platform. The mathematical concepts underlying risk-neutral pricing, in both the univariate and the multivariate context, are presented. We then detail the implementation of the Monte Carlo simulation and the architecture of the CUDA platform. Finally, we present the results obtained by comparing the execution times of the algorithms.
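To ground the pricing idea, the sketch below estimates the price of a plain European call by risk-neutral Monte Carlo under geometric Brownian motion and checks it against the Black-Scholes closed form. It is a minimal CPU/NumPy illustration under assumed Black-Scholes dynamics, not the multivariate CUDA calculator developed in the work.

```python
# Risk-neutral Monte Carlo pricing of a European call under geometric
# Brownian motion, with the Black-Scholes closed form as a sanity check.
# Minimal CPU sketch; the work described above runs the same idea on CUDA.
import math
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths=1_000_000, seed=42):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return math.exp(-r * t) * payoff.mean()

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price(s0, k, r, sigma, t):
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

print(mc_call_price(100, 105, 0.05, 0.2, 1.0))   # close to the closed form
print(bs_call_price(100, 105, 0.05, 0.2, 1.0))
```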
96

Automatic Random Variate Generation for Simulation Input

Hörmann, Wolfgang, Leydold, Josef January 2000 (has links) (PDF)
We develop and evaluate algorithms for generating random variates for simulation input. One group, called automatic or black-box algorithms, can be used to sample from distributions with a known density. They are based on the rejection principle. The hat function is generated automatically in a setup step using the idea of transformed density rejection. There the density is transformed into a concave function and the minimum of several tangents is used to construct the hat function. The resulting algorithms are not too complicated and are quite fast. The principle is also applicable to random vectors. A second group of algorithms is presented that generate random variates directly from a given sample by implicitly estimating the unknown distribution. The best of these algorithms are based on the idea of naive resampling plus added noise. These algorithms can be interpreted as sampling from kernel density estimates. This method can also be applied to random vectors. There it can be interpreted as a mixture of naive resampling and sampling from the multi-normal distribution that has the same covariance matrix as the data. The algorithms described in this paper have been implemented in ANSI C in a library called UNURAN, which is available via anonymous ftp. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
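The second group of algorithms, naive resampling plus added noise, can be sketched in a few lines. The Python snippet below draws from a Gaussian kernel density estimate using Silverman's rule-of-thumb bandwidth; it only illustrates the idea and is not the UNURAN code, which is written in ANSI C.

```python
# "Naive resampling plus added noise": draw a data point at random and add
# kernel noise, i.e. sample from a Gaussian kernel density estimate.
# Illustrative sketch only; UNURAN implements this (and transformed density
# rejection) in ANSI C.
import numpy as np

def kde_resample(data, n_draws, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data, dtype=float)
    n = data.size
    # Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
    h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)
    picks = rng.choice(data, size=n_draws, replace=True)   # naive resampling
    return picks + h * rng.standard_normal(n_draws)        # added noise

observed = np.random.default_rng(0).gamma(shape=2.0, scale=1.5, size=500)
print(kde_resample(observed, 10))
```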
97

常用統計套裝軟體的U(0,1)亂數產生器之探討 / A Study of the U(0,1) Random Number Generators in Commonly Used Statistical Software Packages

張浩如, Chang, Hao-Ju Unknown Date (has links)
With the development and popularity of computers, more and more people in different fields use the results of computer simulation as a basis for decisions, and the generation of random numbers is one of the most important steps in any such simulation. Most users rely directly on the random number generators built into packaged software, yet the literature offers few detailed discussions of these built-in generators. The main purpose of this study is therefore to give a fairly complete introduction to, comparison of, and discussion of the built-in U(0,1) random number generators of five statistical packages in common use: SAS 6.12, SPSS 8.0, EXCEL 97, S-PLUS 2000, and MINITAB 12. In addition to evaluating these generators from three viewpoints, namely period length, statistical properties, and computational efficiency, we use the performance of the sample-mean Monte Carlo method in evaluating integrals as an application example of computer simulation.
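The sample-mean Monte Carlo integration used as the application example can be sketched as follows; the integrand and interval are arbitrary choices with a known answer, so the role of the underlying U(0,1) generator in the estimate is easy to see.

```python
# Sample-mean Monte Carlo integration: estimate the integral of g over [a, b]
# as (b - a) times the average of g at uniform random points.
# The integrand below is an arbitrary example with a known value (pi/4 from
# the quarter circle), so the estimate can be checked directly.
import math
import random

def sample_mean_mc(g, a, b, n, uniform=random.random):
    total = 0.0
    for _ in range(n):
        u = a + (b - a) * uniform()   # U(a, b) from the package's U(0,1)
        total += g(u)
    return (b - a) * total / n

quarter_circle = lambda x: math.sqrt(1.0 - x * x)
estimate = sample_mean_mc(quarter_circle, 0.0, 1.0, 200_000)
print(estimate, math.pi / 4)          # the two values should be close
```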
98

Automatic Sampling with the Ratio-of-uniforms Method

Leydold, Josef January 1999 (has links) (PDF)
Applying the ratio-of-uniforms method for generating random variates results in very efficient, fast and easy-to-implement algorithms. However, parameters for every particular type of density must be precalculated analytically. In this paper we show that the ratio-of-uniforms method is also useful for the design of a black-box algorithm suitable for a large class of distributions, including all distributions with log-concave densities. Using polygonal envelopes and squeezes results in an algorithm that is extremely fast. In contrast to any other ratio-of-uniforms algorithm, the expected number of uniform random numbers is less than two. Furthermore, we show that this method is in some sense equivalent to transformed density rejection. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
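For reference, the sketch below shows the textbook (non-automatic) ratio-of-uniforms sampler for the standard normal: draw (u, v) uniformly from a bounding rectangle and accept when v^2 <= -4 u^2 ln u. It illustrates only the starting point of the paper; the polygonal envelopes and squeezes that make the automatic algorithm fast are not reproduced.

```python
# Textbook ratio-of-uniforms sampler for the standard normal distribution:
# accept (u, v) drawn uniformly from a bounding rectangle when
# v**2 <= -4 * u**2 * ln(u); then v/u is N(0, 1) distributed.
# Plain rejection version, without the polygonal envelopes and squeezes
# that give the automatic algorithm its speed.
import math
import random

def rou_normal(rng=random.random):
    b = math.sqrt(2.0 / math.e)              # sup of |x| * sqrt(f(x))
    while True:
        u = rng()
        v = (2.0 * rng() - 1.0) * b
        if u > 0.0 and v * v <= -4.0 * u * u * math.log(u):
            return v / u

print([round(rou_normal(), 4) for _ in range(5)])
```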
99

Progresses In Parallel Random Number Generators

Kasikara, Gulin 01 September 2005 (has links) (PDF)
Monte Carlo simulations are embarrassingly parallel in nature, so having a parallel and efficient random number generator becomes crucial. To have a parallel generator with uncorrelated processors, parallelization methods are implemented together with a binary tree mapping. Although this method has considerable advantages, the constraints arising from the binary tree structure lead to a situation described as the problem of falling off the tree. In this thesis, a new spawning method based on binary tree traversal and new spawn processor appointment is proposed for use when the falling-off-the-tree problem is encountered. With this method, the spawning operation becomes more costly, but the independence of the parallel processors is guaranteed. In Monte Carlo simulations, random number generation time should be negligible compared with the execution time of the whole simulation. That is why linear congruential generators with Mersenne prime moduli are used. In highly branching Monte Carlo simulations, the cost of parameterization also gains importance, and it becomes reasonable to consider other types of primes or other parallelization methods that provide a different balance between parameterization cost and random number generation cost. With this idea in mind, this thesis proposes two approaches for improving the performance of linear congruential generators. The first is to use Sophie-Germain primes as moduli, and the second is to use a hybrid method combining parameterization and splitting techniques. The performance consequences of Sophie-Germain primes relative to Mersenne primes are shown graphically. It is observed that in some cases the proposed approaches perform better.
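As a simplified picture of the two building blocks combined in the hybrid scheme, the sketch below runs a small multiplicative congruential generator modulo the Mersenne prime 2^31 - 1 and splits it into parallel streams by leapfrogging with the jump multiplier a^N mod m (parameterization would instead assign different full-period multipliers to different processors). The constants are the classical "minimal standard" ones and are used only for illustration; they are not the parameters studied in the thesis.

```python
# A multiplicative LCG modulo the Mersenne prime 2**31 - 1 ("minimal standard"
# constants), plus leapfrog splitting: stream k of N starts at the (k+1)-th
# value and then advances with the jump multiplier a**N mod m, so the N
# streams interleave to the original sequence without overlapping.
# Simplified illustration of parameterization/splitting, not the thesis code.

M = 2**31 - 1          # Mersenne prime modulus
A = 16807              # full-period multiplier for this modulus

def lcg_stream(seed, multiplier=A, modulus=M):
    x = seed % modulus
    while True:
        x = (multiplier * x) % modulus
        yield x / modulus                      # normalise to (0, 1)

def leapfrog_stream(seed, k, n_streams, modulus=M):
    """Stream k (0-based) of n_streams obtained by splitting one LCG."""
    jump = pow(A, n_streams, modulus)          # a**N mod m
    x = seed % modulus
    for _ in range(k + 1):                     # advance to this stream's start
        x = (A * x) % modulus
    while True:
        yield x / modulus
        x = (jump * x) % modulus

base = lcg_stream(12345)
s0 = leapfrog_stream(12345, 0, 4)
s1 = leapfrog_stream(12345, 1, 4)
print([round(next(base), 6) for _ in range(4)])   # x1, x2, x3, x4
print(round(next(s0), 6), round(next(s1), 6))     # x1 and x2 again
```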
100

Device-independent randomness generation from several Bell estimators

Nieto-Silleras, Olmo 04 June 2018 (has links)
The device-independent (DI) framework is a novel approach to quantum information science that exploits the nonlocality of quantum physics to certify the correct functioning of a quantum information processing task without relying on any assumptions about the inner workings of the devices performing the task. This thesis focuses on the device-independent certification and generation of true randomness for cryptographic applications. The existence of such true randomness relies on a fundamental relation between the random character of quantum theory and its nonlocality, which arises in the context of Bell tests. Device-independent randomness generation (DIRG) and quantum key distribution (DIQKD) protocols usually evaluate the produced randomness (as measured by the conditional min-entropy) as a function of the violation of a given Bell inequality. However, the probabilities characterising the measurement outcomes of a Bell test are richer than the degree of violation of a single Bell inequality. In this work we show that a more accurate assessment of the randomness present in nonlocal correlations can be obtained if the value of several Bell expressions is simultaneously taken into account, or if the full set of probabilities characterising the behaviour of the device is considered. As a side result, we show that to every behaviour there corresponds an optimal Bell expression that allows certifying the maximal amount of DI randomness present in the correlations. Based on these results, we introduce a family of DIRG protocols, secure against classical side information, that rely on the estimation of an arbitrary number of Bell expressions, or even directly on the experimental frequencies of the measurement outcomes. The family of protocols we propose also allows for the evaluation of randomness from a subset of measurement settings, which can be advantageous when considering correlations for which some measurement settings result in more randomness than others. We provide numerical examples illustrating the advantage of this method for finite data, and show that asymptotically it results in an optimal generation of randomness from experimental data without having to assume beforehand that the devices violate a specific Bell inequality. / Doctorat en Sciences
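For comparison with the multi-expression estimators proposed in the thesis, the sketch below computes the single-inequality baseline: the CHSH value S of a measured behaviour and the standard analytic bound H_min >= 1 - log2(1 + sqrt(2 - S^2/4)) per output bit certified from the CHSH violation alone (the bound of Pironio and co-workers). The example behaviour is the ideal Tsirelson-point correlation; experimental data would be noisier.

```python
# CHSH value of a behaviour p(a, b | x, y) and the standard single-inequality
# min-entropy bound H_min >= 1 - log2(1 + sqrt(2 - S**2 / 4)) per output bit,
# used here as the baseline that multi-expression estimators improve upon.
import math

def chsh_value(p):
    """p[(a, b, x, y)] = probability of outcomes a, b for settings x, y."""
    s = 0.0
    for x in (0, 1):
        for y in (0, 1):
            corr = sum((-1) ** (a ^ b) * p[(a, b, x, y)]
                       for a in (0, 1) for b in (0, 1))
            s += ((-1) ** (x * y)) * corr
    return s

def min_entropy_bound(s):
    s = min(abs(s), 2 * math.sqrt(2))          # clip at the Tsirelson bound
    return max(0.0, 1 - math.log2(1 + math.sqrt(max(2 - s * s / 4, 0.0))))

# Ideal Tsirelson-point behaviour: E(x, y) = (-1)**(x*y) / sqrt(2).
behaviour = {(a, b, x, y):
             (1 + (-1) ** (a ^ b) * (-1) ** (x * y) / math.sqrt(2)) / 4
             for a in (0, 1) for b in (0, 1) for x in (0, 1) for y in (0, 1)}

s = chsh_value(behaviour)
print(s, min_entropy_bound(s))   # ~2.828 and ~1 bit of certified randomness
```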
