11

Card-Shuffling Analysis with Weighted Rank Distance

Wu, Kung-sheng 24 June 2007 (has links)
In this paper, we use two weighted rank distances (the Wilcoxon rank and the log rank) to analyze how many times a deck of 52 cards must be shuffled to become sufficiently randomized. Bayer and Diaconis (1992) used the variation distance as a measure of randomness to analyze card shuffling. Lin (2006) used the deviation distance to analyze card shuffling without complicated mathematical formulas. We provide two new ideas for measuring distance in card-shuffling analysis.
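The variation-distance benchmark of Bayer and Diaconis (1992) that this abstract refers to can be computed exactly. The sketch below is our own illustration (not code from the paper), using their result that after k riffle shuffles a permutation of n cards with r rising sequences has probability C(2^k + n - r, n) / 2^(nk):

```python
from fractions import Fraction
from math import comb, factorial

def eulerian_numbers(n):
    """A[r] = number of permutations of {1..n} with exactly r rising sequences."""
    A = [0, 1]                      # n = 1: one permutation, one rising sequence
    for m in range(2, n + 1):
        A = A + [0]                 # pad so A[m] is defined
        A = [0] + [r * A[r] + (m - r + 1) * A[r - 1] for r in range(1, m + 1)]
    return A

def riffle_tv(n, k):
    """Exact total variation distance to uniform after k GSR riffle shuffles."""
    A = eulerian_numbers(n)
    uniform = Fraction(1, factorial(n))
    total = Fraction(0)
    for r in range(1, n + 1):
        # Bayer-Diaconis probability of any fixed permutation with r rising sequences
        p = Fraction(comb(2**k + n - r, n), 2**(n * k))
        total += A[r] * abs(p - uniform)
    return total / 2

for k in (4, 6, 7, 8, 10):
    print(k, float(riffle_tv(52, k)))
```

For a 52-card deck this reproduces the well-known table: the distance stays near 1 through four shuffles, then drops to about 0.334 at k = 7, the origin of the "seven shuffles suffice" rule of thumb.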
12

Statistical Analysis Of Block Ciphers And Hash Functions

Sulak, Fatih 01 February 2011 (has links) (PDF)
One of the most basic properties expected from block ciphers and hash functions is passing statistical randomness testing, as they are supposed to behave like random mappings. Previously, testing of AES candidate block ciphers was done using the statistical tests defined in the NIST Test Suite. As some of the tests in this suite require long sequences, data sets were formed by concatenating the outputs of the algorithms obtained from various input types. However, the nature of block cipher and hash function algorithms necessitates tests and test parameters focused particularly on short sequences. We therefore propose a package of statistical randomness tests which produce reliable results for short sequences and test the outputs of the algorithms directly rather than their concatenations. Moreover, we propose an alternative method for evaluating the test results and state the computations of the related probabilities required by the new method. We also propose another package of statistical tests designed on the basis of certain cryptographic properties of block ciphers and hash functions, namely cryptographic randomness testing. The packages are applied to the AES finalists and produce more precise results than those obtained in similar applications. The packages are also applied to the SHA-3 second-round candidate algorithms.
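As a concrete illustration of the statistical randomness testing discussed in this abstract, the simplest test in the NIST suite, the frequency (monobit) test, reduces to a few lines. This is our own sketch of that standard test, not the thesis's proposed short-sequence package:

```python
from math import erfc, sqrt

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the hypothesis
    that ones and zeros occur with equal probability in the sequence."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)    # +1 for each one, -1 for each zero
    return erfc(abs(s) / sqrt(n) / sqrt(2))

# A perfectly balanced sequence gives p = 1.0; a constant sequence fails
# at the usual significance level (p < 0.01).
print(monobit_p_value([0, 1] * 64))
print(monobit_p_value([1] * 128))
```

A sequence is deemed non-random when the p-value falls below the chosen significance level, commonly 0.01 in the NIST suite.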
13

Promestra Security compared with other random number generators

Korsbakke, Andreas, Ringsell, Robin January 2019 (has links)
Background. Being able to trust cryptographic algorithms is a crucial part of society today, because of all the information gathered by companies all over the world. With this thesis, we want to help both Promestra AB and potential future customers evaluate whether its random number generator can be trusted. Objectives. The main objective of the study is to compare the random number generator in Promestra security with others, using the test suite made by the National Institute of Standards and Technology. The comparison is made against random number generators such as Mersenne Twister, Blum Blum Shub, and more. Methods. The selected method was to gather a total of 100 million bits from each random number generator and run them through the National Institute of Standards and Technology test suite in 100 iterations to get a fair evaluation of the algorithms. The test suite provides a statistical summary, which was then analyzed. Results. The results show how many iterations out of 100 passed, as well as the distribution of the outcomes. Some of the tested random number generators clearly struggle in many of the tests, while half of the tested generators passed all of them. Conclusions. Promestra security and Blum Blum Shub come close to passing all the tests, but in the end they cannot be considered preferable random number generators. The five that passed and show no clear limitations are: Random.org, Micali-Schnorr, Linear Congruential, CryptGenRandom, and Mersenne Twister.
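For reference, Blum Blum Shub, one of the generators compared in this thesis, can be sketched in a few lines. The parameters below are toy values chosen for illustration only; real use requires large secret primes:

```python
def blum_blum_shub(p, q, seed, n_bits):
    """Blum Blum Shub: iterate x_{i+1} = x_i^2 mod M with M = p*q, where
    p and q are primes congruent to 3 mod 4; each step emits the least
    significant bit of the state."""
    assert p % 4 == 3 and q % 4 == 3
    m = p * q
    x = seed % m
    out = []
    for _ in range(n_bits):
        x = (x * x) % m
        out.append(x & 1)
    return out

# Toy parameters -- far too small for cryptographic use.
print(blum_blum_shub(11, 23, 3, 8))
```

The generator's security rests on the hardness of factoring M; with the tiny modulus above the sequence is trivially predictable, which is exactly the kind of weakness statistical testing alone cannot reveal.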
14

Improve the Convergence Speed and Stability of Generative Adversarial Networks

Zou, Xiaozhou 26 April 2018 (has links)
In this thesis, we address two major problems in Generative Adversarial Networks (GAN), an important sub-field in deep learning. The first problem that we address is the instability in the training process that happens in many real-world problems and the second problem that we address is the lack of a good evaluation metric for the performance of GAN algorithms. To understand and address the first problem, three approaches are developed. Namely, we introduce randomness to the training process; we investigate various normalization methods; most importantly we develop a better parameter initialization strategy to help stabilize training. In the randomness techniques part of the thesis, we developed two randomness approaches, namely the addition of gradient noise and the batch random flipping of the results from the discrimination section of a GAN. In the normalization part of the thesis, we compared the performances of the z-score transform, the min-max normalization, affine transformations and batch normalization. In the most novel and important part of this thesis, we developed techniques to initialize the GAN generator section with parameters that can produce a uniform distribution on the range of the training data. As far as we are aware, this seemingly simple idea has not yet appeared in the extant literature, and the empirical results we obtain on 2-dimensional synthetic data show marked improvement. As for better evaluation metrics, we demonstrate a simple yet effective way to evaluate the effectiveness of the generator using a novel "overlap loss".
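The two randomness techniques named in this abstract can be sketched as follows. This is our own minimal NumPy illustration; the parameter names `flip_prob` and `noise_std` are hypothetical, not the thesis's:

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_labels(labels, flip_prob):
    """Batch random flipping: invert each real/fake label fed to the
    discriminator with probability flip_prob, injecting randomness
    into the training signal."""
    mask = rng.random(labels.shape) < flip_prob
    return np.where(mask, 1 - labels, labels)

def noisy_gradient(grad, noise_std):
    """Gradient noise: add zero-mean Gaussian noise to a gradient update."""
    return grad + rng.normal(0.0, noise_std, size=grad.shape)

labels = np.ones(8)                      # a batch of "real" labels
print(flip_labels(labels, 0.25))         # each entry flipped with probability 0.25
print(noisy_gradient(np.zeros(4), 0.1))  # a small random perturbation
```

Both techniques act as regularizers: label flipping keeps the discriminator from becoming over-confident, and gradient noise can help the optimizer escape the oscillatory dynamics that destabilize GAN training.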
16

Distributed computing and cryptography with general weak random sources

Li, Xin, Ph. D. 14 August 2015 (has links)
The use of randomness in computer science is ubiquitous. Randomized protocols have turned out to be much more efficient than their deterministic counterparts. In addition, many problems in distributed computing and cryptography are impossible to solve without randomness. However, these applications typically require uniform random bits, while in practice almost all natural random phenomena are biased. Moreover, even originally uniform random bits can be damaged if an adversary learns some partial information about these bits. In this thesis, we study how to run randomized protocols in distributed computing and cryptography with imperfect randomness. We use the most general model for imperfect randomness where the weak random source is only required to have a certain amount of min-entropy. One important tool here is the randomness extractor. A randomness extractor is a function that takes as input one or more weak random sources, and outputs a distribution that is close to uniform in statistical distance. Randomness extractors are interesting in their own right and are closely related to many other problems in computer science. Giving efficient constructions of randomness extractors with optimal parameters is one of the major open problems in the area of pseudorandomness. We construct network extractor protocols that extract private random bits for parties in a communication network, assuming that they each start with an independent weak random source, and some parties are corrupted by an adversary who sees all communications in the network. These protocols imply fault-tolerant distributed computing protocols and secure multi-party computation protocols where only imperfect randomness is available. The probabilistic method shows that there exists an extractor for two independent sources with logarithmic min-entropy, while known constructions are far from achieving these parameters. 
In this thesis we construct extractors for two independent sources with any linear min-entropy, based on a computational assumption. We also construct the best known extractors for three independent sources and affine sources. Finally we study the problem of privacy amplification. In this model, two parties share a private weak random source and they wish to agree on a private uniform random string through communications in a channel controlled by an adversary, who has unlimited computational power and can change the messages in arbitrary ways. All previous results assume that the two parties have local uniform random bits. We show that this problem can be solved even if the two parties only have local weak random sources. We also improve previous results in various aspects by constructing the first explicit non-malleable extractor and giving protocols based on this extractor.
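As a concrete (if weak-parameter) instance of the two-source extractor object defined in this abstract, the classical construction of Chor and Goldreich outputs the GF(2) inner product of two independent weak sources. The sketch is ours, not a construction from the thesis:

```python
def inner_product_extractor(x, y):
    """Two-source extractor Ext(x, y) = <x, y> mod 2: one output bit that is
    statistically close to uniform when x and y are independent bit-vector
    sources, each with min-entropy above n/2 (Chor-Goldreich)."""
    assert len(x) == len(y)
    return sum(a & b for a, b in zip(x, y)) % 2

print(inner_product_extractor([1, 0, 1, 1], [1, 1, 0, 1]))  # 1 + 0 + 0 + 1 = 0 mod 2
```

The open problem the thesis addresses is doing this with far less entropy: the probabilistic method guarantees two-source extractors at logarithmic min-entropy, well below the n/2 barrier of this explicit construction.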
17

Two-Fold Role of Randomness: A Source of Both Long-Range Correlations and Ordinary Statistical Mechanics

Rocco, A. (Andrea) 12 1900 (has links)
The role of randomness as a generator of long range correlations and ordinary statistical mechanics is investigated in this Dissertation. The difficulties about the derivation of thermodynamics from mechanics are pointed out and the connection between the ordinary fluctuation-dissipation process and possible anomalous properties of statistical systems is highlighted.
18

Getting Mortdogged : How high-ranking players experience randomness in Teamfight Tactics (TFT)

Feig, Maximilian, Hagerman, Adrian January 2023 (has links)
This bachelor thesis seeks to find out what players’ experiences with randomness in Teamfight Tactics (TFT) are like, what makes these experiences good and bad, and what a game developer can learn from them. TFT and Auto-Battlers in general are a topic that has not seen much research despite their immense popularity since their inception. In this research, we interviewed TFT players using semi-structured qualitative interviews asking them about their experiences with randomness, which ones they particularly remember, like or dislike and what they think about the way the game uses randomness in general. Overall, we found that players liked the random aspects of the game and thought that these mostly increase their enjoyment of the game as they provide replayability. Players also noted that skill matters much more than randomness when determining the outcome of a game. The most important takeaway from this study was that players, despite the fact that the larger community often speaks negatively and complains about the randomness, ultimately consider randomness a positive aspect of the game. This also echoes previous research on how randomness can enhance the play experience. Giving players sufficient control over their situation by offering them meaningful choices in the face of randomness is the main way that this can be done.
19

Controlling Randomness: Using Procedural Generation to Influence Player Uncertainty in Video Games

Fort, Travis 01 May 2015 (has links)
As video games increase in complexity and length, the use of automatic, or procedural, content generation has become a popular way to reduce the stress on game designers. However, the usage of procedural generation has certain consequences; in many instances, what the computer generates is uncertain to the designer. The intent of this thesis is to demonstrate how procedural generation can be used to intentionally affect the embedded randomness of a game system, enabling game designers to influence the level of uncertainty a player experiences in a nuanced way. This control affords game designers direct control over complex problems like dynamic difficulty adjustment, pacing, or accessibility. Game design will be examined from the perspective of uncertainty and how procedural generation can be used to intentionally add or reduce uncertainty. Various procedural generation techniques will be discussed alongside the types of uncertainty, using case studies to demonstrate the principles in action.
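One way to realize the "controlled randomness" this abstract describes is to expose a designer-tunable parameter that interpolates between a fully authored layout and a fully random one. This is our own illustrative sketch; the `uncertainty` parameter and the item-placement setting are hypothetical, not from the thesis:

```python
import random

def place_items(authored_slots, all_slots, uncertainty, seed=None):
    """Place items with designer-controlled uncertainty: each item keeps its
    authored slot with probability (1 - uncertainty), otherwise it moves to
    a random slot. uncertainty=0.0 is fully authored; 1.0 is fully random."""
    rng = random.Random(seed)
    return [slot if rng.random() >= uncertainty else rng.choice(all_slots)
            for slot in authored_slots]

authored = ["entrance", "mid", "boss_room"]
slots = ["entrance", "mid", "boss_room", "side_room", "vault"]
print(place_items(authored, slots, uncertainty=0.0))  # always the authored layout
```

Tuning a single scalar like this gives the designer the nuanced control over player uncertainty described above: difficulty adjustment or pacing can raise or lower the parameter without rewriting the generator.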
20

Mechanical Behavior of Ceramics at High Temperatures: Constitutive Modeling and Numerical Implementation

Powers, Lynn Marie 09 June 2006 (has links)
No description available.
