21

A Comparative Study on Methods for Stochastic Number Generation

Shenoi, Sangeetha Chandra January 2017 (has links)
No description available.
22

Signature Files for Document Management

Abeysinghe, Ruvini Pradeepa 11 October 2001 (has links)
No description available.
23

A Statistical Evaluation of Algorithms for Independently Seeding Pseudo-Random Number Generators of Type Multiplicative Congruential (Lehmer-Class).

Stewart, Robert Grisham 14 August 2007 (has links)
To be effective, a linear congruential random number generator (LCG) should produce values that are (a) uniformly distributed on the unit interval (0,1) excluding endpoints and (b) substantially free of serial correlation. It has been found that many statistical methods produce inflated Type I error rates for correlated observations. Theoretically, independently seeding an LCG under the following conditions attenuates serial correlation: (a) simple random sampling of seeds, (b) non-replicate streams, (c) non-overlapping streams, and (d) non-adjoining streams. Accordingly, 4 algorithms (each satisfying at least 1 condition) were developed: (a) zero-leap, (b) fixed-leap, (c) scaled random-leap, and (d) unscaled random-leap. Note that the latter satisfied all 4 independent seeding conditions. To assess serial correlation, univariate and multivariate simulations were conducted at 3 equally spaced intervals for each algorithm (N=24) and measured using 3 randomness tests: (a) the serial correlation test, (b) the runs up test, and (c) the white noise test. A one-way balanced multivariate analysis of variance (MANOVA) was used to test 4 hypotheses: (a) omnibus, (b) contrast of unscaled vs. others, (c) contrast of scaled vs. others, and (d) contrast of fixed vs. others. The MANOVA assumptions of independence, normality, and homogeneity were satisfied. In sum, the seeding algorithms did not differ significantly from each other (omnibus hypothesis). For the contrast hypotheses, only the fixed-leap algorithm differed significantly from all other algorithms. Surprisingly, the scaled random-leap offered the least difference among the algorithms (theoretically this algorithm should have produced the second largest difference). Although not fully supported by the research design used in this study, it is thought that the unscaled random-leap algorithm is the best choice for independently seeding the multiplicative congruential random number generator. Accordingly, suggestions for further research are proposed.
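For illustration, here is a minimal Python sketch of fixed-leap seeding for a Lehmer-class multiplicative congruential generator. The Park–Miller parameters, the leap size, and the helper names are illustrative assumptions, not the generator or algorithms evaluated in the thesis.

```python
import random  # only needed if you want to pick the base seed at random

# A minimal sketch of fixed-leap seeding for a Lehmer-class multiplicative
# congruential generator (MCG). The Park-Miller "minimal standard" parameters
# below are chosen only for illustration; the study's own generator and leap
# sizes may differ.
M = 2**31 - 1        # modulus (prime)
A = 16807            # multiplier

def mcg_stream(seed):
    """Yield uniforms on (0, 1) from x_{n+1} = A * x_n mod M."""
    x = seed
    while True:
        x = (A * x) % M
        yield x / M

def leap(seed, k):
    """Advance a seed by k steps in one jump: seed * A**k mod M."""
    return (seed * pow(A, k, M)) % M

def fixed_leap_seeds(base_seed, n_streams, leap_size):
    """Fixed-leap seeding: stream i starts leap_size*i steps ahead of stream 0,
    so the streams are non-replicate and non-overlapping for leap_size draws."""
    return [leap(base_seed, i * leap_size) for i in range(n_streams)]

# Example: four independent streams, each guaranteed 10**6 non-overlapping draws.
seeds = fixed_leap_seeds(base_seed=12345, n_streams=4, leap_size=10**6)
streams = [mcg_stream(s) for s in seeds]
```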
24

Variants of Transformed Density Rejection and Correlation Induction

Leydold, Josef, Janka, Erich, Hörmann, Wolfgang January 2001 (has links) (PDF)
In this paper we present some variants of transformed density rejection (TDR) that provide more flexibility (including the possibility of halving the expected number of uniform random numbers) at the expense of slightly higher memory requirements. Using a synchronized first stream of uniform variates and a second auxiliary stream (as suggested by Schmeiser and Kachitvichyanukul (1990)), TDR is well suited for correlation induction. Thus high positive or negative correlation can be induced between two streams of random variates with the same or different distributions. The software can be downloaded from the UNURAN project page. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
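The correlation-induction idea can be illustrated with a much simpler inversion-based sketch (not TDR itself): driving two inverse CDFs from one synchronized uniform stream induces positive correlation, while feeding one of them 1-U instead of U induces negative correlation. The function names and example distributions below are illustrative assumptions.

```python
import math
import random
from statistics import NormalDist

def correlated_pairs(n, inv_cdf_1, inv_cdf_2, antithetic=False, seed=42):
    """Drive two inverse CDFs from one synchronized uniform stream.
    Reusing the same uniform for both variates gives high positive correlation;
    using U and 1-U (antithetic) gives negative correlation."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # clamp away from 0 and 1 so both inverse CDFs stay defined
        u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)
        v = 1.0 - u if antithetic else u
        out.append((inv_cdf_1(u), inv_cdf_2(v)))
    return out

std_normal = NormalDist().inv_cdf
exp1 = lambda u: -math.log(1.0 - u)          # Exp(1) by inversion

pos = correlated_pairs(10_000, std_normal, exp1)                    # positive correlation
neg = correlated_pairs(10_000, std_normal, exp1, antithetic=True)   # negative correlation
```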
25

Rejection-Inversion to Generate Variates from Monotone Discrete Distributions

Hörmann, Wolfgang, Derflinger, Gerhard January 1996 (has links) (PDF)
For discrete distributions, a variant of rejection from a continuous hat function is presented. The main advantage of the new method, called rejection-inversion, is that no extra uniform random number is required to decide between acceptance and rejection, which means that the expected number of uniform variates required is halved. Using rejection-inversion and a squeeze, a simple universal method for a large class of monotone discrete distributions is developed. It can be used to generate variates from the tails of most standard discrete distributions. Rejection-inversion applied to the Zipf (or zeta) distribution results in algorithms that are short and simple and at least twice as fast as the fastest methods suggested in the literature. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
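As an illustration, the following Python sketch applies rejection-inversion to a truncated Zipf (zeta) distribution on {1, ..., n}. It keeps the core idea (a single uniform is inverted through the integrated hat and then reused for the accept/reject decision) but omits the squeeze and the numerically careful formulation of the published algorithm, and it assumes an exponent s > 0 with s != 1.

```python
import random

def make_zipf_sampler(s, n, rng=random.random):
    """Simplified rejection-inversion sampler for Zipf(s) on {1, ..., n}.
    Assumes s > 0 and s != 1; no squeeze, no numerically stable helpers."""
    h = lambda x: x ** (-s)                                   # continuous hat
    H = lambda x: (x ** (1.0 - s) - 1.0) / (1.0 - s)          # antiderivative of h
    H_inv = lambda u: (1.0 + u * (1.0 - s)) ** (1.0 / (1.0 - s))

    H_x1 = H(1.5) - h(1.0)    # left boundary; reserves the full mass of k = 1
    H_n = H(n + 0.5)          # right boundary

    def sample():
        while True:
            u = H_n + rng() * (H_x1 - H_n)      # single uniform, reused below
            x = H_inv(u)                        # "inversion" step
            k = min(max(int(x + 0.5), 1), n)    # round to nearest integer in range
            if u >= H(k + 0.5) - h(k):          # "rejection" step, no second uniform
                return k

    return sample

# Example: draw five values from a truncated zeta distribution with exponent 2.
zipf = make_zipf_sampler(s=2.0, n=10**6)
print([zipf() for _ in range(5)])
```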
26

Universal Algorithms as an Alternative for Generating Non-Uniform Continuous Random Variates

Leydold, Josef, Hörmann, Wolfgang January 2000 (has links) (PDF)
This paper presents an overview of the most powerful universal methods. These are based on acceptance/rejection techniques in which the hat and squeeze functions are constructed automatically. Although originally motivated by the need to sample from non-standard distributions, these methods have advantages that make them attractive even for sampling from standard distributions, and they are thus an alternative to special generators tailored to particular distributions. Most importantly, the marginal generation time is fast and does not depend on the distribution, they can be used for variance-reduction techniques, and they produce random numbers of predictable quality. These algorithms are implemented in a library, called UNURAN, which is available by anonymous ftp. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
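A minimal sketch of the hat-and-squeeze principle behind such universal methods follows. Unlike UNURAN, which constructs the hat and squeeze automatically from the density, this sketch simply takes them as arguments; the example density and bounds are illustrative assumptions.

```python
import math
import random

def rejection_with_squeeze(density, hat_sample, hat_pdf, squeeze,
                           rng=random.random):
    """Generic acceptance/rejection with a squeeze.
    hat_pdf must dominate density on the support of hat_sample, and squeeze
    must bound density from below; the squeeze lets most proposals be
    accepted without evaluating the (possibly expensive) density."""
    while True:
        x = hat_sample()
        u = rng() * hat_pdf(x)
        if u <= squeeze(x):        # cheap acceptance: density never evaluated
            return x
        if u <= density(x):        # full test only when the squeeze fails
            return x

# Example: unnormalized half-normal density with an exponential hat and a
# simple Taylor-based squeeze (both chosen for illustration only).
f = lambda x: math.exp(-0.5 * x * x)
hat_pdf = lambda x: math.exp(0.5 - x)                  # dominates f on [0, inf)
hat_sample = lambda: -math.log(1.0 - random.random())  # Exp(1) proposal by inversion
squeeze = lambda x: max(0.0, 1.0 - 0.5 * x * x)        # lower bound on f

x = rejection_with_squeeze(f, hat_sample, hat_pdf, squeeze)
```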
27

Aspectos históricos e teóricos das loterias / Historical and theoretical aspects of lotteries

Freitas, Mateus Almeida de 01 October 2013 (has links)
Historical and Theoretical Aspects of Lotteries begins with the historical context of gambling. It gives examples of such games, such as the throw of a die or the machines found in casinos; presents Article 50 of Law 3,688 of October 3, 1941 (which defines what is considered gambling); recounts curious facts involving games, such as the construction of the Great Wall of China, begun around 221 BC and financed in part by a lottery; and discusses the beginning of lotteries in Brazil, which dates to the colonial period, more precisely to Minas Gerais. The work also traces the evolution of the lotteries from 1784 to the present day and covers two games offered by the Brazilian lotteries, recounting some of their history. The work takes a mathematical approach, with probability applications aimed at the universe of the federal lotteries, in particular two products offered by the Loterias Caixa: Mega-Sena and Quina. Using two methods for generating random numbers, we present some applications of generating random sequences to simulate betting results. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
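As a rough illustration of the kind of betting simulation described, the sketch below draws Mega-Sena contests (6 distinct numbers from 1 to 60, the publicly known rule) with Python's built-in generator standing in for the two generation methods discussed in the text; the ticket and helper names are hypothetical.

```python
import random

def mega_sena_draw(rng):
    """Draw 6 distinct numbers from 1..60, as in a Mega-Sena contest."""
    return set(rng.sample(range(1, 61), 6))

def simulate_bets(ticket, n_draws, seed=2013):
    """Count how many of n_draws simulated contests match 4, 5, or 6 numbers."""
    rng = random.Random(seed)
    hits = {4: 0, 5: 0, 6: 0}   # quadra, quina, sena
    for _ in range(n_draws):
        matched = len(ticket & mega_sena_draw(rng))
        if matched >= 4:
            hits[matched] += 1
    return hits

print(simulate_bets({3, 7, 21, 34, 48, 59}, n_draws=100_000))
```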
28

An Investigation of Methods to Improve Area and Performance of Hardware Implementations of a Lattice Based Cryptosystem

Beckwith, Luke Parkhurst 05 November 2020 (has links)
With continuing research into quantum computing, current public key cryptographic algorithms such as RSA and ECC will become insecure. These algorithms are based on the difficulty of integer factorization or discrete logarithm problems, which are difficult to solve on classical computers but become easy with quantum computers. Because of this threat, government and industry are investigating new public key standards, based on mathematical assumptions that remain secure under quantum computing. This paper investigates methods of improving the area and performance of one of the proposed algorithms for key exchanges, "NewHope." We describe a pipelined FPGA implementation of NewHope512cpa which dramatically increases the throughput for a similar design area. Our pipelined encryption implementation achieves 652.2 Mbps and a 0.088 Mbps/LUT throughput-to-area (TPA) ratio, which are the best known results to date, and achieves an energy efficiency of 0.94 nJ/bit. This represents TPA and energy efficiency improvements of 10.05× and 8.58×, respectively, over a non-pipelined approach. Additionally, we investigate replacing the large SHAKE XOF (hash) function with a lightweight Trivium based PRNG, which reduces the area by 32% and improves energy efficiency by 30% for the pipelined encryption implementation, and which could be considered for future cipher specifications. / Master of Science / Cryptography is prevalent in almost every aspect of our lives. It is used to protect communication, banking information, and online transactions. Current cryptographic protections are built specifically upon public key encryption, which allows two people who have never communicated before to setup a secure communication channel. However, due to the nature of current cryptographic algorithms, the development of quantum computers will make it possible to break the algorithms that secure our communications. Because of this threat, new algorithms based on principles that stand up to quantum computing are being investigated to find a suitable alternative to secure our systems. These algorithms will need to be efficient in order to keep up with the demands of the ever growing internet. This paper investigates four hardware implementations of a proposed quantum-secure algorithm to explore ways to make designs more efficient. The improvements are valuable for high throughput applications, such as a server which must handle a large number of connections at once.
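As a rough arithmetic cross-check of the figures quoted above (not taken from the thesis itself), the throughput-to-area ratio and the improvement factors imply approximate values for the design size and the non-pipelined baseline:

```python
# Rough arithmetic check of the quoted figures (illustrative only).
throughput_mbps = 652.2
tpa_mbps_per_lut = 0.088
energy_nj_per_bit = 0.94

implied_luts = throughput_mbps / tpa_mbps_per_lut     # roughly 7,400 LUTs
baseline_tpa = tpa_mbps_per_lut / 10.05               # implied non-pipelined TPA
baseline_energy = energy_nj_per_bit * 8.58            # implied non-pipelined nJ/bit

print(f"{implied_luts:.0f} LUTs, {baseline_tpa:.4f} Mbps/LUT, {baseline_energy:.2f} nJ/bit")
```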
29

Believe it or not : examining the case for intuitive logic and effortful beliefs

Howarth, Stephanie January 2015 (has links)
The overall objective of this thesis was to test the Default Interventionist (DI) account of belief-bias in human reasoning using the novel methodology introduced by Handley, Newstead & Trippas (2011). DI accounts hold that our prior beliefs are the intuitive output that biases our reasoning process (Evans, 2006), whilst judgments based on logical validity require effortful processing. However, recent research has suggested that reasoning on the basis of beliefs may not be as fast and automatic as previous accounts claim. To investigate whether belief-based judgments are resource demanding, we instructed participants to reason on the basis of both the validity and the believability of a conclusion whilst simultaneously engaging in a secondary task (Experiments 1-5). We used both within- and between-subjects designs (Experiment 5), examining both simple and complex arguments (Experiments 4-9). We also analysed the effect of incorporating additional instructional conditions (Experiments 7-9) and tested the relationships between various individual differences (ID) measures under belief and logic instruction (Experiments 4, 5, 7, 8, & 9). In line with Handley et al.'s findings, we found that belief-based judgments were more prone to error and that the logical structure of a problem interfered with judging the believability of its conclusion, contrary to the DI account of reasoning. However, logical outputs sometimes took longer to complete and were more affected by random number generation (RNG) (Experiment 5). To reconcile these findings we examined the role of Working Memory (WM) and Inhibition in Experiments 7-9 and found that, contrary to Experiment 5, belief judgments were more demanding of executive resources and correlated with ID measures of WM and inhibition. Given that belief-based judgments resulted in more errors and were more affected by the validity of an argument, the behavioural data do not fit the DI account of reasoning. Consequently, we propose that there are two routes to a logical solution and present an alternative Parallel Competitive model to explain the data. We conjecture that when instructed to reason on the basis of belief, an automatic logical output completes and provides the reasoner with an intuitive logical cue that must be inhibited before the belief-based response can be generated. This creates a Type 1/Type 2 conflict, explaining the impact of logic on belief-based judgments. When explicitly instructed to reason logically, it takes deliberate Type 2 processing to arrive at the logical solution. Engagement in Type 2 processing to produce an explicit logical output is disrupted by demanding secondary tasks (RNG) and by any task that interferes with the integration of premise information (Experiments 8 and 9), leading to increased latencies. However, the relatively simple nature of the problems means that accuracy is less affected. We conclude that the type of instructions provided, the complexity of the problem, and the inhibitory demands of the task all play key roles in determining the difficulty and time course of logical and belief-based responses.
30

Turbo Code Performance Analysis Using Hardware Acceleration

Nordmark, Oskar January 2016 (has links)
The upcoming 5G mobile communications system promises to enable use cases requiring ultra-reliable and low-latency communications. Researchers therefore require more detailed information about aspects such as channel coding performance at very low block error rates. The simulations needed to obtain such results are very time consuming, and this poses a challenge to studying the problem. This thesis investigates the use of hardware acceleration for performing fast simulations of turbo code performance. Special interest is taken in different methods for generating normally distributed noise based on pseudorandom number generator algorithms executed on DSPs. A comparison is also made of how well different simulator program structures utilize the hardware. Results show that even a simple program for utilizing parallel DSPs can achieve good usage of hardware accelerators and enable fast simulations. It is also shown that for the studied process the bottleneck is the conversion of hard bits to soft bits with the addition of normally distributed noise. It is indicated that noise-generation methods which do not adhere to a true normal distribution can further speed up this process and yet yield simulation quality comparable to methods adhering to a true Gaussian distribution. Overall, it is shown that the proposed use of hardware acceleration in combination with the DSP software simulator program can, in a reasonable time frame, generate results for turbo code performance at block error rates as low as 10⁻⁹.
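To make the noise-generation step concrete, here is a sketch (not the thesis code) of two ways to turn PRNG uniforms into normally distributed noise for soft bits: the exact Box-Muller transform and a cheaper central-limit approximation of the kind hinted at above, which does not follow a true normal distribution. The BPSK mapping and the Es/N0 SNR convention are illustrative assumptions.

```python
import math
import random

def box_muller(rng=random.random):
    """One standard-normal sample from two uniforms (Box-Muller, cosine branch)."""
    u1, u2 = rng(), rng()
    return math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)

def clt_approx(rng=random.random, k=12):
    """Approximate normal by summing k uniforms (central-limit shortcut).
    The sum has mean k/2 and variance k/12; shifting and scaling gives an
    approximately standard-normal value with truncated tails (cheap on a DSP,
    but not a true normal distribution)."""
    return (sum(rng() for _ in range(k)) - k / 2.0) / math.sqrt(k / 12.0)

def add_noise(hard_bits, snr_db, normal=box_muller):
    """Map hard bits {0,1} to BPSK symbols {-1,+1} and add Gaussian noise
    at the given Es/N0 (dB)."""
    sigma = math.sqrt(0.5 / 10.0 ** (snr_db / 10.0))
    return [(2 * b - 1) + sigma * normal() for b in hard_bits]

soft_exact = add_noise([1, 0, 1, 1, 0], snr_db=1.0)                     # Box-Muller
soft_cheap = add_noise([1, 0, 1, 1, 0], snr_db=1.0, normal=clt_approx)  # CLT shortcut
```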
