421 |
Generátor náhodných čísel / Random number generator / Zouhar, Petr January 2010 (has links)
The thesis deals with random numbers, their generation, and their use in cryptography. The introduction distinguishes true random number generators from pseudo-random number generators, and covers the common division of generators into software and hardware types, noting the advantages, disadvantages and application areas of each. We then describe examples of random and pseudo-random number generators, chiefly hardware generators based on physical phenomena such as the decay of radioactive material or atmospheric noise. The following part is devoted to the design of our own random number generator and a description of its functionality. The second half of the work turns to cryptography: we introduce the basic types of cryptographic systems, namely symmetric and asymmetric cryptosystems, together with a typical representative of each type and its properties. At the end of the work we return to our random number generator and verify the randomness of the generated numbers and of the resulting cryptograms.
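The software/hardware distinction above can be made concrete in a few lines. A minimal sketch (illustration only, not the generator designed in the thesis; the constants are the well-known "Numerical Recipes" LCG parameters): a linear congruential pseudo-random generator together with a crude monobit frequency check, the simplest kind of randomness verification.

```python
# A linear congruential generator (LCG) -- the classic *software*
# (pseudo-random) approach the abstract contrasts with hardware
# generators -- plus a monobit check that ones and zeros occur in
# roughly equal proportion across the output bits.

def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Return n pseudo-random 32-bit integers from an LCG."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

def monobit_fraction(values, bits=32):
    """Fraction of one-bits over all generated words; ~0.5 is healthy."""
    ones = sum(bin(v).count("1") for v in values)
    return ones / (len(values) * bits)

sample = lcg(seed=42, n=1000)
frac = monobit_fraction(sample)
```

A pseudo-random generator is fully reproducible from its seed, which is exactly why cryptographic applications prefer hardware entropy sources for key material.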
|
422 |
Simulation of Weakly Correlated Functions and its Application to Random Surfaces and Random Polynomials / Fellenberg, Benno, Scheidt, Jürgen vom, Richter, Matthias 30 October 1998 (has links)
The paper is dedicated to the modeling and the
simulation of random processes and fields.
Using the concept and the theory of weakly
correlated functions a consistent representation
of sufficiently smooth random processes
will be derived. Special applications will be
given with respect to the simulation of road
surfaces in vehicle dynamics and to the
confirmation of theoretical results with
respect to the zeros of random polynomials.
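A minimal sketch of the idea (an illustration under our own assumptions, not the paper's construction): a random process with a short correlation length can be simulated by smoothing white noise with a narrow moving-average kernel, so that values farther apart than the kernel width are nearly uncorrelated — the "weakly correlated" regime.

```python
import random

def correlated_process(n, width, seed=0):
    """White Gaussian noise smoothed by a moving average of the given
    width; samples farther apart than `width` are uncorrelated."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n + width)]
    return [sum(noise[i:i + width]) / width for i in range(n)]

def sample_autocorr(x, lag):
    """Biased sample autocorrelation at the given lag."""
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(len(x) - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

x = correlated_process(n=5000, width=10)
near = sample_autocorr(x, 1)    # inside the correlation length: large
far = sample_autocorr(x, 50)    # beyond it: close to zero
```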
|
423 |
Random Geometric Structures / Grygierek, Jens Jan 30 January 2020 (has links)
We construct and investigate random geometric structures that are based on a homogeneous Poisson point process.
We investigate the random Vietoris-Rips complex, constructed as the clique complex of the well-known Gilbert graph, as an infinite random simplicial complex, and prove that every realizable finite sub-complex occurs infinitely many times almost surely, both as an isolated complex and, in the percolation case, connected to the unique giant component. Similar results are derived for the Čech complex.
We derive limit theorems for the f-vector of the Vietoris-Rips complex on the unit cube centered at the origin and provide a central limit theorem and a Poisson limit theorem based on the model parameters.
Finally we investigate random polytopes that are given as convex hulls of a Poisson point process in a smooth convex body. We establish a central limit theorem for certain linear combinations of intrinsic volumes.
A multivariate limit theorem involving the sequence of intrinsic volumes and the number of i-dimensional faces is derived.
We derive the asymptotic normality of the oracle estimator of minimal variance for estimation of the volume of a convex body.
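The first object above can be sketched directly (illustration only, not the thesis's construction; uniform points in the unit square stand in for the homogeneous Poisson point process): the Vietoris-Rips complex at scale δ is the clique complex of the Gilbert graph, and the f-vector counts its simplices per dimension.

```python
import itertools, math, random

def gilbert_edges(points, delta):
    """Edges of the Gilbert graph: pairs of points at distance < delta."""
    return {(i, j) for i, j in itertools.combinations(range(len(points)), 2)
            if math.dist(points[i], points[j]) < delta}

def f_vector_012(points, delta):
    """First entries (f0, f1, f2) of the f-vector of the Vietoris-Rips
    complex: vertices, edges, and triangles of the clique complex."""
    edges = gilbert_edges(points, delta)
    f2 = sum(1 for t in itertools.combinations(range(len(points)), 3)
             if all(p in edges for p in itertools.combinations(t, 2)))
    return len(points), len(edges), f2

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(100)]
f0, f1, f2 = f_vector_012(pts, delta=0.15)
```

Enlarging δ can only add edges and triangles, which is the monotonicity underlying the limit theorems for the f-vector.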
|
424 |
On Truncations of Haar Distributed Random Matrices / Stewart, Kathryn Lockwood 23 May 2019 (has links)
No description available.
|
425 |
Goodness-of-Fit Tests For Dirichlet Distributions With Applications / Li, Yi 23 July 2015 (has links)
No description available.
|
426 |
A computer simulation study for comparing three methods of estimating variance components / Walsh, Thomas Richard January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
|
427 |
Active learning via Transduction in Regression Forests / Hansson, Kim, Hörlin, Erik January 2015 (has links)
Context. The amount of training data required to build accurate models is a common problem in machine learning. Active learning is a technique that tries to reduce the amount of required training data by making active choices of which training data holds the greatest value.
Objectives. This thesis aims to design, implement and evaluate the Random Forests algorithm combined with active learning, suitable for predictive tasks with real-valued outcomes where the amount of training data is small. Machine learning algorithms traditionally require large amounts of training data to create a general model, and training data is in many cases sparse and expensive or difficult to create.
Methods. The research methods used for this thesis are implementation and scientific experiment. An approach to active learning was implemented based on previous work for classification-type problems. The approach uses the Mahalanobis distance to perform active learning via transduction. Evaluation was done using several data sets where the decrease in prediction error was measured over several iterations. The results of the evaluation were then analyzed using nonparametric statistical testing.
Results. The statistical analysis of the evaluation results failed to detect a difference between our approach and a non-active-learning approach, even though the proposed algorithm showed irregular performance. The evaluation of our tree-based traversal method and of the Mahalanobis distance for transduction showed that these methods performed better than Euclidean distance and complete graph traversal.
Conclusions. We conclude that the proposed solution did not decrease the amount of required training data to a significant degree. However, the approach has potential, and future work could lead to a working active-learning solution. Further work is needed on key areas of the implementation, such as the choice of instances for active learning through transduction uncertainty, as well as the choice of method for going from a transduction model to an induction model.
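The Mahalanobis-based selection step can be sketched as follows (a toy illustration under our own assumptions, not the thesis's implementation): among unlabeled candidates, query the point farthest from the labeled pool in Mahalanobis distance, which, unlike the Euclidean distance it is compared against, accounts for correlation between features.

```python
def mean_cov(points):
    """Sample mean and 2x2 covariance of a list of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance from x to the pool, via the explicit
    inverse of the 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    y0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
    y1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
    return (dx[0] * y0 + dx[1] * y1) ** 0.5

# toy data: a small labeled pool and candidate unlabeled points
labeled = [(0.0, 0.0), (1.0, 0.1), (0.5, -0.1), (0.2, 0.05)]
unlabeled = [(0.4, 0.0), (3.0, 1.0), (0.6, 0.02)]
mean, cov = mean_cov(labeled)
# query the candidate the labeled pool explains worst
pick = max(unlabeled, key=lambda p: mahalanobis_2d(p, mean, cov))
```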
|
428 |
The art of forecasting – an analysis of predictive precision of machine learning models / Kalmár, Marcus, Nilsson, Joel January 2016 (has links)
Forecasting is used for decision making, and unreliable predictions can instill a false sense of confidence. Traditional time series modelling is a statistical art form rather than a science, and errors can occur due to limitations of human judgment. In minimizing the risk of falsely specifying a process, the practitioner can make use of machine learning models. In an effort to find out if there is a benefit in using models that require less human judgment, the machine learning models Random Forest and Neural Network have been used to model a VAR(1) time series. In addition, the classical time series models AR(1), AR(2), VAR(1) and VAR(2) have been used as a comparative foundation. The Random Forest and Neural Network are trained and ultimately the models are used to make predictions evaluated by RMSE. All models yield scattered forecast results except for the Random Forest, which steadily yields comparatively precise predictions. The study shows that there is a definitive benefit in using Random Forests to eliminate the risk of falsely specifying a process, and they do in fact provide better results than a correctly specified model.
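The evaluation setup can be sketched in a few lines (a toy version under our own assumptions, not the authors' code): simulate a VAR(1) process x_t = A x_{t-1} + e_t and score one-step forecasts by RMSE; any candidate model, random forest or classical, can be plugged into the same comparison. Here we compare the forecast using the true coefficient matrix against a naive "no change" forecast.

```python
import math, random

A = ((0.5, 0.1), (0.2, 0.4))  # assumed stable VAR(1) coefficient matrix

def step(x, rng):
    """One step of the VAR(1): x_t = A x_{t-1} + standard normal noise."""
    return (A[0][0] * x[0] + A[0][1] * x[1] + rng.gauss(0, 1),
            A[1][0] * x[0] + A[1][1] * x[1] + rng.gauss(0, 1))

def rmse(pairs):
    """Root mean squared error over (forecast, actual) 2-D pairs."""
    se = [(f[0] - a[0]) ** 2 + (f[1] - a[1]) ** 2 for f, a in pairs]
    return math.sqrt(sum(se) / (2 * len(se)))

rng = random.Random(7)
xs = [(0.0, 0.0)]
for _ in range(5000):
    xs.append(step(xs[-1], rng))

# forecasts with the true coefficients vs. a naive "no change" forecast
true_fc = [((A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]), y)
           for x, y in zip(xs[:-1], xs[1:])]
naive_fc = list(zip(xs[:-1], xs[1:]))
```

The true-coefficient forecast's RMSE approaches the noise standard deviation, the best any correctly specified model can achieve; fitted models are scored against that floor.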
|
429 |
Bias in Random Forest Variable Importance Measures: Illustrations, Sources and a Solution / Strobl, Carolin, Boulesteix, Anne-Laure, Zeileis, Achim, Hothorn, Torsten January 2006 (has links) (PDF)
Variable importance measures for random forests have been receiving increased attention as a means of variable selection in many classification tasks in bioinformatics and related scientific fields, for instance to select a subset of genetic markers relevant for the prediction of a certain disease. We show that random forest variable importance measures are a sensible means for variable selection in many applications, but are not reliable in situations where potential predictor variables vary in their scale level or their number of categories. This is particularly important in genomics and computational biology, where predictors often include variables of different types. Simulation studies are presented illustrating that, when random forest variable importance measures are used with data of varying types, the results are misleading because suboptimal predictor variables may be artificially preferred in variable selection. The two mechanisms underlying this deficiency are biased variable selection in the individual classification trees used to build the random forest on the one hand, and effects induced by bootstrap sampling with replacement on the other. We propose to employ an alternative implementation of random forests that provides unbiased variable selection in the individual classification trees. When this method is applied using subsampling without replacement, the resulting variable importance measures can be used reliably for variable selection even in situations where the potential predictor variables vary in their scale level or their number of categories. The usage of both random forest algorithms and their variable importance measures in the R system for statistical computing is illustrated and documented thoroughly in an application re-analysing data from a study on RNA editing. The suggested method can therefore be applied straightforwardly by scientists in bioinformatics research.
(author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
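The importance measure under discussion can be illustrated with a toy example (our own sketch, not the R implementation the paper documents): permutation importance is the drop in accuracy after shuffling one predictor, breaking its association with the response; a predictor the model ignores should score exactly zero.

```python
import random

rng = random.Random(3)
X = [(rng.random(), rng.random()) for _ in range(1000)]
y = [1 if x1 > 0.5 else 0 for x1, _ in X]   # response depends only on x1

def model(x):
    """A fixed decision stump that uses only the first predictor."""
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def perm_importance(X, y, col, rng):
    """Accuracy drop after shuffling column `col` of X."""
    shuffled = [x[col] for x in X]
    rng.shuffle(shuffled)
    Xp = [(s, x[1]) if col == 0 else (x[0], s)
          for x, s in zip(X, shuffled)]
    return accuracy(X, y) - accuracy(Xp, y)

imp_x1 = perm_importance(X, y, 0, rng)  # informative: large drop
imp_x2 = perm_importance(X, y, 1, rng)  # irrelevant: zero drop
```

The biases the paper analyses arise when the model itself (the tree-building and the bootstrap), not this scoring step, systematically favours predictors with more categories or finer scales.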
|
430 |
Two-phase behaviour in a sequence of random variables / Mutombo, Pierre Abraham Mulamba 03 1900 (has links)
Thesis (MSc)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: Buying and selling in financial markets are driven by demand. The demand can be quantified by the imbalance in the number of shares QB and QS transacted by buyers and sellers respectively over a given time interval Δt. The demand in an interval Δt is given by Ω(Δt) = QB − QS. The local noise intensity is given by Σ = ⟨|a_i q_i − ⟨a_i q_i⟩|⟩, where i = 1, ..., N labels the transactions in Δt, q_i is the number of shares traded in transaction i, a_i = ±1 denotes buyer-initiated and seller-initiated trades respectively, and ⟨···⟩ is the local expectation value computed from all the transactions during the interval Δt.
In a paper [1] based on data from the New York Stock Exchange Trade and Quote database during the period 1995-1996, Plerou, Gopikrishnan and Stanley reported that the analysis of the probability distribution P(Ω | Σ) of demand conditioned on the local noise intensity revealed the surprising existence of a critical threshold Σ_c. For Σ < Σ_c, the most probable value of demand is roughly zero; they interpreted this as an equilibrium phase in which neither buying nor selling predominates. For Σ > Σ_c, two most probable values emerge that are symmetric around zero demand, corresponding to excess demand and excess supply; they interpreted this as an out-of-equilibrium phase in which the market behaviour is buying for half of the time and selling for the other half.
It was suggested [1] that the two-phase behaviour indicates a link between the dynamics of a financial market with many interacting participants and the phenomenon of phase transitions that occurs in physical systems with many interacting units.
This thesis reproduces the two-phase behaviour by means of experiments using sequences of random variables. We reproduce the two-phase behaviour based on correlated and uncorrelated data. We use a Markov-modulated Bernoulli process to model the transactions and investigate a simple interpretation of the two-phase behaviour. We sample data from heavy-tailed distributions and reproduce the two-phase behaviour.
Our experiments show that the results presented in [1] do not provide evidence for the presence of complex phenomena in a trading market; the results are a consequence of the sampling method employed. / AFRIKAANSE OPSOMMING (translated): Buying and selling in financial markets are driven by demand. Demand can be quantified in terms of the imbalance in the number of shares QB and QS traded respectively by buyers and sellers in a given time interval Δt. The demand in an interval Δt is given by Ω(Δt) = QB − QS. The local noise intensity is given by Σ = ⟨|a_i q_i − ⟨a_i q_i⟩|⟩, where i = 1, ..., N labels the transactions in Δt, q_i refers to the number of shares traded in transaction i, and ⟨···⟩ denotes the local expectation value computed from all the transactions during the interval Δt.
In a paper [1] based on data from the New York Stock Exchange's Trade and Quote database for the period 1995-1996, Plerou, Gopikrishnan and Stanley reported that an analysis of the probability distribution P(Ω | Σ) of demand conditioned on the local noise intensity Σ reveals the surprising existence of a critical threshold Σ_c. For Σ < Σ_c the most probable demand value is roughly zero; they interpreted this as an equilibrium phase during which neither buying nor selling dominates. For Σ > Σ_c the two most probable demand values that emerge are symmetric around zero demand, corresponding to excess demand and excess supply; they interpreted this as an out-of-equilibrium phase during which the market behaviour is buying for half of the time and selling for the other half.
It was suggested [1] that the two-phase behaviour points to a link between the dynamics of a financial market with many participating parties and the phenomenon of phase transitions occurring in physical systems with many interacting units.
This thesis reproduces the two-phase behaviour by means of experiments using sequences of random variables. We reproduce the two-phase behaviour based on correlated and uncorrelated data. We use a Markov-modulated Bernoulli process to model the transactions and investigate a simple interpretation of the two-phase behaviour. We sample data from heavy-tailed distributions and reproduce the two-phase behaviour.
Our experiments show that the results presented in [1] do not provide evidence for the presence of complex phenomena in a trading market; the results are a consequence of the method used to generate the sample data.
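The demand and local noise intensity defined above can be computed from a simulated trade stream in a few lines (a toy sketch with i.i.d. Bernoulli trade signs, deliberately simpler than the Markov-modulated model used in the thesis):

```python
import random

def interval_stats(trades):
    """Demand and local noise intensity for one interval.
    trades: list of (a, q) with a = +1 buy / -1 sell, q = shares."""
    vals = [a * q for a, q in trades]
    mean = sum(vals) / len(vals)
    omega = sum(vals)                                     # QB - QS in shares
    sigma = sum(abs(v - mean) for v in vals) / len(vals)  # noise intensity
    return omega, sigma

rng = random.Random(11)
intervals = [[(rng.choice((1, -1)), rng.randint(1, 100)) for _ in range(50)]
             for _ in range(200)]
stats = [interval_stats(tr) for tr in intervals]
```

Binning the intervals by the noise intensity and histogramming the demand within each bin reproduces the conditional distribution studied in the paper; the thesis's point is that the apparent two-phase structure can emerge from this sampling procedure alone.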
|