61
Random parameters in learning: advantages and guarantees
Evzenie Coupkova (18396918), 22 April 2024
<p dir="ltr">The generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen. We study a family of low-complexity classifiers consisting of thresholding a random one-dimensional feature. The feature is obtained by projecting the data on a random line after embedding it into a higher-dimensional space parametrized by monomials of order up to k. More specifically, the extended data is projected n-times and the best classifier among those n, based on its performance on training data, is chosen. </p><p dir="ltr">We show that this type of classifier is extremely flexible, as it is likely to approximate, to an arbitrary precision, any continuous function on a compact set as well as any Boolean function on a compact set that splits the support into measurable subsets. In particular, given full knowledge of the class conditional densities, the error of these low-complexity classifiers would converge to the optimal (Bayes) error as k and n go to infinity. On the other hand, if only a training dataset is given, we show that the classifiers will perfectly classify all the training points as k and n go to infinity. </p><p dir="ltr">We also bound the generalization error of our random classifiers. In general, our bounds are better than those for any classifier with VC dimension greater than O(ln(n)). In particular, our bounds imply that, unless the number of projections n is extremely large, there is a significant advantageous gap between the generalization error of the random projection approach and that of a linear classifier in the extended space. Asymptotically, as the number of samples approaches infinity, the gap persists for any such n. Thus, there is a potentially large gain in generalization properties by selecting parameters at random, rather than optimization. </p><p dir="ltr">Given a classification problem and a family of classifiers, the Rashomon ratio measures the proportion of classifiers that yield less than a given loss. Previous work has explored the advantage of a large Rashomon ratio in the case of a finite family of classifiers. Here we consider the more general case of an infinite family. We show that a large Rashomon ratio guarantees that choosing the classifier with the best empirical accuracy among a random subset of the family, which is likely to improve generalizability, will not increase the empirical loss too much. </p><p dir="ltr">We quantify the Rashomon ratio in two examples involving infinite classifier families in order to illustrate situations in which it is large. In the first example, we estimate the Rashomon ratio of the classification of normally distributed classes using an affine classifier. In the second, we obtain a lower bound for the Rashomon ratio of a classification problem with a modified Gram matrix when the classifier family consists of two-layer ReLU neural networks. In general, we show that the Rashomon ratio can be estimated using a training dataset along with random samples from the classifier family and we provide guarantees that such an estimation is close to the true value of the Rashomon ratio.</p>
62
Návrh hardwarového šifrovacího modulu / Design of hardware cipher module
Bayer, Tomáš, January 2009
This diploma thesis deals with cryptographic systems and ciphers, analysing their function, usage and practical implementation. The first chapter introduces basic cryptographic terms and symmetric and asymmetric cryptographic algorithms, and discusses their usage and reliability. The following chapters cover substitution, transposition, block and stream ciphers, which form the building blocks of most cryptographic algorithms, as well as the modes of operation in which the ciphers work. The fourth chapter describes the principles of selected cryptographic algorithms, with the aim of clarifying the essence of each algorithm's behaviour; a block diagram accompanies the more complex algorithms, and each description ends with an example of practical usage. Chapter five discusses hardware implementation, comparing hardware and software implementations from a practical point of view, and surveys several design tools and hardware description languages together with their development, advantages and disadvantages. Chapter six presents the hardware designs of the chosen ciphers: a stream cipher with a pseudo-random sequence generator, implemented in VHDL and also modelled in Matlab, and, as the second design, the GOST block cipher, likewise implemented in VHDL. Both designs were tested and verified, and the results are summarised.
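To illustrate the principle behind the first design (a stream cipher driven by a pseudo-random sequence generator), here is a minimal software sketch in Python; the thesis itself targets VHDL and Matlab. The 16-bit Fibonacci LFSR with taps at 16, 14, 13, 11 and the byte-wise XOR are illustrative assumptions, not the parameters of the thesis design, and a bare LFSR keystream is not cryptographically secure on its own.

```python
def lfsr16_bits(seed: int):
    """16-bit Fibonacci LFSR (taps at 16, 14, 13, 11); yields one bit per step.
    The seed must be non-zero, otherwise the register stays stuck at zero."""
    state = seed & 0xFFFF
    assert state != 0, "LFSR seed must be non-zero"
    while True:
        fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1  # feedback bit
        yield state & 1                       # output the lowest bit
        state = (state >> 1) | (fb << 15)     # shift and insert feedback

def keystream_bytes(seed: int):
    """Pack the LFSR bit stream into bytes."""
    bits = lfsr16_bits(seed)
    while True:
        byte = 0
        for _ in range(8):
            byte = (byte << 1) | next(bits)
        yield byte

def stream_cipher(data: bytes, seed: int) -> bytes:
    """XOR the data with the keystream; the same call with the same seed decrypts."""
    ks = keystream_bytes(seed)
    return bytes(b ^ next(ks) for b in data)

# Example: encryption and decryption are the same operation.
ciphertext = stream_cipher(b"attack at dawn", 0xACE1)
assert stream_cipher(ciphertext, 0xACE1) == b"attack at dawn"
```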
63
Analýza a optimalizace datové komunikace pro telemetrické systémy v energetice / Analysis and Optimization of Data Communication for Telemetric Systems in Energy
Fujdiak, Radek, January 2017
Keywords: Telemetry system, Optimisation, Sensoric networks, Smart Grid, Internet of Things, Sensors, Information security, Cryptography, Cryptography algorithms, Cryptosystem, Confidentiality, Integrity, Authentication, Data freshness, Non-Repudiation.