1.
INVESTIGATION OF THE FEASIBILITY OF USING CHIRP-EVOKED ABR IN ESTIMATION OF LOUDNESS GROWTH
Hoseingholizade, Sima 11 1900 (has links)
Loudness growth evaluation is important for understanding the theoretical implications of loudness in both normal-hearing and hearing-impaired people, as well as for practical applications such as hearing-aid design. However, current psychoacoustic procedures are subjective, time-consuming, and require the constant attention of participants. The primary aim of the present study is to investigate the feasibility of objectively assessing the loudness growth function using the Auditory Brainstem Response (ABR). Previous studies applied either non-frequency-specific click stimuli or tone-burst stimuli to evoke auditory brainstem responses. Although the advantage of a chirp stimulus in producing a more reliable response has been well documented, no previous study has used this stimulus to evaluate loudness growth functions. Octave-band chirp stimuli with center frequencies of 1000 Hz and 4000 Hz were chosen to evoke ABRs at seven stimulus intensities, from 20 dB nHL to 80 dB nHL in 10 dB steps. In the psychoacoustic procedure, subjects were asked to rate the perceived loudness of each presented stimulus. The recorded ABR trials were averaged with a modified version of weighted averaging based on Bayesian inference; this method reduces the effect of non-stationary noise by estimating a number of locally stationary noise sources through a series of F-tests. The peak-to-trough amplitude of the most salient ABR peak at each intensity constituted the physiological loudness estimate. Linear and power functions relating the psychoacoustical results to the ABR measurements were compared. The results were in good agreement with equal-loudness contours and with loudness estimated by the model for time-varying sounds of Glasberg and Moore (2002). We concluded that loudness growth can be estimated with ABRs evoked by frequency-specific chirp stimuli. / Thesis / Master of Science (MSc)
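The comparison of linear and power fits described in this abstract can be sketched as follows. The amplitudes and loudness ratings below are invented for illustration only; they are not the thesis data, and the fitting procedure (least squares, with the power model fitted in log-log coordinates) is an assumption.

```python
import numpy as np

# Hypothetical data (illustrative only, not from the thesis):
# ABR peak-to-trough amplitudes (uV) and perceived-loudness ratings
# at seven intensities from 20 to 80 dB nHL in 10 dB steps.
intensity = np.arange(20, 81, 10)
abr_amp = np.array([0.12, 0.18, 0.27, 0.41, 0.58, 0.83, 1.10])
loudness = np.array([2, 5, 10, 18, 30, 55, 90])

# Linear model: loudness ~ a * amplitude + b
a_lin, b_lin = np.polyfit(abr_amp, loudness, 1)
pred_lin = a_lin * abr_amp + b_lin

# Power model: loudness ~ c * amplitude**p, fitted in log-log coordinates
p_pow, log_c = np.polyfit(np.log(abr_amp), np.log(loudness), 1)
pred_pow = np.exp(log_c) * abr_amp ** p_pow

# Compare goodness of fit via residual sum of squares
rss_lin = np.sum((loudness - pred_lin) ** 2)
rss_pow = np.sum((loudness - pred_pow) ** 2)
print(f"linear RSS: {rss_lin:.1f}, power RSS: {rss_pow:.1f}")
```

For this toy data the power function fits better, consistent with the compressive-to-expansive shape loudness growth functions typically take.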
2.
Aditivní kombinatorika a teorie čísel / Additive combinatorics and number theory
Hančl, Jaroslav January 2020 (has links)
We present several results on growth functions of ideals of different combinatorial structures. An ideal is a set downward closed under a containment relation, such as the subpartition relation for partitions or the induced-subgraph relation for graphs. Its growth function (GF) counts elements of a given size. For partition ideals we establish an asymptotics for the GF of ideals that do not use parts from a finite set S, and we use this to construct an ideal with highly oscillating GF. We then present an application characterising the GF of particular partition ideals. We generalize ideals of ordered graphs to ordered uniform hypergraphs and show two dichotomies for their GF. The first result is a constant-to-linear jump for k-uniform hypergraphs. The second establishes a polynomial-to-exponential jump for 3-uniform hypergraphs. That is, there are no ordered hypergraph ideals with GF strictly inside the constant-to-linear or polynomial-to-exponential range. In both dichotomies we obtain tight upper bounds. Finally, in a quite general setting, we present several methods for generating, for various combinatorial structures, pairs of sets that define two ideals with identical GF. We call such pairs Wilf equivalent pairs and use the automorphism method and the replacement method to obtain them.
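As a small concrete illustration of the growth function of a partition ideal avoiding parts from a finite set S, the standard generating-function recurrence can be computed directly. The choice S = {1, 2} below is arbitrary, picked only to show the effect of forbidding parts.

```python
def partition_gf(n_max, forbidden=frozenset()):
    """Growth function of the partition ideal avoiding parts in `forbidden`:
    coeffs[n] = number of partitions of n using no part from `forbidden`.
    Computed by multiplying out the product of 1/(1 - x^part) term by term."""
    coeffs = [0] * (n_max + 1)
    coeffs[0] = 1
    for part in range(1, n_max + 1):
        if part in forbidden:
            continue
        # multiply the truncated series by 1/(1 - x^part)
        for n in range(part, n_max + 1):
            coeffs[n] += coeffs[n - part]
    return coeffs

# Unrestricted partition numbers p(0..10): 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
print(partition_gf(10))
# Partitions avoiding parts in S = {1, 2}, i.e. using only parts >= 3
print(partition_gf(10, {1, 2}))
```

Forbidding a finite set of parts slows the growth but leaves it of the same sub-exponential type, which is the regime where the asymptotics mentioned in the abstract live.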
3.
Growth and Geodesics of Thompson's Group F
Schofield, Jennifer L. 19 November 2009 (has links) (PDF)
In this paper, our goal is to describe how to find the growth of Thompson's group F with generators a and b. We also describe, by studying elements through pipe systems, how adding a third generator c affects geodesic length. We model the growth of Thompson's group F by producing a grammar for reduced pairs of trees based on Blake Fordham's tree structure. We then convert this grammar into a system of equations that describes the growth of Thompson's group F and simplify it. For our second goal, we present and discuss a computer program that has led to some discoveries about how the generators affect the pipe systems. We were able to find the growth function as a system of 11 equations for generators a and b.
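The grammar-to-equations step can be illustrated on a toy one-production grammar rather than the actual 11-equation system from the thesis. The grammar of binary trees (a leaf, or a node with two subtrees) translates to the series equation T(x) = 1 + x T(x)^2; iterating this equation on truncated power series reads off the growth coefficients, here the Catalan numbers.

```python
def mul(a, b, n):
    # truncated power-series product up to degree n
    c = [0] * (n + 1)
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j in range(0, n + 1 - i):
            c[i + j] += ai * b[j]
    return c

def solve_grammar(n):
    """Toy stand-in for a growth grammar: T -> leaf | node(T, T) becomes
    T(x) = 1 + x*T(x)^2.  Each fixed-point iteration pins down at least one
    more coefficient, so n+1 iterations fix coefficients 0..n."""
    T = [0] * (n + 1)
    for _ in range(n + 1):
        sq = mul(T, T, n)
        T = [1] + [sq[i - 1] for i in range(1, n + 1)]
    return T

print(solve_grammar(8))  # Catalan numbers: [1, 1, 2, 5, 14, 42, 132, 429, 1430]
```

A grammar with several nonterminals, like the one for reduced pairs of trees, yields a system of such equations iterated jointly; the mechanics are the same.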
4.
Random parameters in learning: advantages and guarantees
Evzenie Coupkova (18396918) 22 April 2024 (has links)
<p dir="ltr">The generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen. We study a family of low-complexity classifiers consisting of thresholding a random one-dimensional feature. The feature is obtained by projecting the data on a random line after embedding it into a higher-dimensional space parametrized by monomials of order up to k. More specifically, the extended data is projected n times and the best classifier among those n, based on its performance on the training data, is chosen. </p><p dir="ltr">We show that this type of classifier is extremely flexible, as it is likely to approximate, to arbitrary precision, any continuous function on a compact set, as well as any Boolean function on a compact set that splits the support into measurable subsets. In particular, given full knowledge of the class-conditional densities, the error of these low-complexity classifiers converges to the optimal (Bayes) error as k and n go to infinity. On the other hand, if only a training dataset is given, we show that the classifiers will perfectly classify all the training points as k and n go to infinity. </p><p dir="ltr">We also bound the generalization error of our random classifiers. In general, our bounds are better than those for any classifier with VC dimension greater than O(ln(n)). In particular, our bounds imply that, unless the number of projections n is extremely large, there is a significant advantageous gap between the generalization error of the random-projection approach and that of a linear classifier in the extended space. Asymptotically, as the number of samples approaches infinity, the gap persists for any such n. Thus, there is a potentially large gain in generalization properties from selecting parameters at random rather than through optimization. 
</p><p dir="ltr">Given a classification problem and a family of classifiers, the Rashomon ratio measures the proportion of classifiers that yield less than a given loss. Previous work has explored the advantage of a large Rashomon ratio in the case of a finite family of classifiers. Here we consider the more general case of an infinite family. We show that a large Rashomon ratio guarantees that choosing the classifier with the best empirical accuracy among a random subset of the family, which is likely to improve generalizability, will not increase the empirical loss too much. </p><p dir="ltr">We quantify the Rashomon ratio in two examples involving infinite classifier families in order to illustrate situations in which it is large. In the first example, we estimate the Rashomon ratio for the classification of normally distributed classes with an affine classifier. In the second, we obtain a lower bound on the Rashomon ratio of a classification problem with a modified Gram matrix when the classifier family consists of two-layer ReLU neural networks. In general, we show that the Rashomon ratio can be estimated using a training dataset along with random samples from the classifier family, and we provide guarantees that such an estimate is close to the true value of the Rashomon ratio.</p>
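A minimal sketch of the random-projection classifier and a crude sample-based Rashomon-ratio estimate. The toy data, the degree k = 2, the number of projections n = 300, and the 0.9 accuracy cutoff below are all arbitrary choices made for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (invented): label is 1 exactly when x^2 > 1, so a thresholded
# monomial feature of degree 2 can represent the target perfectly.
X = rng.uniform(-2, 2, size=120)
y = (X ** 2 > 1).astype(int)

def monomials(x, k):
    # embed scalar data with monomial features x, x^2, ..., x^k
    return np.stack([x ** d for d in range(1, k + 1)], axis=1)

def best_threshold_accuracy(z, y):
    # best training accuracy over all thresholds (midpoints of sorted
    # projections, plus the two extremes) and both orientations
    zs = np.sort(z)
    cands = np.concatenate(([zs[0] - 1.0], (zs[:-1] + zs[1:]) / 2, [zs[-1] + 1.0]))
    best = 0.0
    for t in cands:
        for s in (1, -1):
            acc = np.mean(((s * (z - t)) > 0).astype(int) == y)
            best = max(best, acc)
    return best

def random_projection_accuracies(X, y, k=2, n=300):
    # project the monomial embedding onto n random directions and record
    # each direction's best thresholded training accuracy
    Phi = monomials(X, k)
    return np.array([best_threshold_accuracy(Phi @ rng.normal(size=k), y)
                     for _ in range(n)])

accs = random_projection_accuracies(X, y)
print("best of n:", accs.max())
# crude Rashomon-ratio estimate: fraction of sampled classifiers whose
# empirical accuracy clears an arbitrary 0.9 cutoff
print("Rashomon estimate:", np.mean(accs >= 0.9))
```

Picking the best of n random projections typically comes close to the ideal direction here, while the fraction of "good enough" projections gives a Monte-Carlo flavor of the Rashomon-ratio estimation the abstract describes.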