1

Variants and Generalization of Some Classical Problems in Combinatorial Geometry

Bharadwaj, Subramanya B V, January 2014
In this thesis we consider extensions and generalizations of some classical problems in combinatorial geometry. Our work is an offshoot of four classical problems, and a fundamental assumption in all of them is that the underlying point set is R^2. Two themes entwine the problems considered in this thesis: what happens if we assume that the underlying point set is finite, and what happens if we assume that it has a special structure?

Let P ⊂ R^2 be a finite set of points in general position. It is reasonable to expect that if |P| is large, then certain "patterns" in P always occur. One of the first such results was the Erdős–Szekeres theorem, which shows that there exists f(n) such that if |P| ≥ f(n), then there exists a subset S ⊆ P with |S| = n in convex position, i.e., a subset which forms a convex polygon of size n. A considerable number of such results have been found since. In 2001, Avis et al. posed the following question, which we call the n-interior point problem: is there a finite g(n) for every n such that every point set P with g(n) interior points has a convex subset S ⊆ P with n interior points, i.e., a subset forming a convex polygon that contains exactly n points of P in its interior? They showed that g(1) = 1 and g(2) = 4. While it is known that g(3) = 9, it is not known whether g(n) exists for n ≥ 4. In the first part of this thesis, we give a positive solution to the n-interior point problem for point sets with a bounded number of convex layers.
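The predicate at the heart of the n-interior point problem is easy to make concrete. The following brute-force sketch is an illustration only, not the thesis's method; it assumes a small point set in general position, and its exhaustive search over subsets is exponential, so it is usable only on tiny instances.

```python
from itertools import combinations

def cross(o, a, b):
    """2D cross product of vectors OA and OB (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def strictly_inside(p, hull):
    """True iff p lies strictly inside the CCW convex polygon `hull`."""
    m = len(hull)
    return all(cross(hull[i], hull[(i + 1) % m], p) > 0 for i in range(m))

def has_convex_subset_with_interior(P, n):
    """Does some subset of P in convex position enclose exactly n
    other points of P?  Brute force over all subsets of size >= 3."""
    for k in range(3, len(P) + 1):
        for S in combinations(P, k):
            hull = convex_hull(list(S))
            if len(hull) < k:      # S is not in convex position
                continue
            interior = sum(strictly_inside(q, hull) for q in P if q not in S)
            if interior == n:
                return True
    return False
```

For example, `has_convex_subset_with_interior([(0, 0), (4, 0), (2, 3), (2, 1)], 1)` returns True: the outer triangle encloses exactly one point of the set.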
We say a family of geometric objects C in R^d has the (l, k)-property if every subfamily C′ ⊆ C of cardinality at most l is k-piercable. Danzer and Grünbaum posed the following fundamental question, which can be considered a generalised version of Helly's theorem: for every positive integer k, does there exist a finite g(k, d) such that if any family of convex objects C in R^d has the (g(k, d), k)-property, then C is k-piercable? Very few results, mostly negative, are known. Inspired by the original question of Danzer and Grünbaum, we consider their question in an abstract set-theoretic setting. Let U be a (possibly infinite) set, and let C be a family of subsets of U with the property that if C_1, …, C_{p+1} ∈ C are p + 1 distinct subsets, then |C_1 ∩ … ∩ C_{p+1}| ≤ l. In the second part of this thesis we show, in this setting, the first general positive results for the Danzer–Grünbaum problem. As an extension, we show polynomial-sized kernels for hitting set and covering problems in our setting.

In the third part of this thesis, we look broadly at hitting and covering questions with respect to points and families of geometric objects in R^d. Let P be a (possibly infinite) set of points in R^d and C a collection of subsets of P induced by objects of a given family. For the system (P, C), let ν_h be the packing number and ν_c the dual packing number. We consider the problem of bounding the transversal number τ_h and the dual transversal number τ_c in terms of ν_h and ν_c respectively. These problems have been well studied in the case P = R^2. We systematically study the case when P is finite, showing bounds for intervals, halfspaces, orthants, unit squares, skylines, rectangles, halfspaces in R^3, and pseudodisks. We also show bounds for rectangles when P = R^2.

Given a point set P ⊆ R^d, a family of objects C, and a real number 0 < ε < 1, the strong ε-net problem is to find a minimum-sized subset Q ⊆ P such that every object C ∈ C with |P ∩ C| ≥ ε|P| is hit by Q. It is customary to express the bound on the size of Q in terms of ε. Let G be a uniform √n × √n grid. It is an intriguing question whether we get significantly better bounds for ε-nets if we restrict the underlying point set to be the grid G. In the last part of this thesis we consider the strong ε-net problem for families of geometric objects such as lines and generalized parallelograms when the underlying point set is the grid G. We also introduce the problem of finding ε-nets for arithmetic progressions, and we give some preliminary bounds.
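For reference, the strong ε-net condition just defined can be written in symbols; this is only a transcription of the abstract's own definition, with no new content:

```latex
Q \subseteq P \ \text{is a strong } \varepsilon\text{-net for } (P, \mathcal{C})
\quad\Longleftrightarrow\quad
\forall C \in \mathcal{C}:\; |P \cap C| \ge \varepsilon\,|P|
\;\Longrightarrow\; Q \cap C \neq \emptyset .
```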
2

Random parameters in learning: advantages and guarantees

Evzenie Coupkova, 22 April 2024
The generalization error of a classifier is related to the complexity of the set of functions from which the classifier is chosen. We study a family of low-complexity classifiers consisting of thresholding a random one-dimensional feature. The feature is obtained by projecting the data onto a random line after embedding it into a higher-dimensional space parametrized by monomials of order up to k. More specifically, the extended data is projected n times, and the best classifier among those n, based on its performance on the training data, is chosen.

We show that this type of classifier is extremely flexible: it is likely to approximate, to arbitrary precision, any continuous function on a compact set, as well as any Boolean function on a compact set that splits the support into measurable subsets. In particular, given full knowledge of the class-conditional densities, the error of these low-complexity classifiers converges to the optimal (Bayes) error as k and n go to infinity. On the other hand, if only a training dataset is given, we show that the classifiers will perfectly classify all the training points as k and n go to infinity.

We also bound the generalization error of our random classifiers. In general, our bounds are better than those for any classifier with VC dimension greater than O(ln n). In particular, our bounds imply that, unless the number of projections n is extremely large, there is a significant advantageous gap between the generalization error of the random projection approach and that of a linear classifier in the extended space. Asymptotically, as the number of samples approaches infinity, the gap persists for any such n. Thus, there is a potentially large gain in generalization properties from selecting parameters at random rather than by optimization.

Given a classification problem and a family of classifiers, the Rashomon ratio measures the proportion of classifiers in the family that yield a loss below a given threshold. Previous work has explored the advantage of a large Rashomon ratio in the case of a finite family of classifiers. Here we consider the more general case of an infinite family. We show that a large Rashomon ratio guarantees that choosing the classifier with the best empirical accuracy among a random subset of the family, which is likely to improve generalizability, will not increase the empirical loss too much.

We quantify the Rashomon ratio in two examples involving infinite classifier families in order to illustrate situations in which it is large. In the first example, we estimate the Rashomon ratio for the classification of normally distributed classes using an affine classifier. In the second, we obtain a lower bound for the Rashomon ratio of a classification problem with a modified Gram matrix when the classifier family consists of two-layer ReLU neural networks. In general, we show that the Rashomon ratio can be estimated using a training dataset together with random samples from the classifier family, and we provide guarantees that such an estimate is close to the true value of the Rashomon ratio.
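The construction in the first paragraph can be sketched in a few lines of NumPy. This is a schematic under assumptions the abstract does not pin down (Gaussian projection directions, thresholds taken at midpoints of the sorted projections, 0/1 labels); the function names are illustrative, not from the thesis.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_features(X, k):
    """All monomials of the coordinates of X up to degree k (incl. constant)."""
    n_samples, d = X.shape
    cols = [np.ones(n_samples)]
    for deg in range(1, k + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.stack(cols, axis=1)

def fit_random_projection_classifier(X, y, k=2, n_proj=100, seed=None):
    """Embed by monomials of degree <= k, project onto n_proj random lines,
    and keep the single thresholded projection with best training accuracy."""
    rng = np.random.default_rng(seed)
    Phi = monomial_features(X, k)
    best = (-1.0, None, None, None)            # (accuracy, w, t, sign)
    for _ in range(n_proj):
        w = rng.standard_normal(Phi.shape[1])  # one random direction
        z = Phi @ w
        zs = np.sort(z)
        for t in (zs[:-1] + zs[1:]) / 2:       # candidate thresholds
            for sign in (1, -1):
                acc = np.mean((sign * (z - t) > 0) == y)
                if acc > best[0]:
                    best = (acc, w, t, sign)
    acc, w, t, sign = best
    def predict(X_new):
        return sign * (monomial_features(X_new, k) @ w - t) > 0
    return predict, acc
```

Keeping only the best of n thresholded projections is what the bounds above quantify: the effective complexity of the chosen family grows roughly like ln n, rather than with the VC dimension of the full linear class in the extended space.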

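The final claim, that the Rashomon ratio can be estimated from a training set together with random draws from the classifier family, suggests a plain Monte Carlo scheme along the following lines; the `sample_classifier` interface is hypothetical, not from the thesis.

```python
import numpy as np

def estimate_rashomon_ratio(sample_classifier, X, y, loss_threshold,
                            n_draws=10_000, seed=None):
    """Fraction of randomly drawn classifiers whose empirical 0-1 loss
    on (X, y) is at most loss_threshold: a Monte Carlo estimate of
    the Rashomon ratio."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_draws):
        clf = sample_classifier(rng)       # one random member of the family
        hits += np.mean(clf(X) != y) <= loss_threshold
    return hits / n_draws
```

Since each draw contributes an independent Bernoulli indicator, a standard Hoeffding bound controls how far such an estimate can be from the true ratio, which is the kind of guarantee the abstract alludes to.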