1

Some topics in dimension reduction and clustering

Zhao, Jianhua (赵建华), January 2009
Doctoral thesis (Doctor of Philosophy), Statistics and Actuarial Science; published/final version
2

On Some Problems In Transfer Learning

Galbraith, Nicholas R., January 2024
This thesis consists of studies of two important problems in transfer learning: binary classification under covariate-shift transfer, and off-policy evaluation (OPE) in reinforcement learning.

First, the problem of binary classification under covariate shift is considered, for which the first efficient procedure for optimal pruning of a dyadic classification tree is presented, where optimality is defined with respect to a notion of average discrepancy between the shifted marginal distributions of the source and target. It is further demonstrated that the procedure adapts to the discrepancy between the marginal distributions in a neighbourhood of the decision boundary. This notion of average discrepancy can be viewed as a measure of relative dimension between distributions, as it relates to existing notions of information such as the Minkowski and Rényi dimensions. Experiments on real data verify the efficacy of the pruning procedure compared to other baseline methods for pruning under transfer.

The problem of off-policy evaluation in reinforcement learning is then considered, and two minimax lower bounds on the mean-squared error of off-policy evaluation in Markov decision processes are derived. The first is a non-asymptotic lower bound for OPE in finite state and action spaces, over a model in which the mean reward is perturbed arbitrarily (up to a given magnitude); it depends on an average weighted chi-square divergence between the behaviour and target policies. The second is an asymptotic lower bound for OPE in continuous state space when the mean-reward and policy-ratio functions lie in a certain smoothness class. Finally, the results of a study that purported to have derived a policy for sepsis treatment in ICUs are replicated and shown to suffer from excessive variance, and therefore to be unreliable; the derived lower bound is computed and used as evidence that reliable off-policy estimation from this data would have required far more samples than were available.
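As a concrete illustration of how the behaviour/target policy ratio and the chi-square divergence mentioned in the abstract enter off-policy evaluation, here is a minimal sketch (not the thesis's procedure) of ordinary importance-sampling OPE in a one-step, bandit-style setting. The toy policies, reward model, and all names below are hypothetical; the chi-square formula used is the standard one, chi2(P || Q) = E_Q[(dP/dQ - 1)^2].

    # Minimal importance-sampling OPE sketch (hypothetical toy example,
    # not the procedure from the thesis above).
    import numpy as np

    rng = np.random.default_rng(0)

    n_actions = 4
    behaviour = np.array([0.4, 0.3, 0.2, 0.1])  # data-collecting policy
    target = np.array([0.1, 0.2, 0.3, 0.4])     # policy to be evaluated
    mean_reward = np.array([1.0, 0.5, 0.2, 0.8])  # assumed true mean rewards

    # Simulate n interactions under the behaviour policy.
    n = 10_000
    actions = rng.choice(n_actions, size=n, p=behaviour)
    rewards = mean_reward[actions] + rng.normal(0.0, 0.1, size=n)

    # Importance-sampling estimate of the target policy's value:
    #   V(target) = E_behaviour[(target(a) / behaviour(a)) * reward]
    ratios = target[actions] / behaviour[actions]
    v_hat = np.mean(ratios * rewards)
    v_true = np.dot(target, mean_reward)

    # Chi-square divergence chi2(target || behaviour) = E_b[(ratio - 1)^2],
    # the kind of quantity that controls the variance of such estimators.
    chi2 = np.dot(behaviour, (target / behaviour - 1.0) ** 2)

    print(f"IS estimate: {v_hat:.3f}  (true value: {v_true:.3f})")
    print(f"chi2(target || behaviour): {chi2:.3f}")

The larger the divergence between the two policies, the heavier-tailed the ratios and the more samples are needed for a reliable estimate, which is the intuition behind the sample-size argument in the sepsis replication described above.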
