The goal of learning transfer is to apply knowledge gained from one problem to a separate, related problem. Transformation learning is a proposed approach to computational learning transfer that focuses on modeling high-level transformations that are well suited for transfer. By using a high-level representation of transferable data, transformation learning facilitates both shallow transfer (intra-domain) and deep transfer (inter-domain) scenarios. Transformations can be discovered in data using manifold learning to order data instances according to the transformations they represent. For high-dimensional data representable with coordinate systems, such as images and sounds, data instances can be decomposed into small sub-instances based on their coordinates. Coordinate-based transformation models trained on these sub-instances can effectively approximate transformations from much smaller amounts of input data than the naive transformation modeling approach requires. In addition, these models are well suited for deep transfer scenarios.
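The coordinate-based decomposition described above can be illustrated with a minimal sketch. This is not the thesis's implementation; the function name `decompose`, the patch size, and the grid representation are illustrative assumptions about how an image might be split into coordinate-tagged sub-instances.

```python
# Hypothetical sketch: split a 2D "image" (a list of rows) into small
# sub-instances (patches), each tagged with its coordinate of origin.
# The name `decompose` and the patch size are assumptions for illustration.

def decompose(image, patch=2):
    """Split an H x W grid into non-overlapping patch x patch sub-instances,
    each paired with the (row, col) coordinate of its top-left cell."""
    h, w = len(image), len(image[0])
    subs = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            block = [row[c:c + patch] for row in image[r:r + patch]]
            subs.append(((r, c), block))
    return subs

# A 4x4 image yields four 2x2 coordinate-tagged sub-instances,
# each of which could serve as a training example for a
# coordinate-based transformation model.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
subs = decompose(img)
```

Because each sub-instance carries far fewer dimensions than the full instance, a model trained on such patches sees many training examples per image, which is consistent with the abstract's claim that coordinate-based models need very little input data.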
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-3333 |
Date | 25 May 2010 |
Creators | Wilson, Christopher R. |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | http://lib.byu.edu/about/copyright/ |