
Algorithms for large-scale personalization

Thesis: Ph.D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 191-205).

The term personalization typically refers to the activity of online recommender systems, and while product and content personalization is now ubiquitous in e-commerce, today's systems remain relatively primitive: they are built on a small fraction of the available data, run heuristic algorithms, and are restricted to e-commerce applications. This thesis addresses key challenges and new applications for modern, large-scale personalization. It is organized as follows.

First, we formulate a generic, flexible framework for learning from matrix-valued data, including the kinds of data commonly collected in e-commerce. Underlying this framework is a classic de-noising problem called tensor recovery, for which we provide an efficient algorithm, called Slice Learning, that is practical for massive datasets. Further, we establish near-optimal recovery guarantees that represent an order improvement over the best available results for this problem. Experimental results from a music recommendation platform are shown.

Second, we apply this de-noising framework to new applications in precision medicine, where data are routinely complex and high-dimensional. We describe a simple, accurate proteomic blood test (a 'liquid biopsy') for cancer detection that relies on de-noising via the Slice Learning algorithm. Experiments on plasma from healthy patients who were later diagnosed with cancer demonstrate that our test achieves diagnostically significant sensitivities and specificities for many types of cancer in their earliest stages.

Third, we present an efficient, principled approach to operationalizing recommendations, i.e., deciding exactly which items to recommend.
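To give a flavor of the de-noising primitive involved, here is a minimal illustrative sketch of low-rank matrix de-noising via truncated SVD. This is not the thesis's Slice Learning algorithm; the function name and the rank-3 synthetic example are assumptions chosen purely for illustration.

```python
import numpy as np

def truncated_svd_denoise(M, r):
    """Project a noisy matrix onto its best rank-r approximation (hypothetical helper)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 signal
noisy = A + 0.1 * rng.standard_normal(A.shape)                   # additive noise
denoised = truncated_svd_denoise(noisy, r=3)
# The rank-3 projection retains the signal while discarding most of the noise,
# so it should sit closer to A than the noisy observation does.
assert np.linalg.norm(denoised - A) < np.linalg.norm(noisy - A)
```

The same idea, applied slice-by-slice to a tensor of e-commerce interaction data, is the setting the framework above addresses.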
Motivated by settings such as online advertising, where the space of items is massive and recommendations must be made in milliseconds, we propose an algorithm that simultaneously achieves two important properties: (1) sublinear runtime and (2) a constant-factor guarantee under a wide class of choice models. Our algorithm relies on a new sublinear-time sampling scheme, which we develop to solve a class of problems that subsumes the classic nearest neighbor problem. Results from a massive online content recommendation firm are given.

Fourth, we address the problem of cost-effectively executing a broad class of computations on commercial cloud computing platforms, including the computations typically done in personalization. We formulate this as a resource allocation problem and introduce a new approach to modeling uncertainty, the Data-Driven Prophet Model, which treads the line between stochastic and adversarial modeling and is amenable to the common situation where stochastic modeling is challenging despite the availability of copious historical data. We propose a simple, scalable algorithm that is shown to be order-optimal in this setting. Results from experiments on a commercial cloud platform are shown.

by Andrew A. Li.
Ph.D.
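As a rough illustration of why sampling can cut the cost of scoring a massive item catalog, here is a hedged sketch of estimating all query-item inner products from a few sampled coordinates. This is generic importance sampling, not the thesis's sampling scheme; the function name, sampling distribution, and sizes are assumptions for illustration only.

```python
import numpy as np

def sampled_inner_products(X, q, n_samples, rng):
    """Estimate <x_i, q> for every row of X from n_samples sampled coordinates.

    Coordinates are drawn with probability proportional to |q|, and each
    sampled term is importance-weighted so the estimate is unbiased.
    (Hypothetical helper for illustration.)
    """
    p = np.abs(q) / np.abs(q).sum()
    idx = rng.choice(len(q), size=n_samples, p=p)
    return (X[:, idx] * (q[idx] / (n_samples * p[idx]))).sum(axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 200))  # item embeddings (one row per item)
q = rng.standard_normal(200)          # query/user vector
est = sampled_inner_products(X, q, n_samples=50, rng=rng)  # 50 of 200 coordinates
exact = X @ q
```

Each estimate here touches only 50 of the 200 coordinates; sublinear-time schemes of the kind described above push further, avoiding even a full pass over the items.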

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/119351
Date: January 2018
Creators: Li, Andrew A. (Andrew Andi)
Contributors: Vivek F. Farias., Massachusetts Institute of Technology. Operations Research Center.
Publisher: Massachusetts Institute of Technology
Source Sets: M.I.T. Theses and Dissertations
Language: English
Detected Language: English
Type: Thesis
Format: 205 pages, application/pdf
Rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
