The ideal estimation method must fulfill three requirements: (i) efficient computation, (ii) statistical efficiency, and (iii) numerical stability. Classical stochastic approximation (Robbins and Monro, 1951) is an iterative estimation method in which the current iterate (parameter estimate) is updated according to a discrepancy between what is observed and what is expected if the current iterate held the true parameter value. Classical stochastic approximation undoubtedly meets the computation requirement, which explains its widespread popularity, for example, in modern machine-learning applications with large data sets, but it cannot effectively combine computational efficiency with statistical efficiency and stability. Surprisingly, stability can be improved substantially if the aforementioned discrepancy is computed not at the current iterate but at the conditional expectation of the next iterate given the current one. The computational overhead of the resulting implicit update is minimal for many statistical models, whereas statistical efficiency can be achieved through simple averaging of the iterates, as in classical stochastic approximation (Ruppert, 1988). Thus, implicit stochastic approximation is fast and principled, fulfills requirements (i)-(iii) for a number of popular statistical models, including generalized linear models, M-estimation, and proportional hazards models, and is poised to become the workhorse of estimation with large data sets in statistical practice. / Statistics
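For a concrete sense of the implicit update, here is a minimal sketch for least-squares regression, where the implicit equation has a closed form: solving theta_n = theta_{n-1} + a_n (y_n - x_n' theta_n) x_n for theta_n yields the explicit-looking update with shrinkage factor a_n / (1 + a_n ||x_n||^2). The function name, step-size schedule, and simulated data below are illustrative assumptions for this sketch, not the implementation from the thesis.

```python
import numpy as np

def implicit_sgd_linear(X, y, lr0=1.0):
    """Implicit stochastic approximation for least squares.

    The implicit update solves
        theta_n = theta_{n-1} + a_n * (y_n - x_n' theta_n) * x_n,
    whose closed form shrinks the step by 1 / (1 + a_n * ||x_n||^2),
    which is what makes it stable for any step size.
    """
    n, p = X.shape
    theta = np.zeros(p)   # current iterate
    avg = np.zeros(p)     # running Ruppert-Polyak average of iterates
    for i in range(n):
        a = lr0 / (1 + i)           # decaying step size (illustrative choice)
        x, yi = X[i], y[i]
        resid = yi - x @ theta      # residual at the current iterate
        # Implicit update: equivalent to evaluating the residual at the
        # next iterate, but computable in closed form for this model.
        theta = theta + (a / (1 + a * (x @ x))) * resid * x
        avg += (theta - avg) / (i + 1)   # averaging for statistical efficiency
    return theta, avg

# Simulated data (assumed for illustration).
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(5000, 3))
y = X @ theta_true + 0.1 * rng.normal(size=5000)
theta_hat, theta_avg = implicit_sgd_linear(X, y)
print(np.round(theta_avg, 2))
```

Note the design point the abstract makes: the only change from the classical (explicit) update is the denominator 1 + a_n ||x_n||^2, which keeps the iterates bounded even when lr0 is set too large, at essentially no extra cost for this model class.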
Identifier | oai:union.ndltd.org:harvard.edu/oai:dash.harvard.edu:1/33493434 |
Date | 25 July 2017 |
Creators | Toulis, Panagiotis |
Contributors | Airoldi, Edoardo M., Parkes, David C., Rubin, Donald B. |
Publisher | Harvard University |
Source Sets | Harvard University |
Language | English |
Detected Language | English |
Type | Thesis or Dissertation, text |
Format | application/pdf |
Rights | open |