1

Learning with Staleness

Dai, Wei 01 March 2018 (has links)
A fundamental assumption behind most machine learning (ML) algorithms and analyses is sequential execution: any update to the ML model can be applied immediately, and the new model is always available for the next algorithmic step. This basic assumption, however, can be costly to realize when the computation is carried out across multiple machines linked by commodity networks that are usually 10^4 times slower than memory speed due to fundamental hardware limitations. As a result, concurrent ML computation in distributed settings often needs to handle delayed updates and perform learning in the presence of staleness. This thesis characterizes learning with staleness from three directions: (1) We extend the theoretical analyses of a number of classical ML algorithms, including stochastic gradient descent, proximal gradient descent on non-convex problems, and Frank-Wolfe algorithms, to explicitly incorporate staleness into their convergence characterizations. (2) We conduct simulation and large-scale distributed experiments to study the empirical effects of staleness on ML algorithms under non-deterministic executions. Our results reveal that staleness is a key parameter governing the convergence speed of all considered ML algorithms, with varied ramifications. (3) We design staleness-minimizing parameter server systems by optimizing synchronization methods to effectively reduce runtime staleness. The proposed optimization of a bounded consistency model utilizes additional network bandwidth to communicate updates eagerly, relieving users of the burden of tuning the staleness level. By minimizing staleness at the framework level, our system stabilizes diverging optimization paths and substantially accelerates convergence across ML algorithms without any modification to the ML programs.
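To make the bounded-staleness idea above concrete, here is a minimal single-process sketch, not the thesis's parameter server system: SGD on a toy least-squares problem in which each gradient is computed from a parameter snapshot up to `staleness_bound` updates old. The function name and all parameter values are illustrative assumptions.

```python
import numpy as np

def stale_sgd(X, y, staleness_bound=3, lr=0.01, steps=500, seed=0):
    """Toy SGD on least squares under bounded staleness.

    Each step computes its gradient from a parameter snapshot that may be
    up to `staleness_bound` updates old, mimicking the delayed updates a
    worker sees in a distributed parameter-server setting.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    history = [w.copy()]                     # past parameter versions
    for _ in range(steps):
        # Staleness is random but bounded, as in a bounded consistency model.
        delay = rng.integers(0, min(staleness_bound, len(history) - 1) + 1)
        w_stale = history[-1 - delay]        # snapshot `delay` updates behind
        i = rng.integers(0, X.shape[0])
        grad = (X[i] @ w_stale - y[i]) * X[i]  # gradient taken at the stale point
        w = w - lr * grad                    # but applied to the current model
        history.append(w.copy())
    return w

# Usage: a planted linear model is still recovered despite stale gradients.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=500)
print(np.linalg.norm(stale_sgd(X, y) - w_true))  # small residual error
```

Raising `staleness_bound` in this sketch slows convergence without breaking it, which is the qualitative behavior the abstract describes: staleness governs convergence speed, so minimizing it at the framework level pays off.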
2

U-Net ship detection in satellite optical imagery

Smith, Benjamin 05 October 2020 (has links)
Deep learning ship detection in satellite optical imagery suffers from false positives caused by clouds, landmasses, and man-made objects that interfere with correctly classifying ships. A custom U-Net is implemented to address this issue, aiming to capture more features and thereby provide higher class accuracy. This model is trained under two different system architectures: a single-node architecture and a parameter server variant whose workers act as a boosting mechanism. To extend this effort, a refining method of offline hard example mining aims to improve the accuracy of the trained models on both the validation and target datasets; however, it results in overcorrection and a decrease in accuracy. The single-node architecture achieves 92% class accuracy on the validation dataset and 68% on the target dataset, exceeding class accuracy scores in related works, which reached up to 88%. The parameter server variant achieves 86% class accuracy on the validation set and 73% on the target dataset. The custom U-Net is able to achieve acceptably high class accuracy on a subset of the training data, keeping training time and cost low in cloud-based solutions. / Graduate
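For readers unfamiliar with the architecture, below is a minimal U-Net-style segmentation network in PyTorch. The class name `TinyUNet`, the single encoder/decoder level, and the layer widths are illustrative assumptions; the thesis's custom U-Net is a deeper, differently trained model.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 conv + ReLU layers: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """A one-level U-Net for binary (ship vs. background) segmentation."""
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.enc1 = double_conv(in_ch, base)
        self.enc2 = double_conv(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = double_conv(base * 2, base)        # skip concat doubles channels
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # per-pixel ship logit

    def forward(self, x):
        e1 = self.enc1(x)                # full-resolution features
        e2 = self.enc2(self.pool(e1))    # downsampled, wider features
        d1 = self.up(e2)                 # upsample back to full resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
        return self.head(d1)

# Usage: one 256x256 RGB satellite tile -> per-pixel ship logits.
model = TinyUNet()
logits = model(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 1, 256, 256])
```

The skip connection is the key design choice for this task: it merges fine spatial detail from the encoder with coarse context from the bottleneck, which helps distinguish small ships from clouds and shoreline clutter.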
