1

Analysis of networks with dynamic topologies

Moose, Robert Lewis, January 1987
Dynamic hierarchical networks represent an architectural strategy for employing adaptive behavior in applications sensitive to highly variable external demands or uncertain internal conditions. The characteristics of such architectures are described, and the significance of adaptive capability is discussed. The need to assess cost/benefit tradeoffs leads to the use of queueing network models. The general model, a network of M/M/1 queues in a random environment, is introduced and then simplified so that the links may be treated as isolated M/M/1 queues in a random environment. This treatment yields a formula for approximate mean network delay by combining matrix-geometric results (mean queue length and mean delay) for the individual links. Conditions under which the analytic model is valid are identified through comparison with a discrete-event simulation model. Finally, the performance of the dynamic hierarchy is compared with that of the static hierarchy, establishing conditions under which the dynamic architecture achieves performance equal or nearly equal to that of the static architecture. / Ph. D.
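As a rough illustration of how per-link results combine into a network delay, here is a minimal sketch using the standard stationary M/M/1 mean-delay formula W = 1/(μ − λ). The link rates and visit ratios are hypothetical, and the sketch deliberately omits the random-environment and matrix-geometric machinery the thesis actually uses.

```python
def mm1_mean_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system, W = 1/(mu - lambda), for a stable M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)


def approximate_network_delay(links, visit_ratios):
    """Weight each isolated link's M/M/1 delay by how often a packet visits it."""
    per_link = [mm1_mean_delay(lam, mu) for lam, mu in links]
    return sum(v * w for v, w in zip(visit_ratios, per_link))


# Hypothetical three-link hierarchy: (arrival rate, service rate) per link,
# in packets per unit time, plus the mean visits a packet makes to each link.
links = [(4.0, 10.0), (6.0, 10.0), (3.0, 8.0)]
visit_ratios = [1.0, 0.6, 0.4]
print(f"approximate mean network delay: {approximate_network_delay(links, visit_ratios):.3f}")
```

Treating links as isolated queues, as in the abstract, is exactly what makes this kind of additive combination possible; the thesis's contribution is justifying that simplification when the rates themselves vary with a random environment.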
2

Processing in Memory Design and Optimizations for Machine Learning Inference

Mingxuan He (19759866), 22 October 2024
Advances in machine learning (ML) have ignited hardware innovations for efficient execution of ML models, many of which are memory-bound (e.g., long short-term memories, multi-layer perceptrons, and recurrent neural networks). Specifically, inference using these ML models with small batches, as would be the case at the Cloud edge, has little reuse of the large filters and is deeply memory-bound. Simultaneously, processing-in or -near memory (PIM or PNM) promises an unprecedentedly high-bandwidth connection between compute and memory. Fortunately, the memory-bound ML models are a good fit for PIM. We focus on digital PIM, which provides higher bandwidth than PNM and does not incur the reliability issues of analog PIM. Previous PIM and PNM approaches advocate full processor cores, which do not conform to PIM's severe area and power constraints. This thesis comprises three major projects: Newton, activation folding (AcF), and ESPIM. Newton is SK hynix's first accelerator-in-memory (AiMX) product for machine learning; AcF improves the performance of Newton by achieving more compute-row access overlap; and ESPIM incorporates sparse neural-network models into PIM.
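A back-of-the-envelope sketch (not from the thesis; the layer sizes and fp16 weights are illustrative assumptions) of why small-batch inference reuses the large filters so little:

```python
def arithmetic_intensity(n: int, m: int, batch: int, bytes_per_weight: int = 2) -> float:
    """FLOPs per byte of weight traffic for an n-by-m fully connected layer."""
    flops = 2 * n * m * batch                 # one multiply and one add per weight per input
    weight_bytes = n * m * bytes_per_weight   # each fp16 weight streamed from memory once
    return flops / weight_bytes


for batch in (1, 8, 64):
    ai = arithmetic_intensity(n=4096, m=4096, batch=batch)
    print(f"batch {batch:3d}: {ai:5.1f} FLOPs per byte of weights")
```

At batch 1 the layer performs only one multiply-accumulate per weight fetched, so on any machine whose compute-to-bandwidth balance exceeds a few FLOPs per byte, bandwidth rather than compute limits throughput; that is the gap PIM's in-memory bandwidth targets.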
