1

Effects of posture, stabilization and depth on the cardiopulmonary response to underwater arm exercise

Daskalovic, Ivan Yochanan. January 1977
Thesis (M.S.)--Wisconsin. / Includes bibliographical references (leaves 86-101).
2

A feasibility study of utilizing potassium superoxide in closed circuit underwater breathing

Carryer, J. Edward. January 1978
Thesis (M.S.)--Wisconsin. / Includes bibliographical references (leaves 103-104).
3

An investigation of the feasibility of artificial gill systems for divers

Buckley, Robert L. January 1975
Thesis (M.S.)--University of Wisconsin--Madison, 1975. / Typescript. Includes bibliographical references (leaves 114-116).
4

Design considerations for a supporting platform and an emergency escape capsule for working divers

Kassem, Essam Hussein. January 1977
Thesis--Wisconsin. / Vita. Includes bibliographical references (leaves 353-359).
5

The U.S. Atlantic commercial fishing industry and cold water coral conservation: history, current trends and next steps

Williams, Lindsey C. January 2009
Thesis (M.M.P.)--University of Delaware, 2009. / Principal faculty advisor: Jeremy M. Firestone, College of Marine & Earth Studies. Includes bibliographical references.
6

Hardware-Aware Efficient and Robust Deep Learning

Sarada Krithivasan. 20 December 2022
Deep Neural Networks (DNNs) have greatly advanced several domains of machine learning, including image, speech, and natural language processing, leading to their use in several real-world products and services. This success has been enabled by improvements in hardware platforms such as Graphics Processing Units (GPUs) and specialized accelerators. However, recent trends in state-of-the-art DNNs point to enormous increases in compute requirements during training and inference that far surpass the rate of advancement in deep learning hardware. For example, image-recognition DNNs require tens to hundreds of millions of parameters to reach competitive accuracies on complex datasets, resulting in billions of operations performed when processing a single input. Furthermore, this growth in model complexity is compounded by growth in the training dataset size needed to achieve improved classification performance, with complex datasets often containing millions of training samples or more. Another challenge hindering the adoption of DNNs is their susceptibility to adversarial attacks. Recent research has demonstrated that DNNs are vulnerable to imperceptible, carefully crafted input perturbations that can lead to severe consequences in safety-critical applications such as autonomous navigation and healthcare.

This thesis proposes techniques to improve the execution efficiency of DNNs during both inference and training. In the context of DNN training, we first consider the widely used stochastic gradient descent (SGD) algorithm. We propose a method that uses localized learning, which is computationally cheaper and incurs a lower memory footprint, to accelerate an SGD-based training framework with minimal impact on accuracy. This is achieved by employing localized learning in a spatio-temporally selective manner, i.e., in selected network layers and epochs. Next, we address training dataset complexity by leveraging input mixing operators that combine multiple training inputs into a single composite input. To ensure that training on the mixed inputs is effective, we propose techniques to reduce the interference between the constituent samples in a mixed input. Furthermore, we design metrics to identify training inputs that are amenable to mixing, and apply mixing only to those inputs. Moving on to inference, we explore DNN ensembles, in which the outputs of multiple DNN models are combined to form the prediction for a particular input. While ensembles achieve improved classification performance compared to single (i.e., non-ensemble) models, their compute and storage costs scale with the number of models in the ensemble. To that end, we propose a novel ensemble strategy wherein the ensemble members share the same weights for the convolutional and fully connected layers but differ in the additive biases applied after every layer. This allows ensemble inference to be treated like batch inference, with the associated computational efficiency benefits. We also propose techniques to train these ensembles with limited overhead. Finally, we consider spiking neural networks (SNNs), a class of biologically inspired neural networks that represent and process information as discrete spikes. Motivated by the observation that the dominant fraction of energy consumption in SNN hardware lies in the memory and interconnect network, we propose a novel spike-bundling strategy that reduces energy consumption by communicating temporally proximal spikes as a single event.

As a second direction, the thesis identifies a new challenge in the field of adversarial machine learning. In contrast to prior attacks, which degrade accuracy, we propose attacks that degrade the execution efficiency (energy and time) of a DNN on a given hardware platform. As one specific embodiment of such attacks, we propose sparsity attacks, which perturb the inputs to a DNN so as to reduce sparsity within the network, causing its latency and energy to increase on sparsity-optimized platforms. We also extend these attacks to SNNs, which are known to rely on spike sparsity for efficiency, and demonstrate that adversarial input perturbations can greatly degrade the latency and energy of these networks.

In summary, this dissertation demonstrates approaches for efficient deep learning inference and training, while also opening up new classes of attacks that must be addressed.
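The weight-shared ensemble idea in this abstract is concrete enough to sketch. The following is a minimal, hypothetical PyTorch illustration, not the dissertation's code: every ensemble member shares one set of convolutional weights and differs only in an additive per-member bias, so evaluating all members on an input reduces to a single batched forward pass. The class and parameter names (BiasEnsembleConv, num_members) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BiasEnsembleConv(nn.Module):
    """One conv layer of a bias-only ensemble: the weights are shared
    across members; each member contributes only its own additive bias."""

    def __init__(self, in_ch, out_ch, num_members, kernel_size=3, padding=1):
        super().__init__()
        self.num_members = num_members
        # Single set of convolutional weights, shared by every member.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        # One additive bias per member, broadcast over the spatial dims.
        self.bias = nn.Parameter(torch.zeros(num_members, out_ch, 1, 1))

    def forward(self, x):
        # x: (batch, in_ch, H, W). Replicate each sample once per member so
        # the whole ensemble runs as ordinary batch inference.
        batch = x.size(0)
        x = x.repeat_interleave(self.num_members, dim=0)  # (batch*M, in_ch, H, W)
        y = self.conv(x)                                  # shared weights
        return y + self.bias.repeat(batch, 1, 1, 1)       # member-specific biases

layer = BiasEnsembleConv(in_ch=3, out_ch=16, num_members=4)
out = layer(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([8, 16, 32, 32]); member m of sample i is row i*4 + m
```

Because the replicated samples form one large batch, M ensemble members cost a single forward pass over batch*M inputs with one weight set in memory, rather than M separate model evaluations with M weight sets.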
7

Measurement and QCD analysis of the proton structure function F₂ from the 1994 HERA data using the ZEUS detector

Quadt, Arnulf. January 1996
No description available.
8

FPGA acceleration of CNN training

Samal, Kruttidipta. 07 January 2016
This thesis presents the results of an architectural study on the design of FPGA-based architectures for convolutional neural networks (CNNs). We analyzed the memory access patterns of a convolutional neural network (among the largest models in the family of deep learning algorithms) by creating a trace of a well-known CNN architecture and by developing a trace-driven DRAM simulator. The simulator uses the traces to analyze the effect that different storage patterns, and the speed mismatch between memory and processing elements, can have on the CNN system. This insight is then used to create an initial layer architecture for the CNN on an FPGA platform. The FPGA is designed with multiple parallel execution units. We design a data layout for the on-chip memory of the FPGA that increases parallelism in the design, since the number of parallel units (and hence the achievable parallelism) depends on the memory layout of the inputs and outputs, in particular on whether parallel read and write accesses can be scheduled. The on-chip memory layout minimizes access contention during the operation of the parallel units. The result is an SoC (System on Chip) that acts as an accelerator and supports more parallel units than previous work. The improvement was also confirmed by comparing post-synthesis loop-latency tables between our design and a single-unit design. This initial design can inform FPGA designs targeted at deep learning algorithms that compete with GPUs in terms of performance.
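The trace-driven analysis and contention-minimizing layout described in this abstract lend themselves to a toy sketch. The Python below is an illustrative assumption, not the thesis's simulator: it replays a synthetic access trace against a banked on-chip memory under two address-to-bank mappings and counts the stall cycles caused by bank conflicts, showing how the data layout determines whether parallel units can all be fed each cycle.

```python
from collections import Counter

NUM_UNITS = 4   # parallel processing elements, one access each per cycle
NUM_BANKS = 4   # banks that can each serve one access per cycle

def bank_blocked(addr):
    # Naive layout: contiguous 1024-word blocks map to one bank, so
    # consecutive words fetched in the same cycle collide.
    return (addr // 1024) % NUM_BANKS

def bank_interleaved(addr):
    # Word-interleaved layout: consecutive addresses spread across banks.
    return addr % NUM_BANKS

def stall_cycles(trace, bank_fn):
    # trace: per cycle, the tuple of addresses the units access together.
    # Every extra access landing on an already-busy bank stalls one cycle.
    stalls = 0
    for cycle in trace:
        per_bank = Counter(bank_fn(a) for a in cycle)
        stalls += sum(n - 1 for n in per_bank.values())
    return stalls

# Synthetic trace: each cycle, the units fetch four consecutive words of a
# feature-map row (a common CNN access pattern).
trace = [tuple(c * NUM_UNITS + u for u in range(NUM_UNITS)) for c in range(256)]

print("blocked layout stalls:    ", stall_cycles(trace, bank_blocked))      # 768
print("interleaved layout stalls:", stall_cycles(trace, bank_interleaved))  # 0
```

Under this simplified model the interleaved layout lets all four units proceed every cycle, which is the kind of contention-free scheduling an on-chip data layout for parallel units aims for.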
9

Winter surface water mass modification in the Greenland Sea

Brandon, Mark Alan. January 1995
No description available.
10

On the relationship between deep circulation and a dynamical tracer over the global ocean

Day, Kate. January 2001
No description available.
