  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Forecasting Parameter of Kailashtilla Gas Processing Plant Using Neural Network

Kundu, S., Hasan, A., Sowgath, Md Tanvir (22 December 2012)
Neural networks (NNs) are widely used across process engineering activities such as modeling, design, optimization and control. In this work, in the absence of real plant data, simulated data (such as sales gas flow rate, pressure, raw gas flow rates, and the heat input to a heater used after dehydration) from a detailed HYSYS model of the Kailashtilla gas processing plant (KGP) are used to develop an NN-based model. The NN model is then trained and validated on the HYSYS-generated data, and the resulting framework predicts the outputs (sales gas flow rate and pressure) very closely to the simulated HYSYS plant data. Preliminary results show that the NN-based correlation can adequately model the process and generate workable profiles.
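The surrogate-modeling idea in this abstract (fit a small feed-forward network to simulator-generated input/output data) can be sketched as follows. This is a minimal illustration, not the authors' actual HYSYS/NN setup: the plant relation in `simulate_plant`, the variable names, and the network size are all hypothetical stand-ins.

```python
import math
import random

random.seed(0)

def simulate_plant(n=200):
    """Stand-in for simulator-generated data (hypothetical relation, not
    the real KGP model): sales-gas flow as a nonlinear function of raw-gas
    flow and heater duty, both scaled to [0, 1]."""
    data = []
    for _ in range(n):
        raw_flow = random.random()
        heat = random.random()
        sales_flow = 0.8 * raw_flow + 0.2 * math.tanh(3.0 * heat)
        data.append(((raw_flow, heat), sales_flow))
    return data

class TinyMLP:
    """One-hidden-layer tanh network trained with plain SGD."""
    def __init__(self, n_in=2, n_hid=8, lr=0.1):
        self.lr = lr
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hid)]
        self.b1 = [0.0] * n_hid
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
        self.b2 = 0.0

    def forward(self, x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = sum(w * hi for w, hi in zip(self.w2, h)) + self.b2
        return h, y

    def train_step(self, x, target):
        h, y = self.forward(x)
        err = y - target                                 # dLoss/dy for 0.5*(y-t)^2
        for j, hj in enumerate(h):
            grad_h = err * self.w2[j] * (1 - hj * hj)    # backprop through tanh
            self.w2[j] -= self.lr * err * hj
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * grad_h * xi
            self.b1[j] -= self.lr * grad_h
        self.b2 -= self.lr * err
        return 0.5 * err * err

def train(epochs=200):
    """Fit the surrogate on simulated data; returns net and final mean loss."""
    data = simulate_plant()
    net = TinyMLP()
    loss = 0.0
    for _ in range(epochs):
        random.shuffle(data)
        loss = sum(net.train_step(x, t) for x, t in data) / len(data)
    return net, loss
```

Once trained on simulator output, such a network can serve as a fast correlation for predicting plant outputs without re-running the full simulation, which is the role the abstract describes for the HYSYS-trained model.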
2

Importance sampling in deep learning : A broad investigation on importance sampling performance

Johansson, Mathias, Lindberg, Emma (January 2022)
Available computing resources play a large part in enabling modern deep neural networks to be trained on complex computer vision tasks, and improving how efficiently that computational power is used matters for enterprises that want to iterate on their networks rapidly. The first few training iterations over a dataset typically produce substantial gradients and rapid improvement; at later stages, most of the training time is spent on samples that are already handled well and yield only tiny gradient updates. To make neural network training more efficient, researchers have used methods, collectively called "importance sampling", that concentrate training on the samples that still produce relatively large gradient updates, reducing the variance of the sampling and focusing on the more informative examples.

This thesis contributes to the study of importance sampling by investigating its effectiveness in different contexts. Compared to other studies, we examine image classification more extensively, exploring different network architectures over a wide range of parameter counts, and, as in earlier studies, we apply several ways of doing importance sampling across several datasets. While most previous research on importance sampling strategies applies it to image classification, our research aims to generalize the results by applying it to object detection problems on top of image classification.

On image classification tasks, our results conclusively suggest that importance sampling can speed up the training of deep neural networks. When performance at convergence is the vital metric, our importance sampling methods show mixed results. For the object detection tasks, preliminary experiments have been conducted, but the findings lack enough data to demonstrate the effectiveness of importance sampling in object detection conclusively.
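The core mechanism described here (draw training samples with probability proportional to how informative they still are, then reweight the gradients to stay unbiased) can be sketched on a toy problem. This is a minimal illustration under simplifying assumptions, not the thesis's actual method: per-sample loss is used as a cheap proxy for gradient norm, the model is a logistic regression rather than a deep network, and all names are hypothetical.

```python
import math
import random

random.seed(1)

def make_data(n=400):
    """Toy binary classification (stand-in for an image task):
    label is 1 when x0 + x1 > 1."""
    data = []
    for _ in range(n):
        x = (random.random(), random.random())
        data.append((x, 1.0 if x[0] + x[1] > 1.0 else 0.0))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample_loss(w, b, x, t):
    """Cross-entropy loss of one sample, used as its importance score."""
    p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    eps = 1e-9
    return -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))

def train_importance(data, steps=2000, batch=16, lr=0.5):
    w, b = [0.0, 0.0], 0.0
    n = len(data)
    scores = [1.0] * n          # stale per-sample losses; start uniform
    for _ in range(steps):
        total = sum(scores)
        probs = [s / total for s in scores]
        # Sample the minibatch proportionally to current scores.
        idxs = random.choices(range(n), weights=probs, k=batch)
        for i in idxs:
            x, t = data[i]
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # 1/(n * p_i) correction keeps the gradient estimate unbiased.
            corr = 1.0 / (n * probs[i])
            g = (p - t) * corr
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
            scores[i] = sample_loss(w, b, x, t) + 1e-3   # refresh score
    return w, b

def accuracy(w, b, data):
    correct = sum(1 for x, t in data
                  if (sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5) == (t == 1.0))
    return correct / len(data)
```

The small floor added to each refreshed score keeps every sample's probability nonzero, so the importance weights stay bounded; without it, well-learned samples would become unsamplable and the estimator degenerate.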
