11

Engineering-driven Machine Learning Methods for System Intelligence

Wang, Yinan 19 May 2022 (has links)
Smart manufacturing is a revolutionary domain integrating advanced sensing technology, machine learning methods, and the industrial internet of things (IIoT). The development of sensing technology provides large volumes and diverse types of data (e.g., profiles, images, point clouds) to describe each stage of a manufacturing process. Machine learning methods can efficiently and effectively process and fuse large-scale datasets and demonstrate outstanding performance in different tasks (e.g., diagnosis, monitoring). Despite the advantages of incorporating machine learning methods into smart manufacturing, several concerns arise in practice: (1) most edge devices in the manufacturing system have only limited memory space and computational capacity; (2) both high performance and interpretability are desired of the data analytics methods; (3) the connection to the internet exposes the manufacturing system to cyberattacks, which degrades the trustworthiness of data, models, and results. To address these limitations, this dissertation proposes systematic engineering-driven machine learning methods to improve system intelligence for smart manufacturing. The contributions of this dissertation can be summarized in three aspects. First, tensor decomposition is incorporated to approximately compress the convolutional (Conv) layer in a Deep Neural Network (DNN), and a novel layer is proposed accordingly. Compared with the Conv layer, the proposed layer significantly reduces the number of parameters and the computational cost without degrading performance. Second, a physics-informed stochastic surrogate model is proposed by incorporating the idea of building and solving differential equations into the design of the stochastic process. The proposed method outperforms purely data-driven stochastic surrogates in recovering system patterns from noisy data points and in exploiting limited training samples to make accurate predictions and conduct uncertainty quantification. Third, a Wasserstein-based out-of-distribution detection (WOOD) framework is proposed to strengthen DNN-based classifiers with the ability to detect adversarial samples. The properties of the proposed framework are thoroughly discussed, and the statistical learning bound of the proposed loss function is theoretically investigated. The proposed framework is generally applicable to DNN-based classifiers and outperforms state-of-the-art benchmarks in identifying out-of-distribution samples. / Doctor of Philosophy / Global industries are experiencing the fourth industrial revolution, which is characterized by the use of advanced sensing technology, big data analytics, and the industrial internet of things (IIoT) to build smart manufacturing systems. The massive amount of data collected in the engineering process provides rich information to describe the complex physical phenomena in the manufacturing system. Big data analytics methods (e.g., machine learning, deep learning) are developed to exploit the collected data for specific tasks, such as checking the quality of a product or diagnosing the root cause of defects. Despite the outstanding performance of these methods on such tasks, several concerns arise from engineering practice, such as limited available computational resources, the models' lack of interpretability, and the threat of hacking attacks.
In this dissertation, we propose systematic engineering-driven machine learning methods to address or mitigate these widely existing concerns. First, a model compression technique is developed to reduce the number of parameters and the computational complexity of the deep learning model so that it fits within the limited available computational resources. Second, physics principles are incorporated into the design of the regression method to improve its interpretability and to enable it to better explore the properties of the data collected from the manufacturing system. Third, a cyberattack detection method is developed to strengthen the smart manufacturing system with the ability to detect potential hacking threats.
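To illustrate the first contribution, the sketch below factorizes a convolutional kernel's channel modes with truncated SVDs (a Tucker-2-style decomposition) and compares parameter counts before and after. The layer sizes, target ranks, and numpy implementation are illustrative assumptions, not the dissertation's proposed layer.

```python
import numpy as np

# Hypothetical conv kernel: 128 output channels, 64 input channels, 3x3 spatial.
c_out, c_in, k = 128, 64, 3
kernel = np.random.randn(c_out, c_in, k, k)

# Assumed target ranks for the output- and input-channel modes.
r_out, r_in = 32, 16

# Factor the output-channel mode with a truncated SVD.
U, s, Vt = np.linalg.svd(kernel.reshape(c_out, -1), full_matrices=False)
out_factor = U[:, :r_out]                                   # (c_out, r_out)
core = (np.diag(s[:r_out]) @ Vt[:r_out]).reshape(r_out, c_in, k, k)

# Factor the input-channel mode of the reduced core.
core_mat = np.moveaxis(core, 1, 0).reshape(c_in, -1)        # (c_in, r_out*k*k)
U2, s2, Vt2 = np.linalg.svd(core_mat, full_matrices=False)
in_factor = U2[:, :r_in]                                    # (c_in, r_in)
small_core = (np.diag(s2[:r_in]) @ Vt2[:r_in]).reshape(r_in, r_out, k, k)

# The original layer is approximated by three small tensors: a 1x1 conv
# (in_factor), a small r_in -> r_out kxk conv (small_core), and a 1x1 conv
# (out_factor), which together need far fewer parameters and multiplications.
full_params = kernel.size
compressed_params = out_factor.size + in_factor.size + small_core.size
print(f"parameters: {full_params} -> {compressed_params} "
      f"({compressed_params / full_params:.1%} of the original)")
```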
12

Constitutive modelling of municipal solid waste

Zhang, Bo January 2007 (has links)
The design of landfills must consider both the stability and the integrity of the lining system. Therefore, stresses and strains in both mineral and geosynthetic lining materials must be controlled. The interaction between the waste body and the barrier system is of particular importance for assessing the stability and structural integrity of steep, non-self-supporting barrier systems. The most appropriate approach to assessing this interaction is the use of numerical modelling techniques, and therefore an appropriate constitutive model is required to represent the mechanical behaviour of the waste material. In a literature review, the key aspects of the mechanical behaviour of municipal solid waste (MSW) were investigated, and the influence of compressible and reinforcing particles on the compression and shear behaviour of MSW was identified. Constitutive modelling of both MSW and soil materials was reviewed, on the basis of which the methodology for this study was developed. In addition, requirements for an appropriate constitutive model for MSW were suggested from numerical modelling experience, and a framework to develop such a model was produced. A one-dimensional compression model was developed by including the influence of compressible particles on MSW compression behaviour. One-dimensional compression tests on both real and synthetic waste samples were modelled, and the results show that the compression model can reproduce the measured behaviour. A fibre-reinforcing model was developed by including the influence of reinforcing particles on MSW shear behaviour. A triaxial compression test on fibre-reinforced sand was modelled, and the results show that the reinforcing model can predict its shear strength. A constitutive model for MSW was then developed by combining Modified Cam-Clay with the one-dimensional compression and fibre-reinforcing models. Typical MSW triaxial compression tests were modelled, and the results show that the MSW model can reproduce the stress-strain behaviour within specific strain ranges. The constitutive model for MSW has been coded into a non-linear elasto-plastic finite element program, and comparisons between the finite element analysis results and analytical solutions show good agreement.
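For reference, the Modified Cam-Clay model that the MSW constitutive model builds on is conventionally written with the yield surface below. This is the standard textbook form; the thesis's extended formulation accounting for compressible and reinforcing particles is not reproduced here.

```latex
% Standard Modified Cam-Clay yield surface (textbook form, not the extended MSW model):
%   p'   - mean effective stress
%   q    - deviatoric stress
%   p'_c - preconsolidation pressure (hardening parameter)
%   M    - slope of the critical state line
\[
  f(p', q, p'_c) = q^{2} + M^{2}\, p' \left( p' - p'_c \right) = 0
\]
```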
13

Enhancing Efficiency and Trustworthiness of Deep Learning Algorithms

Isha Garg (15341896) 24 April 2023 (has links)
This dissertation explores two major goals in deep learning algorithm design: efficiency and trustworthiness. We motivate these concerns in Chapter 1 and give relevant background in Chapter 2. We then discuss six works that target these two goals.

The first of these discusses how to make the model compression methodology more efficient, so that it can be done in a single shot. This allows us to create models with reduced size and fewer layers, giving faster and more efficient inference, and is covered in Chapter 3. We then extend this to target efficiency in continual learning in Chapter 4, while mitigating the problem of catastrophic forgetting. The method discussed also allows us to circumvent the potential for data leakage by avoiding the need to store any data from past tasks. Next, we consider brain-inspired computing as an alternative to traditional neural networks to improve the compute efficiency of networks. The spiking neural networks discussed, however, have large inference latency due to the need to accumulate spikes over many timesteps. We tackle this in Chapter 5 by introducing a new scheme that distributes an image over time by breaking it down into a sum of its ranked sinusoidal bases, which results in networks that are faster and more efficient to deploy. Chapter 6 targets both the communication expense and the potential for data leakage in federated learning, by distilling the gradients to be communicated into a small number of images that resemble noise; communicating these images is more efficient and circumvents data leakage. We then explore applications of studying the curvature of the loss with respect to input data points in the last two chapters. We first utilize curvature to create performant coresets that reduce the size of datasets and make training more efficient in Chapter 7. In Chapter 8, we use curvature as a metric for overfitting and use it to expose dataset integrity issues arising from memorization.
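As a rough illustration of the curvature-with-respect-to-input idea used in the last two chapters, the sketch below estimates an input-space curvature score with finite-difference gradients and random probe directions. The estimator, probe count, and toy quadratic loss are assumptions made for illustration, not the dissertation's exact method.

```python
import numpy as np

def input_curvature_score(loss_fn, x, eps=1e-3, n_probes=8, seed=0):
    """Hutchinson-style curvature proxy at input x: average squared norm of
    (grad(x + eps*v) - grad(x)) / eps over random Rademacher directions v,
    using central-difference gradients of the scalar loss."""
    rng = np.random.default_rng(seed)

    def grad(z):
        # Central-difference gradient of the scalar loss w.r.t. the input vector.
        g = np.zeros_like(z)
        for i in range(z.size):
            e = np.zeros_like(z)
            e[i] = eps
            g[i] = (loss_fn(z + e) - loss_fn(z - e)) / (2 * eps)
        return g

    g0 = grad(x)
    score = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=x.shape)
        score += np.sum(((grad(x + eps * v) - g0) / eps) ** 2)
    return score / n_probes

# Toy example: a fixed quadratic "loss" standing in for a trained network's loss.
A = np.diag([5.0, 1.0, 0.1])
loss = lambda z: 0.5 * z @ A @ z
x = np.array([1.0, -2.0, 0.5])
print(f"curvature score at x: {input_curvature_score(loss, x):.3f}")
```

Higher scores indicate a more sharply curved loss around that data point, which is the kind of signal a curvature-based coreset or memorization analysis could rank examples by.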
14

Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment

Akash Gaikwad (5931047) 17 January 2019 (has links)
In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment involves many complexities because of limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolutional Neural Network (CNN) through various compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the most promising techniques for addressing these problems.

This thesis proposes methods to prune a convolutional neural network (SqueezeNet) without introducing network sparsity in the pruned model. Three pruning methods are proposed to decrease the model size of the CNN without a significant drop in accuracy:

1. Pruning based on the Taylor expansion of the change in the cost function, Delta C.
2. Pruning based on the L2 normalization of activation maps.
3. Pruning based on a combination of methods 1 and 2.

The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. A transfer learning technique is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in accuracy (the optimal pruning efficiency result). Results also show that pruning based on the combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than either individual criterion, and that most of the pruned kernels are from mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software and its performance was evaluated.
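A minimal sketch of how such filter-ranking criteria are commonly computed is shown below, using synthetic activations and gradients for a single layer. The normalization, combination weights, and the `taylor_rank`/`l2_rank` helpers are illustrative assumptions rather than the thesis's exact procedure.

```python
import numpy as np

def taylor_rank(activations, grads):
    """First-order Taylor criterion per filter: |mean over batch and spatial
    positions of (activation * gradient)|, an estimate of |Delta C| if that
    filter's output were removed. Shapes: (batch, filters, H, W)."""
    return np.abs((activations * grads).mean(axis=(0, 2, 3)))

def l2_rank(activations):
    """L2 norm of each filter's activation map, averaged over the batch."""
    return np.sqrt((activations ** 2).sum(axis=(2, 3))).mean(axis=0)

# Toy batch standing in for one layer's activations and backpropagated gradients.
rng = np.random.default_rng(0)
acts = rng.standard_normal((16, 64, 14, 14))    # 64 filters
grads = rng.standard_normal((16, 64, 14, 14))   # dC/d(activation)

# Combined criterion (in the spirit of method 3): normalize each ranking, then sum.
t, l = taylor_rank(acts, grads), l2_rank(acts)
combined = t / t.max() + l / l.max()
prune_idx = np.argsort(combined)[: int(0.72 * 64)]   # drop the lowest-ranked 72%
print("filters to prune:", prune_idx[:10], "...")
```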
15

Representation and Efficient Computation of Sparse Matrix for Neural Networks in Customized Hardware

Yan, Lihao January 2022 (has links)
Deep neural networks are widely applied in many fields nowadays. However, the hundreds of thousands of neurons in each layer result in intensive memory storage requirements and a massive number of operations, making it difficult to employ deep neural networks on mobile devices where hardware resources are limited. One common technique for addressing the memory limitation is to prune and quantize the neural networks. Besides, due to the frequent use of the Rectified Linear Unit (ReLU) function or network pruning, the majority of the data in the weight matrices will be zeros, which not only takes up a large amount of memory space but also causes unnecessary computation. In this thesis, a new value-based compression method is put forward to represent sparse matrices more efficiently by eliminating these zero elements, and customized hardware is implemented to realize the decompression and computation operations. The value-based compression method replaces the nonzero data in each column of the weight matrix with a reference value (the arithmetic mean) and the relative differences between each nonzero element and the reference value. Intuitively, the data stored in each column is likely to contain similar values. Therefore, the differences will have a narrow range, and fewer bits rather than the full form will be sufficient to represent all the differences. In this way, the weight matrix can be further compressed to save memory space. The proposed value-based compression method reduces the memory storage requirement for the fully connected layers of AlexNet to 37%, 41%, 47% and 68% of the compressed model, e.g., the Compressed Sparse Column (CSC) format, when the data size is set to 8 bits and the sparsity is 20%, 40%, 60% and 80%, respectively. Meanwhile, compression rates of 41%, 53% and 63% for the fully connected layers of the compressed AlexNet model with respect to 8-bit, 16-bit and 32-bit data are achieved when the sparsity is 40%. Similar results are obtained for the VGG16 experiment.
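A small numpy sketch of the column-wise encode/decode idea described above follows: the nonzeros in a column are replaced by their arithmetic-mean reference value plus quantized differences. The 4-bit difference width, the quantization scheme, and the helper names are assumptions for illustration; the thesis's hardware bit layout differs in detail.

```python
import numpy as np

def value_based_compress(col, diff_bits=4):
    """Compress the nonzeros of one weight-matrix column: store their row
    indices, a reference value (arithmetic mean), and per-element differences
    quantized to diff_bits signed levels."""
    idx = np.flatnonzero(col)
    nz = col[idx]
    ref = nz.mean()
    diffs = nz - ref
    max_diff = np.abs(diffs).max()
    scale = max_diff / (2 ** (diff_bits - 1) - 1) if max_diff > 0 else 1.0
    q = np.round(diffs / scale).astype(np.int8)   # narrow-range differences
    return idx, ref, scale, q

def value_based_decompress(idx, ref, scale, q, length):
    """Rebuild the dense column from the compact representation."""
    col = np.zeros(length, dtype=np.float32)
    col[idx] = ref + q * scale
    return col

# One synthetic column with 20% density and similar nonzero values.
rng = np.random.default_rng(1)
col = np.where(rng.random(256) < 0.2, rng.normal(0.5, 0.05, 256), 0.0).astype(np.float32)
idx, ref, scale, q = value_based_compress(col)
rec = value_based_decompress(idx, ref, scale, q, col.size)
print(f"nonzeros: {idx.size}, max reconstruction error: {np.abs(rec - col).max():.4f}")
```

Because the differences span a narrow range, they fit in a few bits per nonzero, which is where the storage saving over a plain CSC-style format comes from.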
16

Efficient Edge Intelligence In the Era of Big Data

Jun Hua Wong (11013474) 05 August 2021 (has links)
Smart wearables, an emerging paradigm for capturing vital big data, have been attracting intensive attention. However, one crucial problem is their high power consumption: continuous data streaming consumes energy rapidly and requires the devices to be charged frequently. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and design a data-driven approach for intelligent data compression. We leverage deep learning (DL), more specifically a Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking into consideration the CAE-induced overhead, is tremendously lower than the original energy need. Further, compared with a state-of-the-art wavelet compression-based method, our method can compress the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables, and thus greatly advance ubiquitous big data applications in the era of smart health.

In recent years, there has also been a growing interest in edge intelligence for emerging instantaneous big data inference. However, the inference algorithms, especially deep learning, usually have heavy computation requirements, thereby greatly limiting their deployment on the edge. We take special interest in smart health wearable big data mining and inference.

Targeting deep learning's high computational complexity and large memory and energy requirements, new approaches are needed to make deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient, edge-deployable deep learning model. More specifically, by transferring knowledge from a teacher model to the on-edge student model, the soft target distribution of the teacher model can be effectively learned by the student model. Besides, we propose to further introduce adversarial robustness to the student model by stimulating the student model to correctly identify inputs that have adversarial perturbations. Experiments demonstrate that the knowledge distillation student model has comparable performance to the heavy teacher model but a substantially smaller model size. With adversarial learning, the student model effectively preserves its robustness. In this way, we have demonstrated that the framework with knowledge distillation and adversarial learning can not only advance ultra-efficient edge inference but also preserve robustness in the face of perturbed input.
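A brief sketch of the soft-target knowledge distillation objective mentioned above is given below, blending a temperature-softened teacher/student KL term with the ordinary cross-entropy on hard labels. The temperature, weighting, and toy logits are assumed values, not the thesis's settings.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation: KL between the teacher's and student's
    temperature-softened distributions, blended with cross-entropy on the
    hard labels. The T**2 factor keeps the soft-term gradients on a
    comparable scale, as in the standard distillation formulation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T ** 2) * kl + (1 - alpha) * ce

# Toy logits standing in for a heavy teacher and a lightweight on-edge student.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 5)) * 3.0
student = rng.standard_normal((8, 5))
labels = rng.integers(0, 5, size=8)
print(f"distillation loss: {distillation_loss(student, teacher, labels):.3f}")
```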
17

Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment

Gaikwad, Akash S. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment involves many complexities because of limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolutional Neural Network (CNN) through various compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the most promising techniques for addressing these problems. This thesis proposes methods to prune a convolutional neural network (SqueezeNet) without introducing network sparsity in the pruned model. Three pruning methods are proposed to decrease the model size of the CNN without a significant drop in accuracy: (1) pruning based on the Taylor expansion of the change in the cost function, Delta C; (2) pruning based on the L2 normalization of activation maps; and (3) pruning based on a combination of methods 1 and 2. The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. A transfer learning technique is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in accuracy (the optimal pruning efficiency result). Results also show that pruning based on the combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than either individual criterion, and that most of the pruned kernels are from mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software and its performance was evaluated.
18

Convolutional and recurrent neural networks for real-time speech separation in the complex domain

Tan, Ke 16 September 2021 (has links)
No description available.
19

Towards Green AI: Cost-Efficient Deep Learning using Domain Knowledge

Srivastava, Sangeeta 12 August 2022 (has links)
No description available.
20

Distributed Intelligence for Multi-Robot Environment : Model Compression for Mobile Devices with Constrained Computing Resources / Distribuerad intelligens för multirobotmiljö : Modellkomprimering för mobila enheter med begränsade datorresurser

Souroulla, Timotheos January 2021 (has links)
Human-Robot Collaboration (HRC), where both humans and robots work in the same environment simultaneously, is an emerging field that has grown massively during the past decade. For this collaboration to be feasible and safe, robots need to perform a proper safety analysis to avoid hazardous situations. This safety analysis involves complex computer vision tasks that require a lot of processing power. Therefore, robots with constrained computing resources cannot execute these tasks without delays; to execute them, they rely on edge infrastructures, such as remote computational resources accessible over wireless communication. In some cases, though, the edge may be unavailable, or connection to it may not be possible. In such cases, robots still have to navigate themselves around the environment while maintaining high levels of safety. This thesis project focuses on reducing the complexity and the total number of parameters of pre-trained computer vision models by using model compression techniques, such as pruning and knowledge distillation. These model compression techniques have strong theoretical and practical foundations, but work on their combination is limited, and it is therefore investigated in this work. The results of this thesis project show that, in the test cases, up to 90% of the total number of parameters of a computer vision model can be removed without any considerable reduction in the model's accuracy.
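As a rough sketch of the pruning side of such a pipeline, the snippet below performs global magnitude pruning across a few random weight matrices until 90% of the parameters are zeroed. The layer shapes and threshold rule are illustrative assumptions; the thesis combines pruning with knowledge distillation on pre-trained computer vision models rather than random arrays.

```python
import numpy as np

def global_magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights across all layers until the
    requested fraction of parameters is removed, returning the pruned layers
    and the fraction of parameters that survive."""
    flat = np.concatenate([w.ravel() for w in weights])
    threshold = np.quantile(np.abs(flat), sparsity)
    pruned = [np.where(np.abs(w) < threshold, 0.0, w) for w in weights]
    kept = sum(int(np.count_nonzero(w)) for w in pruned)
    return pruned, kept / flat.size

# Toy stand-ins for a model's weight matrices.
rng = np.random.default_rng(0)
layers = [rng.standard_normal(s) for s in [(512, 256), (256, 128), (128, 10)]]
pruned, kept_fraction = global_magnitude_prune(layers, sparsity=0.9)
print(f"parameters kept: {kept_fraction:.1%}")  # roughly 10% remain
```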
