361

Evaluating and Improving the SEU Reliability of Artificial Neural Networks Implemented in SRAM-Based FPGAs with TMR

Wilson, Brittany Michelle 23 June 2020 (has links)
Artificial neural networks (ANNs) are used in many types of computing applications. Traditionally, ANNs have been implemented in software, executing on CPUs and even GPUs, which capitalize on the parallelizable nature of ANNs. More recently, FPGAs have become a target platform for ANN implementations due to their relatively low cost, low power, and flexibility. Some safety-critical applications could benefit from ANNs, but these applications require a certain level of reliability. SRAM-based FPGAs are sensitive to single-event upsets (SEUs), which can lead to faults and errors in execution. However, there are techniques that can mask such SEUs and thereby improve the overall design reliability. This thesis evaluates the SEU reliability of neural networks implemented in SRAM-based FPGAs and investigates mitigation techniques against upsets for two case studies. The first was based on the LeNet-5 convolutional neural network and was used to test an implementation with both fault injection and neutron radiation experiments, demonstrating that our fault injection experiments could accurately evaluate the SEU reliability of the networks. SEU reliability was improved by selectively applying TMR to the most critical layers of the design, achieving a 35% improvement in reliability at a 6.6% increase in resource utilization. The second case study used an existing neural network called BNN-PYNQ. While the base design was more sensitive to upsets than the CNN previously tested, the TMR technique improved its reliability by approximately 7× in fault injection experiments.
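The selective TMR described above triplicates only the most critical layers and majority-votes on their outputs so that a single upset cannot propagate. The following is a minimal software sketch of that voting idea, assuming a toy dense layer and a sign-flip fault model; it is an illustration of the principle, not the thesis's actual FPGA (HDL) implementation.

```python
import numpy as np

def faulty_layer(x, weights, upset_mask=None):
    """Toy dense layer; an optional upset_mask simulates SEU-corrupted weights by flipping their sign."""
    w = weights.copy()
    if upset_mask is not None:
        w[upset_mask] = -w[upset_mask]  # crude model of a bit upset corrupting stored weights
    return x @ w

def tmr_layer(x, weights, upset_masks):
    """Triplicate the layer and majority-vote element-wise over the three copies' outputs."""
    a, b, c = (faulty_layer(x, weights, m) for m in upset_masks)
    # If two copies agree, take their value; otherwise the third copy holds the clean result.
    return np.where(a == b, a, c)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
w = rng.normal(size=(8, 4))
mask = np.zeros_like(w, dtype=bool)
mask[2, 1] = True  # inject a single "upset" into one copy only

clean = faulty_layer(x, w)
upset = faulty_layer(x, w, mask)
masked = tmr_layer(x, w, [mask, None, None])
print(np.allclose(clean, upset))   # False: the upset corrupts the unprotected output
print(np.allclose(clean, masked))  # True: majority voting masks the single upset
```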
362

Analyzing Cell Painting images using different CNNs and Conformal Prediction variations : Optimization of a Deep Learning model to predict the MoA of different drugs

Hillver, Anna January 2022 (has links)
Microscopy imaging-based techniques, such as the Cell Painting assay, can be used to generate images that visualize the Mechanism of Action (MoA) of a drug, which could be of great use in drug development. In order to extract information and predict the MoA of a new compound from these images, we need powerful image analysis tools. The purpose of this project is to further develop a Deep Learning model that predicts the MoA of different drugs from Cell Painting images using Convolutional Neural Networks (CNNs) and Conformal Prediction. The specific tasks were to compare the accuracy of different CNN architectures and to compare the efficiency of different nonconformity functions. During the project the CNN architectures ResNet50, ResNet101 and DenseNet121 were compared, as were the nonconformity functions Inverse Probability, Margin and a combination of the two. No significant difference in accuracy between the CNNs and no difference in efficiency between the nonconformity functions was measured. The results showed that the model could predict the MoA of a compound with high accuracy when all compounds were used in training, validation and testing of the model, which validates the implementation. However, it is desirable for the model to be able to predict the MoA of a new compound when it has been trained only on other compounds with the same MoA. This could not be confirmed in this project, and the model needs to be further investigated and tested on another dataset before it can be used for that purpose.
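The two nonconformity functions compared above, inverse probability and margin, have simple closed forms: 1 minus the predicted probability of the true class, and the highest other-class probability minus the true-class probability. A minimal sketch of computing them from a classifier's softmax output is shown below; the array shapes, class counts and values are illustrative assumptions, not code or data from the thesis.

```python
import numpy as np

def inverse_probability(probs, labels):
    """Nonconformity = 1 - predicted probability of the true class."""
    return 1.0 - probs[np.arange(len(labels)), labels]

def margin(probs, labels):
    """Nonconformity = highest probability among the other classes minus p(true class)."""
    p_true = probs[np.arange(len(labels)), labels]
    masked = probs.copy()
    masked[np.arange(len(labels)), labels] = -np.inf
    return masked.max(axis=1) - p_true

# Toy softmax outputs for three samples over four hypothetical MoA classes.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.20, 0.50, 0.20, 0.10],
                  [0.25, 0.25, 0.25, 0.25]])
labels = np.array([0, 1, 3])
print(inverse_probability(probs, labels))  # [0.3, 0.5, 0.75]
print(margin(probs, labels))               # [-0.6, -0.3, 0.0]
```

A conformal predictor then compares such scores against scores from a calibration set to build prediction sets at a chosen significance level.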
363

Multimodal Model for Construction Site Aversion Classification

Appelstål, Michael January 2020 (has links)
Aversions on construction sites can be anything from missing material and fire hazards to insufficient cleaning. These aversions appear very often on construction sites, and the construction company needs to report and take care of them in order for the site to run correctly. The reports consist of an image of the aversion and a text describing it. Report categorization is currently done manually, which is both time- and cost-ineffective. The task for this thesis was to implement and evaluate an automatic multimodal machine learning classifier for the reported aversions that utilizes both the image and text data from the reports. The model presented is a late-fusion model consisting of a Swedish BERT text classifier and a VGG16 network for image classification. The results showed that an automated classifier is feasible for this task and could be used in practice to make the classification task more time- and cost-efficient. The model scored 66.2% accuracy and 89.7% top-5 accuracy on the task, and the experiments revealed some areas of improvement in the data and model that could be further explored to potentially improve performance.
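A late-fusion classifier of the kind described keeps the text and image branches separate and only joins their outputs near the end. The sketch below shows one way such a fusion head could look in PyTorch, operating on embeddings already produced by a BERT text encoder and a VGG16 image encoder; the embedding dimensions, hidden size and class count are illustrative assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate text and image embeddings and classify the joint representation."""
    def __init__(self, text_dim=768, image_dim=4096, num_classes=10):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(text_dim + image_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, text_emb, image_emb):
        # text_emb: e.g. a BERT [CLS] embedding; image_emb: e.g. VGG16 penultimate features
        return self.fusion(torch.cat([text_emb, image_emb], dim=1))

model = LateFusionClassifier()
text_emb = torch.randn(4, 768)    # batch of 4 report texts, already encoded
image_emb = torch.randn(4, 4096)  # the corresponding report images, already encoded
logits = model(text_emb, image_emb)
print(logits.shape)               # torch.Size([4, 10])
```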
364

Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment

Gaikwad, Akash S. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment is complicated by limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through various compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the promising techniques for solving these problems. This thesis proposes three methods to prune the convolution neural network (SqueezeNet) without introducing network sparsity in the pruned model, decreasing the model size without a significant drop in accuracy: 1) pruning based on a Taylor expansion of the change in the cost function, ΔC; 2) pruning based on L2 normalization of the activation maps; and 3) pruning based on a combination of methods 1 and 2. The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in accuracy (the optimal pruning efficiency result). Results also show that pruning based on the combination of the Taylor expansion of the cost function and L2 normalization of the activation maps achieves better pruning efficiency than either individual criterion, and that most of the pruned kernels come from mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software, and its performance was evaluated.
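The Taylor-expansion criterion mentioned above is commonly approximated per filter as the absolute value of the product of an activation map and its gradient, averaged over the data and spatial positions; the second criterion scores each filter by the magnitude of its activation maps. A rough sketch of computing both rankings for a single convolution layer is shown below; the layer sizes, batch and classifier head are assumed stand-ins, not the thesis's SqueezeNet code.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # stand-in for one convolution layer
head = nn.Linear(16, 10)

x = torch.randn(8, 3, 32, 32)                        # a small CIFAR-10-like batch
labels = torch.randint(0, 10, (8,))

act = conv(x)                                        # activation maps, shape (8, 16, 32, 32)
act.retain_grad()                                    # keep the gradient w.r.t. the maps
logits = head(act.mean(dim=(2, 3)))                  # global average pool, then classify
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()

# Taylor criterion: |activation * gradient| averaged over batch and spatial dims,
# one importance score per filter; L2 criterion: magnitude of the maps themselves.
taylor_rank = (act * act.grad).abs().mean(dim=(0, 2, 3))
l2_rank = act.detach().pow(2).sum(dim=(0, 2, 3)).sqrt()

k = 4                                                # prune the k lowest-ranked filters
prune_idx = torch.argsort(taylor_rank)[:k]
print("filters to prune:", prune_idx.tolist())
```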
365

Enhancing Hurricane Damage Assessment from Satellite Images Using Deep Learning

Berezina, Polina January 2020 (has links)
No description available.
366

Permanganate Reaction Kinetics and Mechanisms and Machine Learning Application in Oxidative Water Treatment

Zhong, Shifa 21 June 2021 (has links)
No description available.
367

A Deep Learning Approach To Vehicle Fault Detection Based On Vehicle Behavior

Khaliqi, Rafi, Iulian, Cozma January 2023 (has links)
Vehicles and machinery play a crucial role in our daily lives, contributing to our transportation needs and supporting various industries. As society strives for sustainability, the advancement of technology and efficient resource allocation become paramount. However, vehicle faults continue to pose a significant challenge, leading to accidents and unfortunate consequences. In this thesis, we aim to address this issue by exploring the effectiveness of an ensemble of deep learning models for supervised classification. Specifically, we propose to evaluate the performance of 1D-CNN-Bi-LSTM and 1D-CNN-Bi-GRU models. The Bi-LSTM and Bi-GRU models incorporate a multi-head attention mechanism to capture intricate patterns in the data. The methodology involves initial feature extraction using the 1D-CNN, followed by learning the temporal dependencies in the time series data using the Bi-LSTM and Bi-GRU. These models are trained and evaluated on a labeled dataset, yielding promising results. The successful completion of this thesis has met the objectives and scope of the research, and it also paves the way for future investigations and further research in this domain.
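The pipeline described, 1D convolutions for feature extraction followed by a bidirectional recurrent layer and multi-head attention, could be sketched in PyTorch roughly as below. The layer sizes, number of signal channels, sequence length and fault-class count are illustrative assumptions rather than the thesis's exact configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """1D-CNN feature extractor -> Bi-LSTM -> multi-head self-attention -> classifier."""
    def __init__(self, in_channels=8, num_classes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=4,
                                          batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        feats = self.cnn(x)                   # (batch, 32, time/2)
        feats = feats.transpose(1, 2)         # (batch, time/2, 32) for the LSTM
        seq, _ = self.bilstm(feats)           # (batch, time/2, 2*hidden)
        attended, _ = self.attn(seq, seq, seq)
        return self.fc(attended.mean(dim=1))  # pool over time, then classify

model = CNNBiLSTMAttention()
signals = torch.randn(4, 8, 200)              # 4 vehicle-signal windows, 8 channels, 200 steps
print(model(signals).shape)                   # torch.Size([4, 5])
```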
368

Multiclass Brain Tumour Tissue Classification on Histopathology Images Using Vision Transformers

Spyretos, Christoforos January 2023 (has links)
Histopathology refers to inspecting and analysing tissue samples under a microscope to identify and examine signs of disease. The manual investigation of histology slides by pathologists is time-consuming and susceptible to misinterpretation. Deep learning models have demonstrated outstanding performance in digital histopathology, providing doctors and clinicians with immediate and reliable decision-making assistance in their workflow. In this study, deep learning models, including vision transformers (ViT) and convolutional neural networks (CNN), were employed to compare their performance in a patch-level classification task on feature annotations of glioblastoma multiforme in H&E histology whole slide images (WSI). The dataset utilised in this study was obtained from the Ivy Glioblastoma Atlas Project (IvyGAP). The pre-processing steps included stain normalisation of the images, and patches of size 256x256 pixels were extracted from the WSIs. In addition, a per-subject split was implemented to prevent data leakage between the training, validation and test sets. Three models were employed to perform the classification task on the IvyGAP image data: two scratch-trained models, a ViT and a CNN (a variant of VGG16), and a pre-trained ViT. The models were assessed using various metrics such as accuracy, F1-score, confusion matrices, Matthews correlation coefficient (MCC), area under the curve (AUC) and receiver operating characteristic (ROC) curves. In addition, experiments were conducted to calibrate the models to reflect the ground truth of the task using the temperature scaling technique, and their uncertainty was estimated through the Monte Carlo dropout approach. Lastly, the models were statistically compared using the Wilcoxon signed-rank test. Among the evaluated models, the scratch-trained ViT exhibited the best test accuracy of 67%, with an MCC of 0.45. The scratch-trained CNN obtained a test accuracy of 49% and an MCC of 0.15, while the pre-trained ViT only achieved a test accuracy of 28% and an MCC of 0.034. The reliability diagrams and metrics indicated that the scratch-trained ViT was the better calibrated. After applying temperature scaling, only the scratch-trained CNN showed improved calibration; the calibrated CNN was therefore used for subsequent experiments. The scratch-trained ViT and the calibrated CNN exhibited different uncertainty levels: the scratch-trained ViT had moderate uncertainty, the calibrated CNN showed modest to high uncertainty across classes, and the pre-trained ViT had overall high uncertainty. Finally, the statistical tests reported that the scratch-trained ViT model performed best among the three models at a significance level of approximately 0.0167 after applying the Bonferroni correction. In conclusion, the scratch-trained ViT model achieved the highest test accuracy and better class discrimination, whereas the scratch-trained CNN and pre-trained ViT performed poorly, comparably to random classifiers. The scratch-trained ViT demonstrated better calibration, while the calibrated CNN showed varying levels of uncertainty. The statistical tests demonstrated no statistical difference among the models.
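Temperature scaling, used above to calibrate the classifiers, divides a model's logits by a single scalar T learned on a held-out validation set before the softmax, leaving predictions unchanged but softening or sharpening confidences. A minimal sketch of fitting T by minimising the validation negative log-likelihood is shown below; the logits and labels are random stand-ins, not the IvyGAP validation outputs.

```python
import torch
import torch.nn as nn

def fit_temperature(val_logits, val_labels, iters=200, lr=0.01):
    """Learn a single temperature T > 0 that minimises NLL on held-out validation logits."""
    log_t = torch.zeros(1, requires_grad=True)       # optimise log T so that T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(iters):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

# Stand-in validation outputs: 256 patches, 4 tissue classes, deliberately overconfident logits.
torch.manual_seed(0)
val_logits = torch.randn(256, 4) * 5.0
val_labels = torch.randint(0, 4, (256,))
T = fit_temperature(val_logits, val_labels)
calibrated_probs = torch.softmax(val_logits / T, dim=1)
print(f"learned temperature: {T:.2f}")
```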
369

A CNN-based Analysis of Radiological Parameters from CT images : Improving Surgical Outcomes in Soft Tissue Sarcoma Patients with Pulmonary Metastases

Solander, Klara January 2023 (has links)
Soft tissue sarcoma (STS) patients with pulmonary metastases (PM) experience a significant decrease in 5-year survival rates, ranging from 15% to 50% compared to 81% without metastases. Despite this clinical challenge, there is a lack of consensus regarding the optimal treatment approach for PM in STS. To address this, a convolutional neural network (CNN) was developed, utilising transfer learning from a MED3D base model with added custom layers. The CNN aimed to predict surgical treatment response and to extract relevant radiological parameters via attribution maps from the CT images of PMs. The CNN demonstrated promising performance with a balanced distribution of true positive and true negative predictions, giving precision, recall and F1-scores of 0.8. However, the limited size of the dataset calls for caution in interpreting the statistical validity of these results. The evaluation of the attribution maps revealed that the classifier assigned significance to regions lacking anatomical relevance, except for one region – the dorsal lobe near a metastasis – showing lower blood vessel density. Nonetheless, no definitive pathological conclusions can be drawn from this observation at present. In conclusion, this study presents a CNN-based approach for predicting surgical treatment response in STS patients with PMs. However, the small dataset warrants further validation and exploration of the clinical implications associated with the identified regions of significance.
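Attribution maps of the kind used above can be produced in several ways; one of the simplest is a vanilla gradient saliency map, where the gradient of the predicted score with respect to the input volume highlights the voxels the model relied on. The sketch below illustrates that idea with a toy 3D network on a random volume; it is a generic illustration under assumed shapes, not the MED3D-based model from the thesis.

```python
import torch
import torch.nn as nn

# Toy 3D CNN standing in for a transfer-learned classifier (2 classes: response / no response).
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

ct_volume = torch.randn(1, 1, 32, 64, 64, requires_grad=True)  # (batch, channel, depth, H, W)
logits = model(ct_volume)
predicted_class = logits.argmax(dim=1)

# Gradient of the predicted-class score w.r.t. the input gives a voxel-wise attribution map.
logits[0, predicted_class.item()].backward()
saliency = ct_volume.grad.abs().squeeze()   # shape (32, 64, 64)
print(saliency.shape, saliency.max().item())
```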
370

Narrow Pretraining of Deep Neural Networks : Exploring Autoencoder Pretraining for Anomaly Detection on Limited Datasets in Non-Natural Image Domains

Eriksson, Matilda, Johansson, Astrid January 2022 (has links)
Anomaly detection is the process of detecting samples in a dataset that are atypical or abnormal. Anomaly detection can, for example, be of great use in an industrial setting, where faults in the manufactured products need to be detected at an early stage. In this setting, the available image data might come from different non-natural domains, such as the depth domain, but the amount of data available in these domains is often limited. This thesis aims to investigate whether a convolutional neural network (CNN) can be trained to perform anomaly detection well on limited datasets in non-natural image domains. The attempted approach is to train the CNN as an autoencoder, in which the CNN is the encoder network. The encoder is then extracted and used as a feature extractor for the anomaly detection task, which is performed using Semantic Pyramid Anomaly Detection (SPADE). The results are then evaluated and analyzed. Two autoencoder models were used in this approach. As the encoder network, one of the models uses a MobileNetV3-Small network that had been pretrained on ImageNet, while the other uses a more basic network, which is a few layers deep and initialized with random weights. Both networks were trained as regular convolutional autoencoders as well as variational autoencoders. The results were compared to a MobileNetV3-Small network that had been pretrained on ImageNet but had not been trained as an autoencoder. The models were tested on six different datasets, all of which contained images from the depth and intensity domains. Three of these datasets additionally contained images from the scatter domain, and for these datasets the combination of all three domains was tested as well. The main focus was, however, on the performance in the depth domain. The results show that there is generally an improvement when training the more complex autoencoder on the depth domain. Furthermore, the basic network generally obtains results equivalent to the more complex network, suggesting that complexity is not necessarily an advantage for this approach. Looking at the different domains, there is no apparent pattern to which domain yields the best performance; this rather seems to depend on the dataset. Lastly, it was found that training the networks as variational autoencoders generally did not improve the performance in the depth domain compared to the regular autoencoders. In summary, an improved anomaly detection was obtained in the depth domain, but for optimal anomaly detection with regard to domain and network, one must look at the individual datasets. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
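The core of the approach described is to pretrain a convolutional autoencoder, keep only its encoder, and use the encoder's features to score anomalies by their distance to features of normal training images. The sketch below condenses that pipeline to a few lines: the tiny encoder, image size, training loop and the simple nearest-neighbour image-level score (a SPADE-like simplification without SPADE's pixel-level alignment) are all assumed stand-ins, not the thesis's networks.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # tiny stand-in for the encoder network
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),
)

# 1) Pretrain as an autoencoder on the (small) normal-only dataset.
images = torch.rand(64, 1, 32, 32)            # stand-in depth-domain images
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):                             # a few epochs, for illustration only
    recon = decoder(encoder(images))
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Use the trained encoder as a feature extractor and score test images by the
#    distance to the nearest normal training feature (higher = more anomalous).
with torch.no_grad():
    train_feats = encoder(images).flatten(1)
    test_feats = encoder(torch.rand(8, 1, 32, 32)).flatten(1)
    dists = torch.cdist(test_feats, train_feats)      # pairwise L2 distances
    anomaly_scores = dists.min(dim=1).values
print(anomaly_scores)
```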
