  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Convolution-compacted vision transformers for prediction of local wall heat flux at multiple Prandtl numbers in turbulent channel flow

Wang, Yuning January 2023 (has links)
Predicting wall heat flux accurately in wall-bounded turbulent flows is critical for a variety of engineering applications, including thermal management systems and energy-efficient designs. Traditional methods, which rely on expensive numerical simulations, are hampered by increasing complexity and extremely high computational cost. Recent advances in deep neural networks (DNNs), however, offer an effective solution by predicting wall heat flux using non-intrusive measurements derived from off-wall quantities. This study introduces a novel approach, the convolution-compacted vision transformer (ViT), which integrates convolutional neural networks (CNNs) and ViT to predict instantaneous fields of wall heat flux accurately based on off-wall quantities, namely the three velocity components and the temperature. Our method is applied to an existing database of wall-bounded turbulent flows obtained from direct numerical simulations (DNS). We first conduct an ablation study to examine the effects of incorporating convolution-based modules into ViT architectures and report on the impact of different modules. Subsequently, we utilize fully-convolutional neural networks (FCNs) with various architectures to identify the distinctions between FCN models and the convolution-compacted ViT. Our optimized ViT model surpasses the FCN models in terms of instantaneous field predictions, learning turbulence statistics, and accurately capturing energy spectra. Finally, we undertake a sensitivity analysis using a gradient map to enhance the understanding of the nonlinear relationship established by DNN models, thus augmenting the interpretability of these models.
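
The abstract describes a convolutional tokenizer feeding a transformer encoder. As a rough illustration only (not the authors' code), the PyTorch sketch below shows one way such a convolution-compacted ViT could map off-wall planes of the three velocity components and temperature to a wall heat-flux field; all layer sizes, the patch size, and the omission of positional embeddings are assumptions.

```python
# Minimal sketch of a convolution-compacted ViT: a convolutional stem tokenizes
# off-wall planes (u, v, w, T); a transformer encoder mixes the tokens; a
# transposed convolution maps them back to a wall heat-flux field.
import torch
import torch.nn as nn

class ConvCompactedViT(nn.Module):
    def __init__(self, in_ch=4, embed_dim=128, depth=4, heads=4, patch=8):
        super().__init__()
        # Convolutional tokenizer: one overlapping conv, then patch-wise downsampling.
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(64, embed_dim, kernel_size=patch, stride=patch),
        )
        layer = nn.TransformerEncoderLayer(embed_dim, heads, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Head upsamples tokens back to the wall-plane resolution (1 channel: q_w).
        self.head = nn.ConvTranspose2d(embed_dim, 1, kernel_size=patch, stride=patch)

    def forward(self, x):                      # x: (B, 4, H, W) off-wall planes
        tok = self.stem(x)                     # (B, C, H/p, W/p)
        b, c, h, w = tok.shape
        seq = tok.flatten(2).transpose(1, 2)   # (B, N, C) token sequence
        seq = self.encoder(seq)                # positional embeddings omitted here
        tok = seq.transpose(1, 2).reshape(b, c, h, w)
        return self.head(tok)                  # (B, 1, H, W) wall heat flux

model = ConvCompactedViT()
q_wall = model(torch.randn(2, 4, 64, 64))      # sanity check on random data
```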
92

Pneumonia Detection using Convolutional Neural Network

Pillutla Venkata Sathya, Rohit 02 June 2023 (has links)
No description available.
93

Multispectral Processing of Side Looking Synthetic Aperture Acoustic Data for Explosive Hazard Detection

Murray, Bryce J 04 May 2018 (has links)
Substantial interest resides in identifying sensors, algorithms and fusion theories to detect explosive hazards. This is a significant research effort because it impacts the safety and lives of civilians and soldiers alike. However, a challenging aspect of this field is that we are not in conflict with the threats (objects) per se. Instead, we are dealing with people and their changing strategies and preferred methods of delivery. Herein, I investigate one method of threat delivery, side attack explosive ballistics (SAEB). In particular, I explore a vehicle-mounted synthetic aperture acoustic (SAA) platform. First, a wide band SAA signal is decomposed into a higher spectral resolution signal. Next, different multi/hyperspectral signal processing techniques are explored for manual band analysis and selection. Last, a convolutional neural network (CNN) is used for filter (e.g., enhancement and/or feature) learning and classification relative to the full signal versus different subbands. Performance is assessed in the context of receiver operating characteristic (ROC) curves on data from a U.S. Army test site that contains multiple target and clutter types, levels of concealment and times of day. Preliminary results indicate that a machine learned CNN solution can achieve better performance than our previously established human engineered Fraz feature with kernel support vector machine classification.
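
As a loose illustration of the processing chain summarized above (not the thesis implementation), the sketch below decomposes a wideband signal into spectral subbands with an STFT and scores a detector with an ROC curve; the sample rate, subband count, and random stand-in data are placeholders.

```python
# Sketch: subband decomposition of a wideband acoustic return plus ROC scoring.
import numpy as np
from scipy.signal import stft
from sklearn.metrics import roc_curve, auc

fs = 50_000                                        # assumed sample rate (Hz)
x = np.random.randn(fs)                            # stand-in for one SAA aperture
f, t, Z = stft(x, fs=fs, nperseg=512)              # time-frequency decomposition
subbands = np.array_split(np.abs(Z), 8, axis=0)    # 8 coarse spectral subbands

# Placeholder per-sample confidence scores and ground truth (target vs. clutter);
# in practice these would come from the CNN described in the abstract.
y_true = np.random.randint(0, 2, size=100)
scores = np.random.rand(100)
fpr, tpr, _ = roc_curve(y_true, scores)
print("subband shapes:", [b.shape for b in subbands], "AUC:", auc(fpr, tpr))
```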
94

DRIVING-SCENE IMAGE CLASSIFICATION USING DEEP LEARNING NETWORKS: YOLOV4 ALGORITHM

Rahman, Muhammad Tamjid January 2022 (has links)
The objective of the thesis is to explore an approach to classifying and localizing different objects in driving-scene images using the YOLOv4 algorithm trained on a custom dataset. YOLOv4, a one-stage object detection algorithm, aims to offer better accuracy and speed. The deep learning (convolutional) network-based classification model was trained and validated on a subset of the SODA10M dataset annotated with six different classes of objects (Car, Cyclist, Truck, Bus, Pedestrian, and Tricycle), which are the most commonly seen objects on the road. Another model based on YOLOv3 (the previous version of YOLOv4) was trained on the same dataset and its performance compared with the YOLOv4 model. Both algorithms are fast but have difficulty detecting some objects, especially small ones. Larger quantities of properly annotated training data can improve the algorithms' accuracy.
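
For context, YOLOv3/YOLOv4 training pipelines typically consume one annotation file per image containing normalized box coordinates. The sketch below, using the six classes named in the abstract, shows that label format; it is illustrative only and not taken from the thesis.

```python
# Convert a pixel-space bounding box to the normalized
# (class, x_center, y_center, width, height) line format used by YOLO training tools.
CLASSES = ["Car", "Cyclist", "Truck", "Bus", "Pedestrian", "Tricycle"]

def to_yolo_line(label, xmin, ymin, xmax, ymax, img_w, img_h):
    cls = CLASSES.index(label)
    xc = (xmin + xmax) / 2 / img_w            # normalized box center
    yc = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w                 # normalized box size
    h = (ymax - ymin) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# One annotation line per object, stored in a .txt file next to each image.
print(to_yolo_line("Pedestrian", 120, 80, 160, 220, img_w=1920, img_h=1080))
```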
95

Low-Resolution Infrared and High-Resolution Visible Image Fusion Based on U-NET

Lin, Hsuan 11 August 2022 (has links)
No description available.
96

Automatic Image Segmentation for Hair Masking: two Methods

Vestergren, Sara, Zandpour, Navid January 2019 (has links)
We propose two different methods for image segmentation with the objective of marking contaminated regions in images from biochemical tests. The contaminated regions consist of thin hairs or fibers, and the purpose of this thesis is to eliminate the tedious task of masking these regions by hand by implementing automatic hair masking. Initially, an algorithm based on morphological image processing is presented, followed by solving the problem of pixelwise classification with a convolutional neural network (CNN). Finally, the performance of each implementation is measured by comparing the segmented images with labelled images that are considered to be the ground truth. The results show that both implementations have strong potential for successfully performing semantic segmentation on the images from the biochemical tests.
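
The morphology-based branch is not specified in detail in the abstract; the OpenCV sketch below shows one common way such a hair mask could be produced, using a black-hat transform followed by thresholding. The kernel size, threshold, and synthetic fallback image are assumptions.

```python
# Sketch of a morphology-based hair mask: a black-hat transform highlights thin
# dark structures (hairs, fibers) and thresholding the response yields a binary mask.
import cv2
import numpy as np

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
if img is None:                                 # fall back to synthetic data
    img = np.full((256, 256), 200, np.uint8)
    cv2.line(img, (10, 10), (240, 230), 60, 2)  # draw a fake "hair"

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (17, 17))
blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)   # thin dark structures
_, mask = cv2.threshold(blackhat, 15, 255, cv2.THRESH_BINARY)  # contamination mask
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
cv2.imwrite("hair_mask.png", mask)
```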
97

Transfer Learning for Image Processing Applications

Jansson, Christoffer, Jansson, Johanna January 2023 (has links)
Training neural networks takes a lot of time and can require extreme amounts of data. Both the training time and the amount of data needed can be reduced with transfer learning. In this thesis, the effects of transfer learning are studied when training a neural network on a small dataset. VGG16, MobileNetV3 and SqueezeNet are used as pre-trained models. The models were modified to fit the new dataset, and further modifications were made to test whether they could improve generalization and reduce training time. The experiments showed that transfer learning can shorten training time and yields models with better generalization than randomly initialized models. The experiments also showed that a modified version of SqueezeNet is the most successful model.
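
As a hedged sketch of the setup described above (not the thesis code), the snippet below loads an ImageNet-pretrained SqueezeNet from torchvision, freezes its feature extractor, and replaces the classifier head for a new, small dataset; the class count and the choice to train only the head are assumptions, and the weights API shown follows recent torchvision releases.

```python
# Transfer-learning sketch: reuse pretrained features, retrain only a new head.
import torch.nn as nn
from torchvision import models

num_classes = 10                                   # placeholder for the new dataset

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
for p in model.parameters():                       # freeze the pretrained weights
    p.requires_grad = False

# SqueezeNet classifies via a final 1x1 convolution; swap it for the new classes.
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
model.num_classes = num_classes

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)                                   # only the new head is trained
```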
98

An Application of LatentCF++ on Providing Counterfactual Explanations for Fraud Detection

Giannopoulou, Maria-Sofia January 2023 (has links)
The aim of explainable machine learning is to help humans understand how complex machine learning models work. Machine learning models have achieved great performance in various areas; however, the mechanisms behind how a model works and how its decisions are made often remain unknown. This constraint increases users' hesitation to trust the results of such models and makes it harder to improve their performance further. Counterfactual explanation is one method of offering explainability in machine learning by indicating what would have happened if the input of a model had been modified in a specific way. Fraud is the act of acquiring something from someone else in a dishonest manner. Companies' and organizations' vulnerability to malicious actions has been increasing with the growth of digitalization. Machine learning applications have been successfully put in place to tackle fraudulent actions; however, the severity of the impact of fraud has highlighted the need for further scientific exploration of the topic. The current research attempts to do so by studying counterfactual explanations related to fraud detection. Latent-CF is a method for counterfactual generation that applies gradient descent in the latent space of an autoencoder. LatentCF++ is an extension of Latent-CF that combines a classifier and an autoencoder: the encoded latent representation is perturbed through gradient descent optimization so that an input initially assigned the undesired class is reclassified with the desired prediction. Compared to Latent-CF, LatentCF++ uses Adam optimization and adds further constraints to ensure that the generated counterfactual's class probability surpasses the set decision boundary. The research question this thesis addresses is: “To what extent can LatentCF++ provide reliable counterfactual explanations in fraud detection?”. To answer it, the study conducts an experiment implementing a new application of LatentCF++, using a one-dimensional convolutional neural network as the classifier and a deep autoencoder for counterfactual generation on fraud data. The study reports satisfactory results regarding the counterfactual explanations produced by LatentCF++ for fraud detection: the classification is quite accurate, and the reconstruction loss of the deep autoencoder employed is very low. The validity of the counterfactual examples produced is lower than in the original study, as is the proximity. Compared to baseline models, k-nearest neighbors outperforms LatentCF++ in terms of validity, and Feature Gradient Descent in terms of proximity.
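
The core LatentCF++ loop, as summarized above, perturbs the latent code with Adam until the classifier's probability for the desired class crosses the decision boundary. The PyTorch sketch below illustrates that idea only; the loss weights, step count, and the encoder/decoder/classifier interfaces are assumptions, not the reference implementation.

```python
# Sketch of latent-space counterfactual generation in the spirit of LatentCF++.
import torch

def counterfactual(x, encoder, decoder, clf, target_prob=0.5,
                   steps=200, lr=1e-2):
    # Assumes a single example and a classifier returning P(desired class).
    z = encoder(x).detach().clone().requires_grad_(True)   # latent representation
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z)
        p = clf(x_cf)                                       # P(desired class)
        # Push the probability toward 1 while keeping the counterfactual close to x.
        loss = (p - 1.0).pow(2).mean() + 0.1 * (x_cf - x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        if p.item() >= target_prob:                         # decision boundary crossed
            break
    return decoder(z).detach()
```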
99

A Deep Neural Network-Based Model for Named Entity Recognition for Hindi Language

Sharma, Richa, Morwal, Sudha, Agarwal, Basant, Chandra, Ramesh, Khan, Mohammad S. 01 October 2020 (has links)
The aim of this work is to develop efficient named entity recognition from given text, which in turn improves the performance of systems that use natural language processing (NLP). The performance of IoT-based devices such as Alexa and Cortana depends significantly on an efficient NLP model. To increase the capability of smart IoT devices in comprehending natural language, named entity recognition (NER) tools play an important role in these devices. In general, NER is a two-step process: proper nouns are first identified in the text and then classified into predefined categories of entities such as person, location, measure, organization and time. NER is often performed as a subtask while processing natural languages, which increases the accuracy of an NLP task. In this paper, we propose a deep neural network architecture for named entity recognition for the resource-scarce language Hindi, based on a convolutional neural network (CNN), a bidirectional long short-term memory (Bi-LSTM) network and a conditional random field (CRF). In the proposed approach, we initially use the skip-gram word2vec model and the GloVe model to represent words as semantic vectors, which are further used in different deep neural network-based architectures. We use character- and word-level embeddings to represent the text, which include information at a fine-grained level. Due to the use of character-level embeddings, the proposed model is robust to out-of-vocabulary words. Experimental results show that the combination of Bi-LSTM, CNN and CRF performs better than other baseline methods such as a recurrent neural network, long short-term memory and Bi-LSTM individually.
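
As an illustration of the architecture family described above (hyper-parameters assumed, CRF decoding omitted for brevity), the PyTorch sketch below combines a character-level CNN with word embeddings and a BiLSTM to produce per-token tag scores.

```python
# Sketch of a char-CNN + word-embedding + BiLSTM tagger; a CRF layer would
# normally decode the per-token scores produced at the end.
import torch
import torch.nn as nn

class CharWordBiLSTM(nn.Module):
    def __init__(self, n_words=10000, n_chars=100, n_tags=9,
                 word_dim=100, char_dim=30, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)    # a CRF layer would sit here

    def forward(self, words, chars):
        # words: (B, T) word ids; chars: (B, T, L) character ids per word
        b, t, l = chars.shape
        c = self.char_emb(chars).view(b * t, l, -1).transpose(1, 2)   # (B*T, C, L)
        c = torch.max(self.char_cnn(c), dim=2).values.view(b, t, -1)  # char features
        h, _ = self.lstm(torch.cat([self.word_emb(words), c], dim=-1))
        return self.out(h)                           # per-token tag scores

model = CharWordBiLSTM()
scores = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 100, (2, 12, 8)))
```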
100

Privacy Preserving Machine Learning as a Service

Hesamifard, Ehsan 05 1900 (has links)
Machine learning algorithms based on neural networks have achieved remarkable results and are being used extensively in different domains. However, these algorithms require access to raw data, which is often privacy-sensitive. To address this issue, we develop new techniques for running deep neural networks over encrypted data, adapting them to the practical limitations of current homomorphic encryption schemes. We focus on the training and classification of well-known neural networks and convolutional neural networks. First, we design methods for approximating the activation functions commonly used in CNNs (i.e., ReLU, Sigmoid, and Tanh) with low-degree polynomials, which is essential for efficient homomorphic encryption schemes. Then, we train neural networks with the approximation polynomials instead of the original activation functions and analyze the performance of the models. Finally, we implement neural networks and convolutional neural networks over encrypted data and measure the performance of the models.
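
The activation-approximation step can be illustrated with a short numpy sketch: fit a low-degree polynomial to a standard activation over a bounded interval, the kind of nonlinearity a leveled homomorphic encryption scheme can evaluate. The interval, degrees, and least-squares fitting method below are assumptions, not the thesis recipe.

```python
# Sketch: approximate ReLU/Sigmoid with low-degree polynomials for HE-friendly networks.
import numpy as np

def poly_approx(act, degree=3, lo=-4.0, hi=4.0, n=2001):
    x = np.linspace(lo, hi, n)
    coeffs = np.polyfit(x, act(x), degree)    # least-squares fit on the interval
    return np.poly1d(coeffs)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
relu = lambda x: np.maximum(x, 0.0)

p_sig = poly_approx(sigmoid, degree=3)
p_relu = poly_approx(relu, degree=2)
print("sigmoid(1) ~", p_sig(1.0), " relu(1) ~", p_relu(1.0))
# The network is then trained with p_sig / p_relu in place of the exact activations.
```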
