21

FAULT DIAGNOSIS OF ENGINE KNOCKING USING DEEP LEARNING NEURAL NETWORKS WITH ACOUSTIC INPUT PROCESSING

Muzammil Ahmed Shaik (14241236) 12 December 2022 (has links)
The engine is the heart of the vehicle; any problem with this component can cause significant damage and may even result in the car being scrapped. Engine repair costs are enormous, and there is no guarantee that the existing engine can be repaired rather than replaced. Fault diagnosis in engines is therefore critical; many existing diagnostic techniques and tools add extra cost and still cannot detect faults such as knocking. An engine can develop several problems, but knocking is the major issue that destroys the engine and leads to the breakdown of the vehicle. Our research focuses on this key issue, which not only costs thousands of dollars but also results in waste. According to experts, knocking can be detected at a very early stage by human senses, either visually or audibly, and the most noticeable symptom of this fault is the knocking sound. Deep learning neural networks are well known for their ability to emulate human perception, so we can train such networks on sound to detect engine knocking. Among the many network types designed for classification, the most widely used and reliable is the convolutional neural network (CNN), which takes images as input and classifies them. Engine sounds were collected from Google's Machine Perception research. Our work shows that the decisive factor in building these networks is the data: a better model comes from meaningful data, not just from designing a more complex network. We use a new algorithmic method of extracting sound and feeding it into several CNN variants, which we call dependent vehicle sound extraction, in which the fast Fourier transform (FFT), short-time Fourier transform (STFT), and Mel-frequency cepstral coefficients (MFCCs) are used to process the input sound signals. We validated that deep learning networks combined with this dependent vehicle feature extraction technique can detect engine knocking with accurate classification.
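The acoustic preprocessing the abstract mentions (FFT, STFT, MFCCs) can be sketched in Python as follows, assuming the librosa library and a hypothetical engine_clip.wav recording; the thesis' own dependent vehicle sound extraction algorithm is not reproduced here, only the standard transforms it builds on.

```python
# Sketch of standard acoustic preprocessing (STFT + MFCC) of the kind the
# abstract describes as input to a CNN classifier. "engine_clip.wav" is a
# hypothetical file name; the thesis' specific extraction algorithm is not shown.
import numpy as np
import librosa

# Load a mono engine recording at a fixed sample rate.
signal, sr = librosa.load("engine_clip.wav", sr=16000, mono=True)

# Short-time Fourier transform: time-frequency representation of the clip.
stft = librosa.stft(signal, n_fft=1024, hop_length=256)
spectrogram_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

# Mel-frequency cepstral coefficients: compact spectral-envelope features.
mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)

# Normalize into a 2D "image" that a CNN-style classifier can consume.
features = (mfccs - mfccs.mean(axis=1, keepdims=True)) / (
    mfccs.std(axis=1, keepdims=True) + 1e-8
)
print(spectrogram_db.shape, features.shape)
```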
22

Pointwise and Instance Segmentation for 3D Point Cloud

Gujar, Sanket 11 April 2019 (has links)
The camera is the cheapest and most computationally real-time option for detecting or segmenting the environment for an autonomous vehicle, but it does not provide depth information and is unreliable at night, in bad weather, and during tunnel flash-outs. The risk of an accident is higher for an autonomous car driven by a camera alone in such situations. The industry has relied on LiDAR for the past decade to solve this problem and to capture depth information about the environment, but LiDAR has its own shortcomings. Common industry methods project the point cloud into an image and run detection and localization networks on that projection for inference, but LiDAR sees obscurants in bad weather and is sensitive enough to detect snow, which makes projection-based methods difficult to make robust. We propose a novel pointwise and instance segmentation deep learning architecture for point clouds, focused on self-driving applications. The model depends only on LiDAR data, making it light-invariant and overcoming the camera's shortcomings in the perception stack. The pipeline takes advantage of both global and local/edge features of points in the point cloud to generate high-level features. We also propose Pointer-Capsnet, an extension of CapsNet for small 3D point clouds.
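A minimal PyTorch sketch of the idea of fusing per-point (local) and global features for pointwise segmentation is shown below; it is not the thesis architecture or Pointer-Capsnet, and the layer sizes and class count are illustrative assumptions.

```python
# Sketch of pointwise segmentation that fuses per-point (local) and global
# features, in the spirit of the pipeline described above. Not the thesis
# architecture; layer sizes and the number of classes are illustrative.
import torch
import torch.nn as nn

class PointwiseSegNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared per-point MLP, applied to each point independently.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Classifier over the concatenated [local feature || global feature].
        self.head = nn.Sequential(
            nn.Linear(128 + 128, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) raw xyz coordinates.
        local_feat = self.point_mlp(points)              # (B, N, 128)
        global_feat = local_feat.max(dim=1).values       # (B, 128) symmetric pooling
        global_feat = global_feat.unsqueeze(1).expand_as(local_feat)
        fused = torch.cat([local_feat, global_feat], dim=-1)
        return self.head(fused)                          # per-point class logits

logits = PointwiseSegNet()(torch.randn(2, 1024, 3))
print(logits.shape)  # torch.Size([2, 1024, 4])
```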
23

The dynamics of learning, teaching and assessment : a study of innovative practice at undergraduate level

Youngman, Andrea January 1999 (has links)
No description available.
24

Automatic Tongue Contour Segmentation using Deep Learning

Wen, Shuangyue 30 October 2018 (has links)
Ultrasound is one of the primary imaging technologies used for clinical purposes. Ultrasound systems have favorable real-time capabilities and are fast, relatively inexpensive, portable, and non-invasive. Recent interest in using ultrasound imaging to study tongue motion has applications in linguistic study, speech therapy, and foreign language education, where visual feedback of tongue motion complements conventional audio feedback. Ultrasound images are known to be difficult to interpret. Their anatomical structure, the rapidity of tongue movements, missing contour segments in some frames, and the limited frame rate of ultrasound systems make automatic tongue contour extraction and tracking very challenging, especially for real-time applications. Traditional image processing-based approaches have many practical limitations in terms of automation, speed, and accuracy. Recent progress in deep convolutional neural networks has been successfully exploited in a variety of computer vision problems such as detection, classification, and segmentation. In the past few years, deep belief networks for tongue segmentation and convolutional neural networks for classifying tongue motion have been proposed; however, none of these claim fully automatic or real-time performance. U-Net is one of the most popular deep learning architectures for image segmentation and is composed of several convolution and deconvolution layers. In this thesis, we propose a fully automatic system to extract the tongue dorsum from ultrasound videos in real time using a simplified version of U-Net, which we call sU-Net. Two databases from different machines were collected, and different training schemes were applied to test the learning capability of the model. Our experiments on ultrasound video data demonstrate that the proposed method is very competitive with other methods in terms of performance and accuracy.
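A reduced U-Net style encoder-decoder of the kind the abstract describes can be sketched in PyTorch as follows; the actual sU-Net configuration is not given here, so the depth and channel counts are assumptions.

```python
# Minimal sketch of a reduced U-Net encoder-decoder for contour segmentation.
# The thesis' sU-Net is not specified here; depth and channels are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 (skip) + 16 (upsampled)
        self.out = nn.Conv2d(16, 1, 1)          # single-channel contour mask

    def forward(self, x):
        e1 = self.enc1(x)                        # skip-connection source
        e2 = self.enc2(self.pool(e1))            # bottleneck features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d1))       # per-pixel probability

mask = SmallUNet()(torch.randn(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```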
25

Learning speaker-specific characteristics with deep neural architecture

Salman, Ahmad January 2012 (has links)
Robust Speaker Recognition (SR) has long been a focus of attention for researchers. The advancement of speech-aided technologies, especially biometrics, highlights the necessity of foolproof SR systems. However, the performance of an SR system depends critically on the quality of the speech features used to represent the speaker-specific information. This research aims to extract speaker-specific information from Mel-frequency Cepstral Coefficients (MFCCs) using deep learning. Speech is a mixture of information components, including linguistic, speaker-specific and emotional-state information, and extracting features for each component is essential for robust performance in the corresponding speech-related task. However, almost all forms of speech representation carry all of this information as a whole, which is responsible for the compromised performance of SR systems. Motivated by the ability of deep architectures to solve complex problems by learning high-level, task-specific information from data, we propose a novel Deep Neural Architecture (DNA) to extract speaker-specific information (SI) from MFCCs, a popular frequency-domain speech signal representation. A two-stage learning strategy is adopted, based on unsupervised training for network initialisation followed by regularised contrastive learning. To train our network in the second stage, we devise a contrastive loss function that discriminates speakers on the basis of their intrinsic statistical patterns, distributed in the representations yielded by our deep network. This is achieved through contrastive pair-wise comparison of these representations for similar or dissimilar speakers. To improve generalisation and reduce the interference of environmental effects with the speaker-specific representation, we regularise the contrastive loss with a data reconstruction loss in a multi-objective optimisation. A detailed study has been carried out to analyse the parameter space when training the proposed deep architecture for optimum performance. Finally, we compare the performance of our learned speaker-specific representations with several state-of-the-art techniques on speaker verification and speaker segmentation tasks. It is evident that the representations acquired through the learned DNA are comparatively insensitive to text, language and environmental variability.
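The multi-objective idea of combining a pairwise contrastive loss with a reconstruction loss can be sketched in PyTorch as below; the thesis' exact loss formulation and network are not reproduced, and the margin and weighting are illustrative assumptions.

```python
# Sketch of the multi-objective idea described above: a pairwise contrastive
# loss on speaker embeddings regularised by an autoencoder reconstruction loss.
# Margin and weighting are illustrative assumptions, not the thesis' values.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_speaker, margin=1.0):
    # same_speaker: 1.0 for pairs from the same speaker, 0.0 otherwise.
    dist = F.pairwise_distance(emb_a, emb_b)
    pos = same_speaker * dist.pow(2)                      # pull similar pairs together
    neg = (1 - same_speaker) * F.relu(margin - dist).pow(2)  # push dissimilar pairs apart
    return (pos + neg).mean()

def total_loss(emb_a, emb_b, recon_a, recon_b, mfcc_a, mfcc_b,
               same_speaker, recon_weight=0.5):
    # Contrastive term separates speakers; reconstruction term regularises the
    # representation against environmental/nuisance variability.
    l_con = contrastive_loss(emb_a, emb_b, same_speaker)
    l_rec = F.mse_loss(recon_a, mfcc_a) + F.mse_loss(recon_b, mfcc_b)
    return l_con + recon_weight * l_rec
```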
26

Improving Capsule Networks using zero-skipping and pruning

Sharifi, Ramin 15 November 2021 (has links)
Capsule Networks are the next generation of image classifiers. Although they have several advantages over conventional Convolutional Neural Networks (CNNs), they remain computationally heavy. Since inference on Capsule Networks is time-consuming, their usage is limited to tasks in which latency is not essential. Approximation methods in Deep Learning help networks shed redundant parameters to increase speed and lower energy consumption. In the first part of this work, we go through an algorithm called zero-skipping. More than 50% of the values in trained CNNs are zeros or small enough to be considered zero. Since multiplication by zero is a trivial operation, the zero-skipping algorithm can play a massive role in speeding up the network. We investigate the eligibility of Capsule Networks for this algorithm on two different datasets. Our results suggest that Capsule Networks contain enough zeros in their Primary Capsules to benefit from this algorithm. In the second part of this thesis, we investigate pruning, one of the most popular Neural Network approximation methods. Pruning is the act of finding and removing neurons which have little or no impact on the output. We run experiments on four different datasets. Pruning Capsule Networks results in the loss of redundant Primary Capsules. The results show a significant increase in speed with a minimal drop in accuracy. We also discuss how dataset complexity affects the pruning strategy. / Graduate
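The quantity zero-skipping exploits, the fraction of (near-)zero activations in a layer's output, can be measured with a short PyTorch sketch like the following; the capsule network itself is not reproduced, and the layer and threshold are illustrative assumptions.

```python
# Sketch of the measurement behind zero-skipping: count how much of a layer's
# output is zero (or near zero) and could therefore be skipped as trivial
# multiplications. The illustrative layer and threshold are assumptions.
import torch
import torch.nn as nn

def zero_fraction(activations: torch.Tensor, eps: float = 1e-6) -> float:
    # Values with magnitude below eps are treated as zero.
    return (activations.abs() < eps).float().mean().item()

# Illustrative "primary capsule"-like stage: conv + ReLU produces many zeros.
layer = nn.Sequential(nn.Conv2d(1, 32, 9), nn.ReLU())
out = layer(torch.randn(8, 1, 28, 28))
print(f"skippable activations: {zero_fraction(out):.1%}")
```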
27

A Dual-Branch Attention Guided Context Aggregation Network for Non-Homogeneous Dehazing

Song, Xiang January 2021 (has links)
Image degradation arises in various environmental conditions due to the existence of aerosols such as fog, haze, and dust. These phenomena reduce image visibility by creating color distortion, reducing contrast, and fading object surfaces. Although end-to-end deep learning approaches have made significant progress in homogeneous dehazing, the image quality these algorithms produce on non-homogeneous real-world images is not yet satisfactory. We argue that two main factors are responsible for this problem: 1) the unbalanced processing of high-level and low-level information in conventional dehazing algorithms, and 2) the lack of trainable data pairs. To address these two problems, we propose a parallel dual-branch design that aims to balance the processing of high-level and low-level information and, through transfer learning, to use small datasets to their full potential. The results from the two parallel branches are aggregated in a simple fusion tail, in which the high-level and low-level information are fused and the final result is generated. To demonstrate the effectiveness of our proposed method, we present extensive experimental results in the thesis. / Thesis / Master of Applied Science (MASc)
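The parallel dual-branch pattern with a fusion tail can be sketched in PyTorch as below; the two branches here are generic stand-ins rather than the thesis' attention-guided context aggregation branches.

```python
# Sketch of the parallel dual-branch pattern described above: two branches
# process the hazy input, and a small fusion tail combines their outputs.
# The branches are generic stand-ins, not the thesis' actual networks.
import torch
import torch.nn as nn

class DualBranchDehazer(nn.Module):
    def __init__(self):
        super().__init__()
        # Branch A: stand-in for the high-level (context/semantic) path.
        self.branch_a = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        # Branch B: stand-in for the low-level (detail/texture) path.
        self.branch_b = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        # Fusion tail: aggregates both branches into the dehazed image.
        self.fusion = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, hazy):
        fused = torch.cat([self.branch_a(hazy), self.branch_b(hazy)], dim=1)
        return self.fusion(fused)

dehazed = DualBranchDehazer()(torch.randn(1, 3, 64, 64))
print(dehazed.shape)  # torch.Size([1, 3, 64, 64])
```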
28

Towards Structured Prediction in Bioinformatics with Deep Learning

Li, Yu 01 November 2020 (has links)
Using machine learning, especially deep learning, to facilitate biological research is a fascinating research direction. However, in addition to the standard classification or regression problems, whose outputs are simple vectors or scalars, in bioinformatics, we often need to predict more complex structured targets, such as 2D images and 3D molecular structures. The above complex prediction tasks are referred to as structured prediction. Structured prediction is more complicated than the traditional classification but has much broader applications, especially in bioinformatics, considering the fact that most of the original bioinformatics problems have complex output objects. Due to the properties of those structured prediction problems, such as having problem-specific constraints and dependency within the labeling space, the straightforward application of existing deep learning models on the problems can lead to unsatisfactory results. In this dissertation, we argue that the following two ideas can help resolve a wide range of structured prediction problems in bioinformatics. Firstly, we can combine deep learning with other classic algorithms, such as probabilistic graphical models, which model the problem structure explicitly. Secondly, we can design and train problem-specific deep learning architectures or methods by considering the structured labeling space and problem constraints, either explicitly or implicitly. We demonstrate our ideas with six projects from four bioinformatics subfields, including sequencing analysis, structure prediction, function annotation, and network analysis. The structured outputs cover 1D electrical signals, 2D images, 3D structures, hierarchical labeling, and heterogeneous networks. With the help of the above ideas, all of our methods can achieve state-of-the-art performance on the corresponding problems. The success of these projects motivates us to extend our work towards other more challenging but important problems, such as health-care problems, which can directly benefit people's health and wellness. We thus conclude this thesis by discussing such future works, and the potential challenges and opportunities.
29

Predicting the future high-risk SARS-CoV-2 variants with deep learning

Chen, NingNing 04 July 2022 (has links)
SARS-CoV-2 has plagued the world since 2019, with the continuous emergence of new variants resulting in repeated waves of outbreaks. Although countermeasures such as worldwide vaccination campaigns have been taken, the virus has mutated to escape the immune system, threatening public health. To win the race with the virus and ultimately end the pandemic, we have to stay one step ahead and predict how SARS-CoV-2 might evolve, so that it can be defeated at the beginning of a new wave. Hence, we propose a deep learning based framework that first builds a deep learning model to shape the fitness landscape of the virus and then uses a genetic algorithm to predict the high-risk variants that might appear in the future. By combining a pre-trained protein language model and structure modeling, the model is trained in a supervised way to predict viral transmissibility and the ability to escape eight antibodies simultaneously. The previous evolutionary trajectory of the virus can be largely recovered by our model, with high correlation to sampling time. Novel mutations predicted by our model show high antibody escape in in silico simulation and overlap with mutations that developed in previously infected patients. Overall, our scheme can provide insights into the evolution of SARS-CoV-2 and hopefully guide vaccine development and increase preparedness.
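The search loop the abstract describes, a genetic algorithm guided by a learned fitness model, can be sketched in Python as follows; the trained transmissibility/escape model is replaced by a placeholder scoring function, and the alphabet, mutation rate, and population sizes are illustrative assumptions.

```python
# Sketch of a genetic algorithm that proposes protein mutations and keeps the
# variants a fitness model scores highest. The learned fitness model is replaced
# here by a random placeholder; all sizes and rates are illustrative.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(sequence: str) -> float:
    # Placeholder for the learned model scoring transmissibility and
    # antibody escape; here it simply returns a random score.
    return random.random()

def mutate(sequence: str, rate: float = 0.01) -> str:
    # Substitute each residue with a random amino acid at the given rate.
    return "".join(
        random.choice(AMINO_ACIDS) if random.random() < rate else aa
        for aa in sequence
    )

def evolve(seed: str, population_size: int = 50, generations: int = 20, keep: int = 10) -> str:
    population = [mutate(seed) for _ in range(population_size)]
    for _ in range(generations):
        # Select the top-scoring variants and breed the next generation.
        scored = sorted(population, key=fitness, reverse=True)[:keep]
        population = [mutate(s) for s in scored for _ in range(population_size // keep)]
    return max(population, key=fitness)

best = evolve("M" * 100)  # hypothetical 100-residue seed sequence
```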
30

Estimation of Predictive Uncertainty in the Supervised Segmentation of Magnetic Resonance Imaging (MRI) Diffusion Images Using Deep Ensemble Learning / ESTIMATING PREDICTIVE UNCERTAINTY IN DEEP LEARNING SEGMENTATION FOR DIFFUSION MRI

McCrindle, Brian January 2021 (has links)
With the desired deployment of Artificial Intelligence (AI), concerns over whether AI can “communicate” why it has made its decisions are of particular importance. In this thesis, we utilize predictive entropy (PE) as a surrogate for predictive uncertainty and report it under various test-time conditions that alter the testing distribution. This is done to evaluate the potential for PE to indicate when users should trust or distrust model predictions under dataset shift or out-of-distribution (OOD) conditions, two scenarios that are prevalent in real-world settings. Specifically, we trained an ensemble of three 2D-UNet architectures to segment synthetically damaged regions in fractional anisotropy scalar maps, a widely used diffusion metric indicating microstructural white-matter damage. Baseline ensemble statistics show that the true positive rate, false negative rate, false positive rate, true negative rate, Dice score, and precision are 0.91, 0.091, 0.23, 0.77, 0.85, and 0.80, respectively. Test-time PE was reported before and after the ensemble was exposed to increasing geometric distortions (OOD), adversarial examples (OOD), and decreasing signal-to-noise ratios (dataset shift). We observed that even though PE shows a strong negative correlation with model performance for increasing adversarial severity (ρ_AE = −1), this correlation is not seen under the distortion or SNR conditions (ρ_D = −0.26, ρ_SNR = −0.30). However, the PE variability (PE-Std) between individual model predictions was shown to be a better indicator of uncertainty, as strong negative correlations between model performance and PE-Std were seen for both geometric distortions and adversarial examples (ρ_D = −0.83, ρ_AE = −1). Unfortunately, PE fails to report large absolute uncertainties under these conditions, restricting the analysis to correlative relationships. Finally, determining an uncertainty threshold between “certain” and “uncertain” model predictions was seen to be heavily dependent on model calibration. For augmentation conditions close to the training distribution, a single threshold could be hypothesized. However, caution must be taken if such a technique is applied clinically, as model miscalibration could nullify such a threshold for samples far from the distribution. To ensure that PE or PE-Std can be used more broadly for uncertainty estimation, further work must be completed. / Thesis / Master of Applied Science (MASc)
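The uncertainty quantities discussed in the abstract, predictive entropy of the ensemble mean and its spread across members (PE-Std), can be sketched in NumPy as follows; the array shapes and toy inputs are illustrative assumptions.

```python
# Sketch of the uncertainty quantities discussed above: predictive entropy (PE)
# of the ensemble-averaged probabilities and the spread of per-model entropies
# (PE-Std). Shapes and toy inputs are illustrative, not the thesis' data.
import numpy as np

def predictive_entropy(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # probs: (..., num_classes) probabilities; entropy is taken over the last axis.
    return -np.sum(probs * np.log(probs + eps), axis=-1)

# member_probs: (num_models, num_voxels, num_classes) softmax outputs.
member_probs = np.random.dirichlet([1.0, 1.0], size=(3, 5))  # 3 models, 5 voxels, 2 classes

ensemble_probs = member_probs.mean(axis=0)              # average over the ensemble
pe = predictive_entropy(ensemble_probs)                 # PE per voxel
pe_std = predictive_entropy(member_probs).std(axis=0)   # PE-Std: member disagreement

print(pe.round(3), pe_std.round(3))
```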
