1

The Automated Prediction of Solar Flares from SDO Images Using Deep Learning

Abed, Ali K., Qahwaji, Rami S.R., Abed, A. 21 March 2021 (has links)
No / In the last few years, there has been growing interest in near-real-time solar data processing, especially for space weather applications. This is due to the impact of space weather on space-borne and ground-based systems and on industry, which in turn affects our lives. In the current study, a deep learning approach is used to establish an automated hybrid computer system for short-term forecasting, based on the complexity level of sunspot groups in SDO/HMI Intensitygram images. The suggested system generates forecasts of solar flare occurrence within the following 24 h. The input data for the proposed system are SDO/HMI full-disk Intensitygram images and SDO/HMI full-disk magnetogram images. The system output is a “Flare or Non-Flare” prediction of daily flare occurrence (C, M, and X classes). The system integrates an image processing stage that automatically detects sunspot groups on SDO/HMI Intensitygram images using active-region data extracted from SDO/HMI magnetogram images (presented by Colak and Qahwaji, 2008) with a deep learning stage that generates the forecasts. The deep learning component analyzes each detected sunspot group on the solar disk to predict whether it is capable of releasing a significant flare. The system introduced in this work is called ASAP_Deep. Its deep learning model is based on the integration of a Convolutional Neural Network (CNN) and a Softmax classifier to extract discriminative features from the sunspot group images detected in the SDO/HMI Intensitygram and magnetogram images. Furthermore, a CNN training scheme based on the integration of the back-propagation algorithm and a mini-batch AdaGrad optimization method is proposed for weight updates and learning-rate adaptation, respectively. The images of the sunspot regions are cropped automatically by the imaging system and processed with the trained deep learning model to provide near-real-time predictions. The major results of this study are as follows. Firstly, the ASAP_Deep system builds on the ASAP system introduced in Colak and Qahwaji (2009) and improves it with an updated deep learning-based prediction capability. Secondly, the CNN is applied successfully to the sunspot group images without any pre-processing or feature extraction. Thirdly, the system's results are considerably better, especially for the false alarm ratio (FAR); this reduces the losses caused by unnecessary protective measures applied by companies. The proposed system also achieves relatively high scores for the True Skill Statistic (TSS) and the Heidke Skill Score (HSS).
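
The abstract describes a CNN-plus-Softmax classifier trained with back-propagation and mini-batch AdaGrad, and evaluated with FAR, TSS, and HSS. As a rough illustration only (the thesis does not publish its code here), the sketch below shows a minimal binary Flare/Non-Flare CNN with an AdaGrad optimizer, together with the standard contingency-table definitions of FAR, TSS, and HSS; the layer sizes, 64x64 input resolution, and all names are assumptions, not the authors' actual ASAP_Deep configuration.

```python
# Minimal sketch, in the spirit of the ASAP_Deep description above.
# Layer sizes, input resolution, and names are illustrative assumptions,
# not the published configuration.
import tensorflow as tf

def build_flare_cnn(input_shape=(64, 64, 1), n_classes=2):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),            # cropped sunspot-group image
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # Softmax classifier head
    ])
    # Mini-batch AdaGrad adapts per-parameter learning rates during back-propagation.
    model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def forecast_skill(tp, fp, fn, tn):
    """Standard contingency-table scores used to verify flare forecasts."""
    n = tp + fp + fn + tn
    far = fp / (tp + fp)                       # False Alarm Ratio
    tss = tp / (tp + fn) - fp / (fp + tn)      # True Skill Statistic (POD - POFD)
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n
    hss = (tp + tn - expected) / (n - expected)  # Heidke Skill Score
    return far, tss, hss
```
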
2

A multi-biometric iris recognition system based on a deep learning approach

Al-Waisy, Alaa S., Qahwaji, Rami S.R., Ipson, Stanley S., Al-Fahdawi, Shumoos, Nagem, Tarek A.M. 24 October 2017 (has links)
Yes / Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. In this paper, an efficient and real-time multimodal biometric system is proposed, based on building deep learning representations of images of both the right and left irises of a person and fusing the results with a ranking-level fusion method. The trained deep learning system, called IrisConvNet, has an architecture based on a combination of a Convolutional Neural Network (CNN) and a Softmax classifier; it extracts discriminative features from the input image, which represents the localized iris region, without any domain knowledge, and then classifies it into one of N classes. In this work, a discriminative CNN training scheme based on a combination of the back-propagation algorithm and a mini-batch AdaGrad optimization method is proposed for weight updates and learning-rate adaptation, respectively. In addition, other training strategies (e.g., the dropout method and data augmentation) are also proposed in order to evaluate different CNN architectures. The performance of the proposed system is tested on three public datasets collected under different conditions: the SDUMLA-HMT, CASIA-Iris-V3 Interval, and IITD iris databases. The results obtained with the proposed system outperform other state-of-the-art approaches (e.g., Wavelet transform, Scattering transform, Local Binary Pattern, and PCA), achieving a Rank-1 identification rate of 100% on all the employed databases and a recognition time of less than one second per person.
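
The abstract mentions fusing the left-eye and right-eye scores with a ranking-level fusion method but does not state the exact rule here. The sketch below is a minimal, hypothetical example of one common ranking-level scheme (a Borda-style rank sum) applied to two per-identity softmax score vectors; the function names, the toy scores, and the choice of Borda counting are assumptions, not necessarily the fusion rule used in IrisConvNet.

```python
# Illustrative ranking-level fusion of left/right iris scores (Borda-style rank sum).
# All names and the fusion rule are assumptions for illustration only.
import numpy as np

def borda_rank_fusion(scores_left, scores_right):
    """Fuse two per-class score vectors (e.g., softmax outputs of shape (n_classes,))
    by summing their class ranks; the identity with the best combined rank wins."""
    def ranks(scores):
        # Rank 0 = most likely identity, rank n-1 = least likely.
        order = np.argsort(-scores)
        r = np.empty_like(order)
        r[order] = np.arange(len(scores))
        return r

    combined = ranks(scores_left) + ranks(scores_right)
    return int(np.argmin(combined))  # predicted identity (class index)

# Toy usage with 4 enrolled identities: the two irises disagree slightly,
# but the fused ranking still selects identity 2.
left = np.array([0.10, 0.20, 0.60, 0.10])
right = np.array([0.05, 0.35, 0.50, 0.10])
print(borda_rank_fusion(left, right))  # -> 2
```
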
