791

Deep face recognition using imperfect facial data

Elmahmudi, Ali A.M., Ugail, Hassan 27 April 2019 (has links)
Yes / Today, computer-based face recognition is a mature and reliable mechanism that is practically utilised for many access control scenarios. As such, face recognition or authentication is predominantly performed using ‘perfect’ data of full frontal facial images. In reality, however, there are numerous situations where full frontal faces may not be available; the imperfect face images that often come from CCTV cameras are a case in point. Hence, the problem of computer-based face recognition using partial facial data as probes is still a largely unexplored area of research. Given that humans and computers perform face recognition and authentication inherently differently, it is intriguing to understand how a computer favours various parts of the face when presented with the challenge of face recognition. In this work, we explore the question of face recognition using partial facial data. We do so through novel experiments that test the performance of machine learning using partial faces and other manipulations of face images, such as rotation and zooming, which we use as training and recognition cues. In particular, we study the rate of recognition for individual parts of the face such as the eyes, mouth, nose and cheeks. We also study the effect of facial rotation on recognition, as well as the effect of zooming out of the facial images. Our experiments are based on a state-of-the-art convolutional neural network architecture, along with the pre-trained VGG-Face model, through which we extract features for machine learning. We then use two classifiers, namely cosine similarity and linear support vector machines, to test the recognition rates. We ran our experiments on two publicly available datasets: the controlled Brazilian FEI dataset and the uncontrolled LFW dataset. Our results show that individual parts of the face such as the eyes, nose and cheeks have low recognition rates, though recognition quickly improves when parts of the face are presented in combination as probes.
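
The recognition pipeline this abstract describes can be illustrated with a short sketch: deep features are extracted from face crops with a pretrained CNN and matched by cosine similarity. This is a hedged approximation, not the authors' code; a generic torchvision ResNet-50 stands in for the pre-trained VGG-Face model, and the file paths are hypothetical.

    import torch
    import torch.nn.functional as F
    from PIL import Image
    from torchvision import models, transforms

    # Pretrained backbone with the classification head removed -> feature extractor.
    # (ResNet-50 here; the paper uses the pre-trained VGG-Face model instead.)
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def embed(path):
        # Map a face image (full, or a partial crop such as eyes-only) to a feature vector.
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return backbone(x).squeeze(0)

    # Match a partial-face probe against gallery templates by cosine similarity.
    gallery = {"subject_01": embed("gallery/subject_01_frontal.jpg")}  # hypothetical paths
    probe = embed("probes/subject_01_eyes_only.jpg")
    scores = {sid: F.cosine_similarity(probe, g, dim=0).item() for sid, g in gallery.items()}
    print(max(scores, key=scores.get))
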
792

A novel application of deep learning with image cropping: a smart cities use case for flood monitoring

Mishra, Bhupesh K., Thakker, Dhaval, Mazumdar, S., Neagu, Daniel, Gheorghe, Marian, Simpson, Sydney 13 February 2020 (has links)
Yes / Event monitoring is an essential application of Smart City platforms. Real-time monitoring of gully and drainage blockage is an important part of flood monitoring applications. Building viable IoT sensors for detecting blockage is a complex task due to the limitations of deploying such sensors in situ. Image classification with deep learning is a potential alternative solution. However, there are no image datasets of gullies and drainages. We faced these challenges as part of developing a flood monitoring application in a European Union-funded project. To address them, we propose a novel image classification approach based on deep learning with an IoT-enabled camera to monitor gullies and drainages. The approach uses deep learning to develop an effective image classification model that assigns blockage images to class labels based on severity. To handle the complexity of video-based images, and the consequent poor classification accuracy of the model, we carried out experiments in which image edges were removed by cropping. Cropping concentrates the classifier on the regions of interest within images, leaving out a proportion of the image edges. An image dataset of crowd-sourced, publicly accessible images was curated to train and test the proposed model. For validation, model accuracies were compared with and without image cropping. Cropping-based image classification showed improved classification accuracy. This paper outlines the lessons from our experimentation that have a wider impact on many similar use cases involving IoT-based cameras as part of smart city event monitoring platforms. / European Regional Development Fund Interreg project Smart Cities and Open Data REuse (SCORE).
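
The edge-cropping step lends itself to a minimal sketch like the one below. The 20% margin, target size and file name are illustrative assumptions, not the paper's tuned values.

    from PIL import Image

    def crop_edges(img, margin=0.2):
        # Discard `margin` (a fraction of width/height) from every image edge,
        # keeping the central region of interest for the classifier.
        w, h = img.size
        dx, dy = int(w * margin), int(h * margin)
        return img.crop((dx, dy, w - dx, h - dy))

    frame = Image.open("gully_frame.jpg")        # hypothetical IoT camera frame
    roi = crop_edges(frame).resize((224, 224))   # cropped input for the CNN
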
793

The Automated Prediction of Solar Flares from SDO Images Using Deep Learning

Abed, Ali K., Qahwaji, Rami S.R., Abed, A. 21 March 2021 (has links)
No / In the last few years, there has been growing interest in near-real-time solar data processing, especially for space weather applications. This is due to the impact of space weather on both space-borne and ground-based systems, and on industry, which in turn affects our lives. In the current study, a deep learning approach is used to establish an automated hybrid computer system for short-term forecasting based on the complexity level of sunspot groups in SDO/HMI Intensitygram images. The suggested system generates forecasts of solar flare occurrences within the following 24 hours. The input data for the proposed system are SDO/HMI full-disk Intensitygram images and SDO/HMI full-disk magnetogram images. The system outputs a “Flare or Non-Flare” prediction of daily flare occurrences (C, M, and X classes). The system integrates an image processing stage that automatically detects sunspot groups on SDO/HMI Intensitygram images using active-region data extracted from SDO/HMI magnetogram images (presented by Colak and Qahwaji, 2008) with deep learning to generate the forecasts. Our deep learning-based system is designed to analyze sunspot groups on the solar disk to predict whether a given sunspot group is capable of releasing a significant flare. The system introduced in this work is called ASAP_Deep. Its deep learning model is based on the integration of a Convolutional Neural Network (CNN) and a softmax classifier to extract special features from the sunspot group images detected in SDO/HMI (Intensitygram and magnetogram) images. Furthermore, a CNN training scheme integrating a back-propagation algorithm for weight updates with a mini-batch AdaGrad optimization method for adapting learning rates is suggested. The images of the sunspot regions are cropped automatically by the imaging system and processed using deep learning rules to provide near-real-time predictions. The major results of this study are as follows. Firstly, the ASAP_Deep system builds on the ASAP system introduced in Colak and Qahwaji (2009) but improves it with an updated deep learning-based prediction capability. Secondly, we successfully apply a CNN to the sunspot group images without any pre-processing or feature extraction. Thirdly, our system's results are considerably better, especially for the false alarm ratio (FAR); this reduces the losses resulting from the protection measures applied by companies. The proposed system also achieves relatively high scores for the True Skill Statistic (TSS) and Heidke Skill Score (HSS).
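
A simplified sketch of the prediction stage follows: a small CNN with a softmax output classifying cropped sunspot-group patches as flare or non-flare, trained with back-propagation and mini-batch AdaGrad as the abstract describes. The architecture, patch size, learning rate and dummy batch are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FlareCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 13 * 13, 2)  # 2 classes: flare / non-flare

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))  # logits; softmax is applied in the loss

    model = FlareCNN()
    optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)  # per-parameter LR adaptation
    criterion = nn.CrossEntropyLoss()  # log-softmax + NLL, i.e. a softmax classifier

    patches = torch.randn(8, 1, 64, 64)       # a mini-batch of sunspot-group crops (dummy)
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)  # back-propagation computes the gradients
    loss.backward()
    optimizer.step()
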
794

A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262

Vasudevan, Vinod, Abdullatif, Amr R.A., Kabir, Sohag, Campean, Felician 10 December 2021 (has links)
Yes / Assuring safety, and thereby enabling certification, is a key challenge for many kinds of Machine Learning (ML) models. ML is one of the most widely used technological solutions for automating complex tasks such as autonomous driving, traffic sign recognition and lane keep assist. While the application of ML is making significant contributions in the automotive industry, it introduces concerns related to the safety and security of these systems. ML models should be robust and reliable throughout their operational life and prove their trustworthiness in all use cases associated with vehicle operation. Demonstrating confidence in the safety and security of ML-based systems, and thereby giving assurance to regulators, certification authorities and other stakeholders, is an important task. This paper proposes a framework to handle the uncertainties of ML models, improving their safety level and thereby supporting certification of ML models in the automotive industry.
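
The abstract does not spell out the uncertainty mechanism, so the sketch below shows one widely used technique as a stand-in, not the paper's framework: Monte Carlo dropout, where keeping dropout active at inference exposes prediction variance that a safety monitor can threshold. The network, input and threshold are all hypothetical.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 3))

    def mc_dropout_predict(model, x, passes=30):
        model.train()  # keep dropout active at inference time
        with torch.no_grad():
            probs = torch.stack([model(x).softmax(-1) for _ in range(passes)])
        return probs.mean(0), probs.std(0)  # predictive mean and an uncertainty estimate

    x = torch.randn(1, 16)               # e.g. features from a traffic-sign pipeline (dummy)
    mean, std = mc_dropout_predict(model, x)
    if std.max() > 0.15:                 # hypothetical threshold from a safety analysis
        print("Uncertain prediction: defer to a safe fallback")
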
795

Noninvasive assessment and classification of human skin burns using images of Caucasian and African patients

Abubakar, Aliyu, Ugail, Hassan, Bukar, Ali M. 20 March 2022 (has links)
Yes / Burns are devastating injuries that subject thousands of people to loss of life and disfigurement each year. Both high-income and Third World countries face major evaluation challenges, including but not limited to an inadequate workforce, poor diagnostic facilities, inefficient diagnosis and high operational cost. As such, there is a need for an automatic machine learning algorithm that noninvasively identifies skin burns. Such an algorithm would operate with little or no human intervention, thereby acting as an affordable substitute for human expertise. We leverage the weights of pretrained deep neural networks for image description and, subsequently, feed the extracted image features into a support vector machine for classification. To the best of our knowledge, this is the first study to investigate black African skin. Interestingly, the proposed algorithm achieves state-of-the-art classification accuracy on both the Caucasian and African datasets.
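
The described pipeline, pretrained-network descriptors feeding a support vector machine, reduces to a few lines. In the sketch below, dummy arrays stand in for deep features extracted from the skin images (an extractor like the one shown after record 791 would produce them); feature dimension and split are assumptions.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Rows stand in for deep descriptors extracted from skin images with a pretrained CNN.
    features = np.random.rand(200, 4096)    # dummy 4096-D deep features
    labels = np.random.randint(0, 2, 200)   # 0 = healthy skin, 1 = burn (dummy)

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
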
796

Investigating patterns of deep sea coral and sponge diversity and abundance across multiple spatial scales in the Central Pacific

Kennedy, Brian R.C. 01 November 2023 (has links)
The deep sea is the largest ecosystem on the planet, comprising more than 90% of the volume that life can inhabit, yet it is the least explored biome in the world. The deep sea includes the benthos, which makes up 91.5% of the seafloor globally, and the water column deeper than 200 meters. It hosts a wealth of ecosystems including deep-sea vents, seamount coral gardens, abyssal plains, high-productivity whale falls, and life even in the deepest trenches. We now understand that all of these ecosystems host a variety of habitats, each with their own ecology and unique species. These ecosystems and habitats, and their associated biodiversity, provide essential ecosystem services such as carbon sequestration, nutrient regeneration, microbial detoxification processes, fisheries provisioning, and many others. However, despite the uniqueness of these ecosystems and the importance of the services they provide, we still know far less about them than we do about their shallow water and terrestrial counterparts. In this dissertation, I contribute new insights about the patterns of biodiversity in the Pacific Ocean across a large geographic area and a wide range of depths. To that end, in Chapter 1, I used one of the largest ocean exploration datasets to look for patterns of abundance and diversity across the most common benthic invertebrate taxa found on Pacific seamounts, Anthozoa, Porifera, and Echinodermata, across the Central and Western Pacific. In addition to quantifying the diversity and abundance of known taxa, I also documented patterns of as-yet unidentified taxa by region, depth, and deepwater feature (seamount shape). Building on patterns associated with seamount shape that were described in Chapter 2, I focused on the effect of seamount shape on the diversity and abundance of deep-sea coral communities in Chapter 3. The analysis presented in Chapter 3 provides strong support for the novel hypothesis that gross seamount morphology is a significant driver of community composition. In Chapter 4, I focused on a single seamount to investigate the biodiversity and abundance of coral and sponge taxa at a finer spatial scale, examining the role of direction (N, S, E, W) on the different flanks of a single equatorial seamount. This analysis yielded interesting, consistent patterns of depth zonation on all sides of the seamount, but with differences in abundance patterns on each flank for individual taxa. Finally, in Chapter 5, I took a global perspective to investigate gaps in deepwater data, with the goal of identifying which regions need further exploration before patterns of deep-sea biodiversity can be conclusively determined; this will be critical for assessing the health of deepwater ecosystems under climate change, increasing exploitation pressure, and co-occurring increases in conservation effort. Merging Ocean Biogeographic Information System (OBIS) records with the largest collection of deep submergence dive records ever assembled, I used a proposed biogeographic province schema to identify areas with the least supporting data. Additionally, I coupled records from OBIS with climate change projections to identify the areas with the fewest biodiversity records that are likely to change the fastest under different IPCC projections. These areas with few records and a high likelihood of change by the end of the century should become priority targets for future exploration.
Taken together, this dissertation provides valuable insights and generates new hypotheses about patterns and drivers of deep-sea biodiversity, and puts forth recommendations for future research and exploration efforts.
797

‘You end up with nothing’: the experience of being a statistic of ‘in-work poverty’ in the UK

McBride, Jo, Smith, Andrew J., Mbala, M. 17 October 2017 (has links)
Yes / Set in the context of the recent unprecedented upsurge of in-work poverty (IWP) in the UK – which currently exceeds out-of-work poverty – this article presents an account of the realities of experiencing poverty while being employed. Central issues of low pay, limited working hours, underemployment and constrained employment opportunities combine to generate severe financial complexities and challenges. This testimony, taken comparatively over a year, reveals experiences not only of IWP but of deep poverty, with wages insufficient to cover the basic essentials of nourishing food and adequate clothing. This article contributes to current academic and social policy debates around low-paid work, IWP, the use of foodbanks and underemployment. New dimensions are offered regarding worker vulnerabilities, given the recent growth of the IWP phenomenon.
798

ADVANCED TRANSFER LEARNING IN DOMAINS WITH LOW-QUALITY TEMPORAL DATA AND SCARCE LABELS

Abdel Hai, Ameen, 0000-0001-5173-5291 12 1900 (has links)
Numerous high-impact applications involve predictive modeling of real-world data. They range from hospital readmission prediction for enhanced patient care to event detection in power systems for grid stabilization. Developing performant machine learning models necessitates extensive high-quality training data, ample labeled samples, and training and testing datasets derived from identical distributions. However, such requirements may be impractical in applications where obtaining labeled data is expensive or challenging, where the quality of data is low, or where covariate or concept shifts arise. Our emphasis was on devising transfer learning methods to address these inherent challenges across two distinct applications. We delved into a notably challenging transfer learning application that revolves around predicting hospital readmission risks using electronic health record (EHR) data to identify patients who may benefit from extra care. Readmission models based on EHR data can be compromised by quality variations due to manual data input methods. Utilizing high-quality EHR data from a different hospital system to enhance prediction on a target hospital using traditional approaches might bias the dataset if the distributions of the source and target data differ. To address this, we introduce an Early Readmission Risk Temporal Deep Adaptation Network, ERR-TDAN, for cross-domain knowledge transfer. A model developed using target data from an urban academic hospital was enhanced by transferring knowledge from high-quality source data. Given the success of our method in learning from data sourced from multiple hospital systems with different distributions, we further addressed the challenge and infeasibility of developing hospital-specific readmission risk prediction models using data from individual hospital systems. Herein, based on an extension of the previous method, we introduce an Early Readmission Risk Domain Generalization Network, ERR-DGN. It is adept at generalizing across multiple EHR data sources and seamlessly adapting to previously unseen test domains. In another challenging application, we addressed event detection in electrical grids, where dependencies are spatiotemporal and highly non-linear, using high-volume field-recorded data from multiple Phasor Measurement Units (PMUs). Existing historical event logs created manually do not correlate well with the corresponding PMU measurements due to scarce and temporally imprecise labels. Extending event logs to a more complete set of labeled events is very costly and often infeasible. We focused on a transfer learning method tailored for event detection from PMU data to reduce the need for additional manual labeling. To demonstrate feasibility, we tested our approach on large datasets collected from the Western and Eastern Interconnections of the U.S.A., reusing a small number of carefully selected labeled PMU samples from one power system to detect events in another. Experimental findings suggest that the proposed knowledge transfer methods for healthcare and power system applications have the potential to effectively address the identified challenges and limitations. Evaluation of the proposed readmission models shows that readmission risk predictions can be enhanced when leveraging higher-quality EHR data from a different site, and when models are trained on data from multiple sites and subsequently applied to a novel hospital site.
Moreover, label scarcity in power systems can be addressed by a transfer learning method in conjunction with a semi-supervised algorithm capable of detecting events from minimal labeled instances. / Computer and Information Science
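
The ERR-TDAN and ERR-DGN architectures are not detailed in this abstract, so the sketch below shows only the underlying transfer-learning pattern in its simplest form: pretrain a temporal encoder on source-hospital EHR sequences, then freeze it and fine-tune the prediction head on the scarcer target-site data. All names, shapes and hyperparameters are assumptions.

    import torch
    import torch.nn as nn

    class ReadmissionNet(nn.Module):
        def __init__(self, n_features=32):
            super().__init__()
            self.encoder = nn.GRU(n_features, 64, batch_first=True)  # temporal EHR encoder
            self.head = nn.Linear(64, 1)                             # readmission-risk logit

        def forward(self, x):
            _, h = self.encoder(x)
            return self.head(h.squeeze(0))

    model = ReadmissionNet()
    # ... after pretraining on source-hospital data:
    for p in model.encoder.parameters():
        p.requires_grad = False  # freeze the source-domain representation
    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)

    target_batch = torch.randn(16, 48, 32)  # dummy: 16 patients, 48 time steps, 32 features
    risk = torch.sigmoid(model(target_batch))
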
799

Characterization of neurofluid flow using physics-guided enhancement of 4D flow MRI

Neal Minesh Patel (18429606) 24 April 2024 (has links)
<p dir="ltr">Cerebrospinal fluid (CSF) plays a diverse role within the skull including cushioning the brain, regulating intracranial pressure, and clearing metabolic waste via the glymphatic system. Disruptions in CSF flow have long been investigated for hydrocephalus-related diseases such as idiopathic normal pressure hydrocephalus (iNPH). Recently, changes in CSF flow have been implicated in neurodegenerative disorders such as Alzheimer’s disease (AD) and Parkinson’s disease. It remains difficult to obtain <i>in vivo </i>measurements of CSF flow which contribute to disease initiation, progression, and treatment. Three-directional phase-contrast MR imaging (4D flow MRI) has been used to measure CSF velocities within the cerebral ventricles. However, there remain challenges in balancing acquisition time, spatiotemporal resolution, and velocity-to-noise ratio. This is complicated by the low velocities and long relaxation times associated with CSF flow. Additionally, flow-derived metrics associated with cellular adaptations and transport rely on near-wall velocities which are poorly resolved and noisy. To address these challenges, we have applied physics-guided neural networks (PGNN) to super-resolve and denoise synthetic 4D flow MRI of CSF flow within the 3rd and 4th ventricles using novel physics-based loss functions. These loss functions are specifically designed to ensure that high-resolution estimations of flow fields are physically consistent and temporarily coherent. We apply these PGNN to various test cases including synthetically generated 4D flow MRI in the cerebral ventricles and vasculature, <i>in vitro</i> 4D flow MRI acquired at two resolutions in 3D printed phantoms of the 3rd and 4th ventricles, and in vivo 4D flow MRI in a healthy subject. Lastly, we apply these physics-guided networks to investigate blood flow through cerebral aneurysms. These techniques can empower larger studies investigating the coupling between arterial blood flow and CSF flow in conditions such as iNPH and AD.</p>
800

Stability of Embankments Founded on Soft Soil Improved with Deep-Mixing-Method Columns

Navin, Michael Patrick 25 August 2005 (has links)
Foundations constructed by the deep mixing method have been used to successfully support embankments, structures, and excavations in Japan, Scandinavia, the U.S., and other countries. The current state of practice is that design is based on deterministic analyses of settlement and stability, even though deep mixed materials are highly variable. Conservative deterministic design procedures have evolved to limit failures. Disadvantages of this approach include (1) designs with an unknown degree of conservatism and (2) contract administration problems resulting from unrealistic specifications for deep mixed materials. This dissertation describes research conducted to develop reliability-based design procedures for foundations constructed using the deep mixing method. The emphasis of the research and the included examples are for embankment support applications, but the principles are applicable to foundations constructed for other purposes. Reliability analyses for foundations created by the deep mixing method are described and illustrated using an example embankment. The deterministic stability analyses for the example embankment were performed using two methods: limit equilibrium analyses and numerical stress-strain analyses. An important finding from the research is that both numerical analyses and reliability analyses are needed to properly design embankments supported on deep mixed columns. Numerical analyses are necessary to address failure modes, such as column bending and tilting, that are not addressed by limit equilibrium analyses, which only cover composite shearing. Reliability analyses are necessary to address the impacts of variability of the deep mixed materials and other system components. Reliability analyses also provide a rational basis for establishing statistical specifications for deep mixed materials. Such specifications will simplify administration of construction contracts and reduce claims while still providing assurance that the design intent is satisfied. It is recommended that reliability-based design and statistically-based specifications be implemented in practice now. / Ph. D.
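
The reliability analyses described here can be illustrated with a minimal Monte Carlo sketch: treat the deep-mixed column strength as a lognormal random variable with a high coefficient of variation, as the material's variability suggests, and estimate the probability that shear demand exceeds capacity. All numerical values are illustrative assumptions, not the dissertation's data.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Lognormal column strength: mean 1000 kPa, coefficient of variation 0.5 (assumed).
    mean_s, cov = 1000.0, 0.5
    sigma = np.sqrt(np.log(1.0 + cov**2))
    mu = np.log(mean_s) - 0.5 * sigma**2
    strength = rng.lognormal(mu, sigma, n)   # kPa, mobilised column capacity

    demand = rng.normal(400.0, 60.0, n)      # kPa, shear demand from the embankment (assumed)

    p_f = np.mean(strength < demand)         # probability that demand exceeds capacity
    print(f"estimated probability of failure: {p_f:.4f}")
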
