  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

A novel application of deep learning with image cropping: a smart cities use case for flood monitoring

Mishra, Bhupesh K., Thakker, Dhaval, Mazumdar, S., Neagu, Daniel, Gheorghe, Marian, Simpson, Sydney 13 February 2020 (has links)
Yes / Event monitoring is an essential application of Smart City platforms. Real-time monitoring of gully and drainage blockage is an important part of flood monitoring applications. Building viable IoT sensors for detecting blockage is a complex task due to the limitations of deploying such sensors in situ. Image classification with deep learning is a potential alternative solution. However, there are no image datasets of gullies and drainages. We faced these challenges while developing a flood monitoring application in a European Union-funded project. To address them, we propose a novel image classification approach based on deep learning with an IoT-enabled camera to monitor gullies and drainages. The approach uses deep learning to develop an effective image classification model that classifies blockage images into class labels based on severity. To handle the complexity of video-based images, and the consequent poor classification accuracy of the model, we carried out experiments in which image edges were removed by cropping. Cropping in our experiments is intended to concentrate on the regions of interest within images, leaving out a proportion of the image edges. An image dataset of crowd-sourced, publicly accessible images was curated to train and test the proposed model. For validation, model accuracies were compared with and without image cropping. Cropping-based image classification improved classification accuracy. This paper outlines the lessons from our experimentation that have a wider impact on many similar use cases involving IoT-based cameras as part of smart city event monitoring platforms. / European Regional Development Fund Interreg project Smart Cities and Open Data REuse (SCORE).
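
The abstract does not give the crop proportion or the model architecture, so the following is a minimal PyTorch sketch of how edge-removing crops can be placed ahead of a CNN classifier for blockage-severity labels; the label names, crop sizes, and backbone are all assumptions, not the paper's configuration.

```python
import torch.nn as nn
from torchvision import transforms, models

# Hypothetical severity labels; the paper classifies blockage images
# by severity, but its exact label set is an assumption here.
CLASSES = ["no_blockage", "partial_blockage", "full_blockage"]

# Crop away the image border so the model concentrates on the central
# region of interest, as the abstract describes (proportion assumed).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),   # removes a band of edge pixels
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A generic pretrained backbone with a new severity head (assumed).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
```

Comparing validation accuracy with `CenterCrop` included versus omitted mirrors the with/without-cropping comparison the abstract reports.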
362

The Automated Prediction of Solar Flares from SDO Images Using Deep Learning

Abed, Ali K., Qahwaji, Rami S.R., Abed, A. 21 March 2021 (has links)
No / In the last few years, there has been growing interest in near-real-time solar data processing, especially for space weather applications, due to the impact of space weather on both space-borne and ground-based systems and, subsequently, on our lives. In the current study, a deep learning approach is used to establish an automated hybrid computer system for short-term forecasting based on the complexity level of sunspot groups in SDO/HMI Intensitygram images. The system generates forecasts of solar flare occurrence within the following 24 h. Its input data are SDO/HMI full-disk Intensitygram images and SDO/HMI full-disk magnetogram images; its outputs are daily "Flare or Non-Flare" forecasts for C, M, and X classes. The system integrates an image processing stage that automatically detects sunspot groups on SDO/HMI Intensitygram images using active-region data extracted from SDO/HMI magnetogram images (presented by Colak and Qahwaji, 2008) with deep learning to generate the forecasts. The deep learning component analyzes sunspot groups on the solar disk to predict whether a given group is capable of releasing a significant flare. The system introduced in this work is called ASAP_Deep. Its deep learning model integrates a Convolutional Neural Network (CNN) and a Softmax classifier to extract features from the sunspot group images detected in SDO/HMI Intensitygram and magnetogram images. Furthermore, a CNN training scheme that combines a back-propagation algorithm with a mini-batch AdaGrad optimization method is suggested for weight updates and for adapting learning rates, respectively. The images of the sunspot regions are cropped automatically by the imaging system and processed by the deep learning model to provide near-real-time predictions. The major results of this study are as follows. Firstly, the ASAP_Deep system builds on the ASAP system introduced in Colak and Qahwaji (2009) and improves it with an updated deep learning-based prediction capability. Secondly, we successfully apply a CNN to the sunspot group images without any pre-processing or feature extraction. Thirdly, our system's results are considerably better, especially for the false alarm ratio (FAR); this reduces the losses resulting from the protective measures applied by companies. The proposed system also achieves relatively high scores for the True Skill Statistic (TSS) and the Heidke Skill Score (HSS).
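
As a rough illustration of the CNN-plus-Softmax classifier trained with back-propagation and mini-batch AdaGrad that the abstract names, here is a PyTorch sketch; the layer sizes, 64x64 patch resolution, and learning rate are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SunspotCNN(nn.Module):
    """Minimal CNN; CrossEntropyLoss supplies the softmax classifier."""
    def __init__(self, n_classes=2):  # "Flare" vs. "Non-Flare"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 13 * 13, n_classes)

    def forward(self, x):               # x: (B, 1, 64, 64) cropped sunspot patch
        z = self.features(x).flatten(1)
        return self.classifier(z)       # logits; softmax applied in the loss

model = SunspotCNN()
opt = torch.optim.Adagrad(model.parameters(), lr=0.01)  # mini-batch AdaGrad
loss_fn = nn.CrossEntropyLoss()

# One mini-batch update: back-propagation with AdaGrad (dummy data).
x = torch.randn(32, 1, 64, 64)
y = torch.randint(0, 2, (32,))
opt.zero_grad()
loss_fn(model(x), y).backward()
opt.step()
```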
363

A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262

Vasudevan, Vinod, Abdullatif, Amr R.A., Kabir, Sohag, Campean, Felician 10 December 2021 (has links)
Yes / Assuring safety, and thereby achieving certification, is a key challenge for many kinds of Machine Learning (ML) models. ML is one of the most widely used technological solutions for automating complex tasks such as autonomous driving, traffic sign recognition, and lane keep assist. While the application of ML is making a significant contribution in the automotive industry, it introduces concerns related to the safety and security of these systems. ML models should be robust and reliable throughout, and prove their trustworthiness in, all use cases associated with vehicle operation. Establishing confidence in the safety and security of ML-based systems, and thereby giving assurance to regulators, certification authorities, and other stakeholders, is an important task. This paper proposes a framework to handle the uncertainties of ML models, to improve the safety level and thereby support the certification of ML models in the automotive industry.
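
The abstract does not specify how uncertainty is quantified; as one plausible building block, the sketch below uses Monte Carlo dropout (an assumption, not the paper's method) to expose a model's predictive uncertainty so that low-confidence outputs can be routed to a safe fallback, in the spirit of ISO 26262 safety mechanisms. The threshold value is a placeholder to be calibrated during validation.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Requires a model containing dropout layers; keeps them active at inference."""
    model.train()  # leave dropout on so repeated passes differ
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)            # averaged class probabilities
    var = probs.var(dim=0).sum(dim=-1)  # dispersion as an uncertainty proxy
    return mean, var

def safe_decision(mean, var, threshold=0.05):
    """Reject predictions whose variance exceeds a calibrated threshold (assumed)."""
    return torch.where(var < threshold, mean.argmax(dim=-1),
                       torch.full_like(var, -1, dtype=torch.long))  # -1 = fallback
```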
364

Characterization of neurofluid flow using physics-guided enhancement of 4D flow MRI

Neal Minesh Patel (18429606) 24 April 2024 (has links)
Cerebrospinal fluid (CSF) plays a diverse role within the skull, including cushioning the brain, regulating intracranial pressure, and clearing metabolic waste via the glymphatic system. Disruptions in CSF flow have long been investigated in hydrocephalus-related diseases such as idiopathic normal pressure hydrocephalus (iNPH). Recently, changes in CSF flow have been implicated in neurodegenerative disorders such as Alzheimer's disease (AD) and Parkinson's disease. It remains difficult to obtain in vivo measurements of the CSF flows that contribute to disease initiation, progression, and treatment. Three-directional phase-contrast MR imaging (4D flow MRI) has been used to measure CSF velocities within the cerebral ventricles. However, challenges remain in balancing acquisition time, spatiotemporal resolution, and velocity-to-noise ratio, complicated by the low velocities and long relaxation times associated with CSF flow. Additionally, flow-derived metrics associated with cellular adaptations and transport rely on near-wall velocities, which are poorly resolved and noisy. To address these challenges, we have applied physics-guided neural networks (PGNN) to super-resolve and denoise synthetic 4D flow MRI of CSF flow within the 3rd and 4th ventricles using novel physics-based loss functions. These loss functions are specifically designed to ensure that high-resolution estimations of flow fields are physically consistent and temporally coherent. We apply these PGNN to various test cases, including synthetically generated 4D flow MRI in the cerebral ventricles and vasculature, in vitro 4D flow MRI acquired at two resolutions in 3D-printed phantoms of the 3rd and 4th ventricles, and in vivo 4D flow MRI in a healthy subject. Lastly, we apply these physics-guided networks to investigate blood flow through cerebral aneurysms. These techniques can empower larger studies investigating the coupling between arterial blood flow and CSF flow in conditions such as iNPH and AD.
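
The dissertation's physics-based loss functions are not spelled out in the abstract; the sketch below shows one plausible term, penalizing the divergence of the super-resolved velocity field, since incompressible flow should be divergence-free. The tensor layout, uniform grid spacing, and the choice of this particular term are assumptions.

```python
import torch

def divergence_loss(u: torch.Tensor, dx: float = 1.0) -> torch.Tensor:
    """u: (B, 3, D, H, W) velocity field (vx, vy, vz) on a uniform grid.

    Penalizes div(u) != 0 so that super-resolved flow fields stay
    physically consistent with mass conservation (illustrative only).
    """
    dudx = torch.gradient(u[:, 0], dim=3, spacing=dx)[0]  # d(vx)/dx along W
    dvdy = torch.gradient(u[:, 1], dim=2, spacing=dx)[0]  # d(vy)/dy along H
    dwdz = torch.gradient(u[:, 2], dim=1, spacing=dx)[0]  # d(vz)/dz along D
    return (dudx + dvdy + dwdz).pow(2).mean()

# Such a term would be added to the data-fidelity loss of the
# super-resolution network, weighted by a tuning coefficient (assumed).
```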
365

Micromechanical Behavior of Fiber-Reinforced Composites using Finite Element Simulation and Deep Learning

Sepasdar, Reza 07 October 2021 (has links)
This dissertation studies the micromechanical behavior of high-performance carbon fiber-reinforced polymer (CFRP) composites through high-fidelity numerical simulations. We investigated multiple transverse cracking of cross-ply CFRP laminates at the microstructure level by simulating large numerical models. Such an investigation demands an efficient numerical framework along with significant computational power. Hence, an efficient numerical framework was developed for simulating 2-D representations of CFRP composites' microstructure. The framework utilizes a nonlinear interface-enriched generalized finite element method (IGFEM) scheme, which significantly decreases the computational cost, and was designed to be fast and memory-efficient to enable the simulation of large numerical models. Using the developed framework, we studied the impact of several parameters on the evolution of transverse crack density in cross-ply CFRP laminates: the characteristics of the fiber/matrix cohesive interfaces, the matrix stiffness, and the longitudinal stiffness of the 0° plies. We also developed a micromechanical framework for efficient and accurate simulation of damage propagation and failure in aligned discontinuous carbon fiber-reinforced composites under loading along the fiber direction. The framework was validated against experimental results for a recently developed 3-D printed aligned discontinuous carbon fiber-reinforced composite, and was then used to investigate the impact of several parameters of the constitutive equations on the strength and failure pattern of these composites. This dissertation also contributes toward improving the computational efficiency of CFRP composite simulations. We exhaustively investigated the cause of a convergence difficulty in finite element analyses involving cohesive zone models (CZMs), which are commonly used to simulate fiber/matrix interfaces in CFRP composites and whose convergence difficulty significantly increases the computational burden. For the first time, we explained the root of the convergence difficulty and proposed a simple technique to overcome it; the proposed technique outperformed existing methods in terms of accuracy and computational cost. We also proposed a deep learning framework for predicting full-field distributions of mechanical responses in 2-D representations of CFRP composites based on the geometry of the microstructures. The deep learning framework can be used as a surrogate for the expensive and time-consuming finite element simulations. The proposed framework accurately predicted the stress distribution at an early stage of damage initiation and the failure pattern in representations of CFRP composite microstructures under transverse tension. / Doctor of Philosophy / Carbon fiber-reinforced polymers (CFRPs) are lightweight materials with excellent mechanical performance. Hence, these materials have a wide range of applications in industries such as aerospace, automotive, and civil engineering. The extensive use of CFRPs has made them an active area of research, and there have been great efforts to better understand and improve the mechanical properties of these materials over the past few decades. CFRP materials and their manufacturing processes are therefore constantly changing, and new types of CFRPs keep being developed.
As a result, the mechanical behavior of CFRPs needs to be exhaustively investigated to provide guidelines for their optimal engineering design and to indicate the future direction of manufacturing improvements. This dissertation studied the mechanical behavior of CFRPs through high-fidelity simulations. Two types of CFRP were investigated: laminates and 3-D printed CFRPs. Laminates are the most popular type of CFRP and are commonly used to construct aircraft bodies. 3-D printed CFRPs are a newer type of material that is gaining traction due to the ability to construct structures with complex geometries at high speed and without direct human supervision. Numerical simulations of CFRPs under mechanical loading are time-consuming and require significant computational power, even when run on a supercomputer. Hence, this dissertation also contributes to improving the computational efficiency of numerical simulations. To decrease the computational cost, we proposed a technique that can significantly speed up the numerical simulations of CFRPs. Moreover, we utilized artificial intelligence to develop a new framework that can substitute for the expensive and time-consuming conventional numerical simulations to quickly predict specific mechanical responses of CFRPs.
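
To make the surrogate idea concrete, here is a minimal PyTorch sketch of an image-to-image network that maps a 2-D microstructure image to a predicted stress field, trained against finite element outputs; the encoder-decoder shape and single stress component are assumptions, as the abstract does not describe the architecture.

```python
import torch.nn as nn

class StressSurrogate(nn.Module):
    """Fully convolutional surrogate: microstructure image -> stress field.

    Input convention (assumed): fiber pixels = 1, matrix pixels = 0.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # downsample
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.Conv2d(32, 1, 3, padding=1),   # one stress component (assumed)
        )

    def forward(self, microstructure):        # (B, 1, H, W), H and W even
        return self.net(microstructure)

loss_fn = nn.MSELoss()  # regress against FE-computed stress fields
```

Once trained, a forward pass replaces an expensive finite element solve for the mechanical responses it was trained on.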
366

Machine Learning and Field Inversion approaches to Data-Driven Turbulence Modeling

Michelen Strofer, Carlos Alejandro 27 April 2021 (has links)
There is still a practical need for improved closure models for the Reynolds-averaged Navier-Stokes (RANS) equations. This dissertation explores two different approaches for using experimental data to provide improved closure for the Reynolds stress tensor field. The first approach uses machine learning to learn a general closure model from data. A novel framework is developed to train deep neural networks using experimental velocity and pressure measurements. The sensitivity of the RANS equations to the Reynolds stress, required for gradient-based training, is obtained by means of both variational and ensemble methods. The second approach is to infer the Reynolds stress field for a flow of interest from limited velocity or pressure measurements of the same flow. Here, this field inversion is done using a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. The two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions. / Doctor of Philosophy / The Reynolds-averaged Navier-Stokes (RANS) equations are widely used to simulate fluid flows in engineering applications despite their known inaccuracy in many flows of practical interest. The uncertainty in the RANS equations is known to stem from the Reynolds stress tensor, for which no universally applicable turbulence model exists. The computational cost of more accurate methods for fluid flow simulation, however, means RANS simulations will likely continue to be a major tool in engineering applications, and there is still a need for improved RANS turbulence modeling. This dissertation explores two different approaches to using available experimental data to improve RANS predictions by improving the uncertain Reynolds stress tensor field. The first approach is using machine learning to learn a data-driven turbulence model from a set of training data; this model can then be applied to predict new flows in place of traditional turbulence models. To this end, this dissertation presents a novel framework for training deep neural networks using experimental measurements of velocity and pressure. When using velocity and pressure data, gradient-based training of the neural network requires the sensitivity of the RANS equations to the learned Reynolds stress. Two different methods, the continuous adjoint and ensemble approximation, are used to obtain the required sensitivity. The second approach explored in this dissertation is field inversion, whereby available data for a flow of interest are used to infer a Reynolds stress field that leads to improved RANS solutions for that same flow. Here, the field inversion is done via ensemble Kalman inversion (EKI), a Monte Carlo Bayesian procedure, and the focus is on improving the inference by enforcing known physical constraints on the inferred Reynolds stress field. To this end, a method for enforcing boundary conditions on the inferred field is presented. While further development is needed, the two data-driven approaches explored and improved upon here demonstrate the potential for improved practical RANS predictions.
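
The general-audience summary names ensemble Kalman inversion (EKI) as the field-inversion procedure; a NumPy sketch of one stochastic EKI update follows, with the forward map standing in for a RANS solve from a parameterized Reynolds stress field to observed velocities. The array shapes and the perturbed-observation variant are assumptions.

```python
import numpy as np

def eki_update(ensemble, observations, obs_cov, forward_map):
    """One EKI iteration. ensemble: (n_members, n_params); observations: (n_obs,).

    forward_map is a placeholder for the expensive model evaluation
    (here, a RANS solve given a candidate Reynolds stress field).
    """
    preds = np.array([forward_map(m) for m in ensemble])   # (n_members, n_obs)
    m_mean, p_mean = ensemble.mean(0), preds.mean(0)
    dm, dp = ensemble - m_mean, preds - p_mean
    c_mp = dm.T @ dp / (len(ensemble) - 1)                 # param-obs cross-covariance
    c_pp = dp.T @ dp / (len(ensemble) - 1)                 # prediction covariance
    gain = c_mp @ np.linalg.inv(c_pp + obs_cov)            # Kalman gain
    # Perturb observations per member (stochastic EKI variant, assumed).
    perturbed = observations + np.random.multivariate_normal(
        np.zeros(len(observations)), obs_cov, size=len(ensemble))
    return ensemble + (perturbed - preds) @ gain.T

# Iterating eki_update nudges the ensemble toward Reynolds stress fields
# consistent with the measurements; physical constraints such as boundary
# conditions would be enforced on the parameterization itself.
```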
367

Summarizing Legal Depositions

Chakravarty, Saurabh 18 January 2021 (has links)
Documents like legal depositions are used by lawyers and paralegals to ascertain the facts pertaining to a case. These documents capture the conversation between a lawyer and a deponent, which is in the form of questions and answers. Applying current automatic summarization methods to these documents results in low-quality summaries. Though extensive research has been performed in the area of summarization, not all methods succeed in all domains. Accordingly, this research focuses on developing methods to generate high-quality summaries of depositions. As part of our work on legal deposition summarization, we propose a solution in the form of a pipeline of components, each addressing a sub-problem; we argue that a pipeline-based framework can be tuned to summarize documents from any domain. First, we developed methods to parse the depositions, accounting for different document formats; we were able to successfully parse both a proprietary and a public dataset with these methods. Second, we developed methods to anonymize the personal information present in the deposition documents, achieving 95% accuracy on the anonymization in a random-sampling-based evaluation. Third, we developed an ontology of dialog acts for the questions and answers present in legal depositions. Fourth, we developed classifiers based on this ontology and achieved F1-scores of 0.84 and 0.87 on the public and proprietary datasets, respectively. Fifth, we developed methods to transform a question-answer pair into a canonical/simple form. In particular, based on the dialog acts of the question and answer combination, we developed transformation methods using both traditional NLP and deep learning techniques, achieving good scores on the ROUGE and semantic similarity metrics for most of the dialog act combinations. Sixth, we developed methods based on deep learning, heuristics, and machine translation to correct the transformed declarative sentences; this sentence correction improved the readability of the transformed sentences. Seventh, we developed a methodology to break a deposition into its topical aspects: an ontology of aspects was defined for legal depositions, and classifiers were developed that achieved an F1-score of 0.89. Eighth, we developed methods to segment the deposition into parts that share the same thematic context; the segments helped in augmenting candidate summary sentences with surrounding context, which leads to a more readable summary. Ninth, we developed a pipeline integrating all of these methods to generate summaries from the depositions. We were able to outperform the baseline and state-of-the-art summarization methods in a majority of cases based on the F1, Recall, and ROUGE-2 scores, and the performance gains were statistically significant for all of the scores. The summaries generated by our system can be arranged by thematic context or aspect and hence should be much easier to read and follow than those of the baseline methods. As part of our future work, we will improve upon these methods. We will refine our methods to identify the important parts of a deposition using additional related documents. In addition, we will work to improve the compression ratio of the generated summaries by reducing the number of unimportant sentences. We will expand the training dataset to learn and tune the coverage of the aspects for various deponent types using empirical methods.
Our system has demonstrated effectiveness in transforming a QA pair into a declarative sentence. Having such a capability could enable us to generate a narrative summary from depositions, a first for legal depositions. We will also expand our evaluation dataset to ensure that our methods are indeed generalizable and that they perform well when experts subjectively evaluate the quality of the deposition summaries. / Doctor of Philosophy / Documents in the legal domain are of various types. One set of documents includes trial and deposition transcripts. These documents capture the proceedings of a trial or a deposition by note-taking, often over many hours. They contain conversational sentences spoken during the trial or deposition and involve multiple actors. One of the greatest challenges with these documents is that, generally, they are long. This is a source of pain for attorneys and paralegals who work with the information contained in the documents. Text summarization techniques have been successfully used to compress a document and capture its salient parts, reduce redundancy in summary sentences, and focus on coherence and proper sentence formation. Summarizing trial and deposition transcripts would be immensely useful for law professionals, reducing the time needed to identify and disseminate salient information in case-related documents, as well as reducing costs and trial preparation time. Processing deposition documents with traditional text processing techniques is a challenge because of their form. Having the deposition conversations transformed into a suitable declarative form in which they can be easily comprehended can pave the way for the use of extractive and abstractive summarization methods. As part of our work, we identified the different discourse structures present in a deposition in the form of dialog acts and developed methods based on those dialog acts to transform the deposition into a declarative form. We were able to achieve an accuracy of 87% on the dialog act classification and to transform conversational question-answer (QA) pairs into declarative forms for 10 of the top 11 dialog act combinations. Our transformation methods performed better than the baselines for 8 of the 10 QA pair types. We also developed methods to classify the deposition QA pairs according to their topical aspects, and generated summaries using aspects by defining the relative coverage each aspect should have in a summary. Another set of methods segments the depositions into parts that share the same thematic context; these segments aid in augmenting the candidate summary sentences, creating a summary in which information is surrounded by associated context. This makes the summary more readable and informative, and in our evaluations we were able to significantly outperform the state-of-the-art methods.
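
To make the pipeline shape concrete, here is a compressed Python skeleton of the stages the abstract names (dialog-act classification, transformation of QA pairs to declarative form, assembly); the rules and label names are placeholder assumptions, since the dissertation's classifiers are learned models over its own dialog-act ontology.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str

def classify_dialog_act(qa: QAPair) -> str:
    """Placeholder rule; the dissertation trains classifiers over its own ontology."""
    first = qa.question.strip().rstrip("?").split()[0].lower()
    return "yes-no-question" if first in (
        "is", "are", "was", "were", "do", "does", "did", "have", "has"
    ) else "wh-question"

def to_declarative(qa: QAPair, act: str) -> str:
    """Transform a QA pair to canonical declarative form (crude stub rule)."""
    if act == "yes-no-question" and qa.answer.lower().startswith("yes"):
        return qa.question.rstrip("?") + "."
    return f"{qa.question} {qa.answer}"

def summarize(deposition: list[QAPair]) -> str:
    # Parsing, anonymization, sentence correction, aspect classification,
    # and thematic segmentation (stages 1-2 and 6-8) are omitted here.
    sentences = [to_declarative(qa, classify_dialog_act(qa)) for qa in deposition]
    return " ".join(sentences)
```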
368

Deep Learning for Enhancing Precision Medicine

Oh, Min 07 June 2021 (has links)
Most medical treatments have been developed to achieve the best-on-average efficacy for large populations, resulting in treatments that are successful for some patients but not for others. This necessitates precision medicine, which tailors medical treatment to individual patients. Omics data holds comprehensive genetic information on individual variability at the molecular level and hence the potential to be translated into personalized therapy. However, attempts to transform omics data-driven insights into clinically actionable models for individual patients have been limited. Meanwhile, advances in deep learning, one of the most promising branches of artificial intelligence, have produced unprecedented performance in various fields. Although several deep learning-based methods have been proposed to predict individual phenotypes, they have not become established practice, due to the instability of selected or learned features derived from extremely high-dimensional data with low sample sizes, which often results in overfitted models with high variance. To overcome the limitations of omics data, recent advances in deep learning models, including representation learning models, generative models, and interpretable models, can be considered. The goal of the proposed work is to develop deep learning models that can overcome the limitations of omics data to enhance the prediction of personalized medical decisions. To achieve this, three key challenges should be addressed: 1) effectively reducing the dimensionality of omics data, 2) systematically augmenting omics data, and 3) improving the interpretability of omics data. / Doctor of Philosophy / Most medical treatments have been developed to achieve the best-on-average efficacy for large populations, resulting in treatments that are successful for some patients but not for others. This necessitates precision medicine, which tailors medical treatment to individual patients. Biological data such as DNA sequences and snapshots of genetic activity hold comprehensive information on individual variability and hence the potential to accelerate personalized therapy. However, attempts to transform data-driven insights into clinical models for individual patients have been limited. Meanwhile, advances in deep learning, one of the most promising branches of artificial intelligence, have produced unprecedented performance in various fields. Although several deep learning-based methods have been proposed to predict individual treatments or outcomes, they have not become established practice, due to the complexity of biological data and its limited availability, which often result in overfitted models that may work on training data but not on test or unseen data. To overcome the limitations of biological data, recent advances in deep learning models, including representation learning models, generative models, and interpretable models, can be considered. The goal of the proposed work is to develop deep learning models that can overcome the limitations of omics data to enhance the prediction of personalized medical decisions. To achieve this, three key challenges should be addressed: 1) effectively reducing the complexity of biological data, 2) generating realistic biological data, and 3) improving the interpretability of biological data.
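
As an illustration of the representation-learning direction for the first challenge (dimensionality reduction), the sketch below shows an autoencoder compressing a high-dimensional omics profile into a low-dimensional representation usable by downstream phenotype predictors; the layer sizes and the 20,000-feature input are assumptions.

```python
import torch.nn as nn

class OmicsAutoencoder(nn.Module):
    """Compresses an omics profile (e.g., gene expression) to a latent code."""
    def __init__(self, n_features=20000, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)          # compact representation for downstream
        return self.decoder(z), z    # phenotype prediction models

loss_fn = nn.MSELoss()  # reconstruction objective; trained on unlabeled profiles
```

The latent code `z`, rather than the raw profile, would then feed a small supervised model, mitigating the high-dimension/low-sample-size overfitting the abstract describes.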
369

Solutions to Passageways Detection in Natural Foliage with Biomimetic Sonar Robot

Wang, Ruihao 22 June 2022 (has links)
Numerous bat species have evolved biosonar to obtain information from habitats with dense vegetation. Unlike man-made sensors such as stereo cameras and LiDAR, bats' biosonar has much lower spatial resolution and sampling rates, yet it is capable of reliably finding narrow gaps in foliage that can serve as passageways to fly through. To investigate the sensory information underlying this capability, we used a biomimetic sonar robot to collect narrow-gap echoes from an artificial hedge in a laboratory setup and from natural foliage in outdoor environments. The work in this dissertation compares the performance of a conventional energy approach and a deep-learning approach in classifying echoes from foliage and gaps. The deep-learning approach has better foliage-versus-passageway classification accuracy than the energy approach in both experiments, and it also shows better robustness than the latter when dealing with the greater variety of data in the outdoor experiments. A class activation mapping approach indicates that the initial rising flank in the echo spectrogram contains critical information. This result corresponds to a neuromorphic spiking model that can be simplified to the times at which the echo amplitude crosses a certain threshold within a certain frequency range. These findings demonstrate that the sensory information in clutter echoes plays an important role in detecting passageways in foliage, despite a beamwidth wider than the passageway geometry. / Doctor of Philosophy / Many bat species are able to navigate and hunt in habitats with dense vegetation using trains of biosonar echoes as their primary source of sensory information about the environment. Drones equipped with man-made sensory systems such as optical, thermal, or LiDAR sensors still face challenges when navigating in dense foliage. Bats are not only able to achieve higher reliability in detecting narrow gaps but accomplish this with much lower spatial resolutions and data rates than man-made sensors. To study which sensory information is accessible to bat biosonar for detecting passageways in foliage, a robot consisting of a biomimetic sonar and a camera system was used to collect a large number of echoes and corresponding images (∼130k samples) from an artificial hedge constructed in the laboratory and from various natural foliage targets found outdoors. We applied a conventional energy approach, which is widely used in engineered sonar but is limited by the biosonar's wide beamwidth, and it achieved a foliage-versus-passageway classification accuracy of only ∼70%. To improve on this, a deep-learning approach was used. In addition, a transparent AI approach was applied to overcome the black-box nature of the deep-learning classifier and highlight its regions of interest. The results achieved in detecting passageways were closely matched between the artificial hedge in the laboratory setup and the field data. With best classification accuracies of 97.13% (artificial hedge) and 96.64% (field data) from the deep-learning approach, this work indicates the potential of exploiting sensory information from clutter echoes in complex environments for detecting passageways in foliage.
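
The simplified spiking representation described above, the times at which echo amplitude first crosses a threshold in each frequency band, can be prototyped in a few lines of Python; the sampling rate, window parameters, and threshold here are assumptions, not the dissertation's settings.

```python
import numpy as np
from scipy.signal import spectrogram

def first_crossing_times(echo: np.ndarray, fs: float = 400e3,
                         threshold_db: float = -40.0) -> np.ndarray:
    """Reduce an echo to one threshold-crossing time per frequency band."""
    f, t, sxx = spectrogram(echo, fs=fs, nperseg=256, noverlap=192)
    sxx_db = 10 * np.log10(sxx + 1e-12)
    sxx_db -= sxx_db.max()                    # normalize peak to 0 dB
    crossings = np.full(len(f), np.nan)       # NaN = no crossing in this band
    for i, row in enumerate(sxx_db):
        idx = np.argwhere(row > threshold_db)
        if idx.size:
            crossings[i] = t[idx[0, 0]]       # time of first threshold crossing
    return crossings                          # compact feature vector per echo
```

Such a vector, dominated by the initial rising flank of the spectrogram, is the kind of reduced input the class activation mapping result points to as carrying the discriminative information.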
370

A Deep-learning based Approach for Foot Placement Prediction

Lee, Sung-Wook 24 May 2023 (has links)
Foot placement prediction can be important for exoskeleton and prosthesis controllers, human-robot interaction, and body-worn systems that prevent slips or trips. Previous studies investigating foot placement prediction have been limited to predicting foot placement during the swing phase and do not fully consider contextual information such as the preceding step or the stance phase before push-off. In this study, a deep learning-based foot placement prediction approach is proposed, in which deep learning models sequentially process data from three IMU sensors mounted on the pelvis and feet. The raw sensor data are pre-processed into multi-variable time-series data for training two deep learning models: the first model estimates the gait progression, and the second model subsequently predicts the next foot placement. The ground-truth gait phase and foot placement data were acquired with a motion capture system. Ten healthy subjects were invited to walk naturally at different speeds on a treadmill. In cross-subject learning, the trained models had a mean distance error of 5.93 cm for foot placement prediction. In single-subject learning, the prediction accuracy improved with additional training data, and a mean distance error of 2.60 cm was achieved by fine-tuning the cross-subject validated models with the target subject's data. Even at 25-81% of the gait cycle, mean distance errors were only 6.99 cm and 3.22 cm for cross-subject and single-subject learning, respectively. / Master of Science / This study proposes a new approach for predicting where a person's foot will land during walking, which could be useful in controlling robots and wearable devices that work with humans, preventing events such as slips and falls and allowing smoother human-robot interaction. Although foot placement prediction has great potential in various domains, current work in this area is limited in terms of practicality and accuracy. The proposed approach uses data from inertial sensors attached to the pelvis and feet, and two deep learning models are trained to estimate the person's walking pattern and predict their next foot placement. The approach was tested on ten healthy individuals walking at different speeds on a treadmill and achieved state-of-the-art results. The results suggest that this approach could be a promising method when sufficient data from multiple people are available.
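
A minimal PyTorch sketch of the two-model structure described above follows: one network estimates gait progression from the IMU time series, and a second combines the sequence with that estimate to predict the next foot placement. The recurrent architecture, channel count (3 IMUs x 6 channels assumed), and 2-D placement output are assumptions, not the thesis's design.

```python
import torch
import torch.nn as nn

class GaitPhaseEstimator(nn.Module):
    """Model 1: IMU window -> estimated gait progression in [0, 1]."""
    def __init__(self, n_channels=18, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (B, T, 18)
        out, _ = self.rnn(x)
        return torch.sigmoid(self.head(out[:, -1]))

class FootPlacementPredictor(nn.Module):
    """Model 2: IMU window + phase estimate -> next (x, y) foot placement."""
    def __init__(self, n_channels=18, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden + 1, 2)

    def forward(self, x, phase):              # phase: (B, 1) from model 1
        out, _ = self.rnn(x)
        return self.head(torch.cat([out[:, -1], phase], dim=1))

# Usage sketch with dummy windows of 200 samples:
imu_seq = torch.randn(8, 200, 18)
phase = GaitPhaseEstimator()(imu_seq)                  # (8, 1)
placement = FootPlacementPredictor()(imu_seq, phase)   # (8, 2)
```

Fine-tuning the pretrained cross-subject weights on a target subject's data, as in the study, would reuse these same modules with a lower learning rate.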
