831

An Analysis of Short-Term Load Forecasting on Residential Buildings Using Deep Learning Models

Suresh, Sreerag 07 July 2020 (has links)
Building energy load forecasting is becoming an increasingly important task with the rapid deployment of smart homes, the integration of renewables into the grid, and the advent of decentralized energy systems. Residential load forecasting is a challenging task because residential load is highly stochastic. Deep learning models have shown tremendous promise on time-series and sequential data and have been used successfully for short-term load forecasting at the building level. Although other studies have applied deep learning models to building energy forecasting, most have considered a limited number of homes or the aggregate load of a collection of homes. This study aims to address this gap and serves as an investigation into selecting the better deep learning architecture for short-term load forecasting on three communities of residential buildings. CNN and LSTM models were used in the study. For 15-minute-ahead forecasting across a collection of homes, homes with higher load variance were better predicted by CNN models, while LSTM models performed better for homes with lower variance. The effect of adding weather variables on 24-hour-ahead forecasting was also studied, and adding weather parameters did not improve forecasting performance. In all homes, the deep learning models outperformed a simple ANN model. / Master of Science / Building energy load forecasting is becoming an increasingly important task with the rapid deployment of smart homes, the integration of renewables into the grid, and the advent of decentralized energy systems. Residential load forecasting is a challenging task because residential load is highly stochastic. Deep learning models have shown tremendous promise on time-series and sequential data and have been used successfully for short-term load forecasting. Although other studies have applied deep learning models to building energy forecasting, most have considered only a single home or the aggregate load of a collection of homes. This study aims to address this gap and serves as an analysis of short-term load forecasting on three communities of residential buildings. A detailed analysis of model performance across all homes is presented. Deep learning models are used in this study, and their efficacy is measured against a simple ANN model.
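As a purely illustrative sketch (not code from the thesis), the following shows how CNN and LSTM forecasters of the kind compared above might be set up in PyTorch for 15-minute-ahead load prediction; the window length, feature count, and layer sizes are assumptions.

```python
# Illustrative sketch (not from the thesis): minimal CNN and LSTM forecasters
# for 15-minute-ahead residential load prediction from a fixed history window.
import torch
import torch.nn as nn

WINDOW = 96          # assumed: 24 hours of 15-minute readings
N_FEATURES = 1       # assumed: load only (no weather variables)

class CNNForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_FEATURES, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, 1),          # next 15-minute load value
        )

    def forward(self, x):              # x: (batch, WINDOW, N_FEATURES)
        return self.net(x.transpose(1, 2))

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, WINDOW, N_FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # use the last hidden state

if __name__ == "__main__":
    x = torch.randn(8, WINDOW, N_FEATURES)   # dummy batch of load histories
    print(CNNForecaster()(x).shape, LSTMForecaster()(x).shape)  # both (8, 1)
```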
832

Segmenting Skin Lesion Attributes in Dermoscopic Images Using Deep Learning Algorithm for Melanoma Detection

Dong, Xu 09 1900 (has links)
Melanoma is the deadliest form of skin cancer worldwide, causing 75% of skin cancer-related deaths. The National Cancer Institute estimated that 91,270 new cases and 9,320 deaths from melanoma were expected in 2018. Early detection of melanoma is key to treatment. The imaging technique used to diagnose skin cancer is dermoscopy, which leads to improved diagnostic accuracy compared to the traditional ABCD criteria. But reading and examining dermoscopic images is a time-consuming and complex process. Therefore, computerized analysis methods for dermoscopic images have been developed to assist their visual interpretation. The automatic segmentation of skin lesion attributes is a key step in the computerized analysis of dermoscopic images. The International Skin Imaging Collaboration (ISIC) hosted the 2018 Challenges to help the diagnosis of melanoma based on dermoscopic images. In this thesis, I develop a deep learning based approach to automatically segment attributes from dermoscopic skin lesion images. The approach described in the thesis achieved a Jaccard index of 0.477 on the official test dataset, which ranked 5th in the challenge. / Master of Science / Melanoma is the deadliest form of skin cancer worldwide, causing 75% of skin cancer-related deaths. Early detection of melanoma is key to treatment. The imaging technique used to diagnose skin cancer is called dermoscopy. It has become increasingly convenient to use a dermoscopic device to image the skin in recent years. Dermoscopic lenses are available on the market for individual customers. By coupling a dermoscopic lens with a smartphone, people are able to take dermoscopic images of their skin even at home. However, reading and examining dermoscopic images is a time-consuming and complex process. It requires specialists to examine the image, extract features, and compare them with criteria to make a clinical diagnosis. This time-consuming image examination becomes the bottleneck in fast diagnosis of melanoma. Therefore, computerized analysis methods for dermoscopic images have been developed to promote melanoma diagnosis and, ultimately, to increase the survival rate and save lives. The automatic segmentation of skin lesion attributes is a key step in the computerized analysis of dermoscopic images. In this thesis, I developed a deep learning based approach to automatically segment attributes from dermoscopic skin lesion images. The segmentation result from this approach won 5th place in a public competition. It has the potential to be used in clinical applications in the future.
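For reference, the Jaccard index used as the challenge metric above can be computed for binary attribute masks as in the following sketch (generic metric code, not taken from the thesis).

```python
# Illustrative sketch: Jaccard index (intersection over union) between a
# predicted binary attribute mask and the ground-truth mask.
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Both inputs are boolean/0-1 arrays of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:               # both masks empty: define the score as 1.0
        return 1.0
    return intersection / union

# Example: two 4x4 masks whose labeled regions overlap in exactly one pixel.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[2:4, 2:4] = 1
print(jaccard_index(a, b))       # 1 / 7 ≈ 0.143
```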
833

Learning to handle occlusion for motion analysis and view synthesis

Su, Shih-Yang 29 May 2020 (has links)
The ability to understand occlusion and disocclusion is critical for analyzing motion and forecasting changes. For example, when we see a car gradually block our view of a human figure, we know that either the car or the human is moving. We also know that the human behind the car will be visible again if we move to another position. Since many vision-based intelligent systems need to handle and react to visual data with potentially intensive motion, it is beneficial to incorporate occlusion reasoning into such systems. In this thesis, we study how we can improve the performance of vision-based deep learning models by harnessing the power of occlusion handling. We first visit the problem of optical flow estimation for motion analysis. We present a deep learning module that builds upon occlusion handling methods from the classic computer vision literature. Our results show performance improvements in occluded regions on standard benchmarks, as well as in real-world applications. We then examine the problem of view synthesis for 3D photography. We propose an inpainting method that leverages local color and depth context for novel view synthesis. We validate the proposed inpainting approach with a series of quantitative and qualitative experiments, and demonstrate promising results in predicting plausible content in occluded regions. / Master of Science / Humans have the ability to understand occlusion and to use that knowledge to make predictions about motion and occluded content. For example, when we see a car gradually block our view of a human figure, we know that either the car or the human is moving. We also know that the human behind the car will be visible again if we move to another position. In this thesis, we study how we can replicate such an ability in artificial intelligence systems. We first investigate the effect of occlusion reasoning on the task of predicting motion. Our experimental results show that a system equipped with our occlusion reasoning module can better capture the motion occurring in image sequences. Next, we examine the problem of hallucinating visual content that is blocked in an image. We develop a model that can produce plausible content in occluded regions. In our experiments, we show that given a single RGB image with an estimated depth map, our model can produce a corresponding 3D photo by hallucinating the structures that are not visible in the image.
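The thesis module itself is not shown here; the sketch below illustrates one classic occlusion-handling idea of the kind the abstract refers to, a forward-backward flow consistency check, with thresholds that are conventional defaults rather than values from the thesis.

```python
# Illustrative sketch (not the thesis module): a classic occlusion estimate via
# forward-backward optical-flow consistency. A pixel is marked occluded when
# following the forward flow and then the backward flow does not return it
# close to where it started.
import numpy as np

def occlusion_mask(flow_fw: np.ndarray, flow_bw: np.ndarray,
                   alpha: float = 0.01, beta: float = 0.5) -> np.ndarray:
    """flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy) displacements.
    Returns a boolean (H, W) mask that is True where the pixel is occluded."""
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each pixel lands in the second frame (nearest-neighbor for brevity).
    xt = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
    flow_bw_at_target = flow_bw[yt, xt]          # backward flow sampled there
    # Forward and backward flow should roughly cancel for non-occluded pixels.
    diff = flow_fw + flow_bw_at_target
    sq_diff = (diff ** 2).sum(-1)
    sq_mag = (flow_fw ** 2).sum(-1) + (flow_bw_at_target ** 2).sum(-1)
    return sq_diff > alpha * sq_mag + beta

# Example with a constant translation: consistent flows produce no occlusion.
fw = np.full((32, 32, 2), 2.0)
bw = np.full((32, 32, 2), -2.0)
print(occlusion_mask(fw, bw).sum())   # 0 occluded pixels
```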
834

On Natural Motion Processing using Inertial Motion Capture and Deep Learning

Geissinger, John Herman 21 May 2020 (has links)
Human motion collected in real-world environments without instruction from researchers - or natural motion - is an understudied area of motion capture whose study could increase the efficacy of assistive devices such as exoskeletons, robotics, and prosthetics. With this goal in mind, a natural motion dataset is presented in this thesis alongside algorithms for analyzing human motion. The dataset contains more than 36 hours of inertial motion capture data collected while the 16 participants went about their lives. The participants were not instructed on what actions to perform and interacted freely with real-world environments such as a home improvement store and a college campus. We apply our dataset in two experiments. The first is a study of how manual material handlers lift and bend at work, and which postures they tend to use and why. Workers rarely used the symmetric squats, and infrequently used the symmetric stoops, typically studied in lab settings. Instead, they used a variety of postures that have not been well characterized, such as one-legged and split-legged lifting. The second experiment is a study of how to infer human motion from limited information. We present methods for inferring human motion from sparse sensors using Transformer and Seq2Seq models. We found that Transformers perform better than Seq2Seq models in producing upper-body and full-body motion, but that each model can accurately infer human motion for a variety of postures, such as sitting, standing, kneeling, and bending, given sparse sensor data. / Master of Science / To better design technology that can assist people in their daily lives, it is necessary to better understand how people move and act in the real world with little to no instruction from researchers. Personal assistants such as Alexa and Google Assistant have benefited from what researchers call natural language processing. Similarly, natural motion processing will be useful for everyday assistive devices like prosthetics and exoskeletons. Capturing unscripted human motion in real-world environments - or natural motion - has become possible with recent advances in motion capture technology. In this thesis, we present data from 16 participants who wore a suit that captures accurate human motion. The dataset contains more than 36 hours of unscripted human motion data in real-world environments that other researchers can use to develop technology and advance our understanding of human motion. In addition, we perform two experiments in this thesis. The first is a study of how manual material handlers lift and bend at work, and which postures they tend to use and why. The second is a study of how we can determine what a person's body is doing with a limited amount of information from only a few sensors. This study could be useful for making commercial devices like smartphones, smartwatches, and smart glasses more valuable and useful.
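As a hedged illustration of the second experiment's setup (not the thesis model), the sketch below maps a sequence of sparse sensor readings to full-body joint outputs with a small Transformer encoder; all dimensions and sensor choices are assumptions.

```python
# Illustrative sketch (not the thesis model): a small Transformer encoder that
# maps a sequence of sparse sensor readings to full-body joint orientations.
import torch
import torch.nn as nn

N_SPARSE_SENSORS = 6      # assumed, e.g. head, pelvis, wrists, ankles
SENSOR_DIM = 9            # assumed: 6D orientation + 3D acceleration per sensor
N_JOINTS = 22             # assumed full-body joint count
SEQ_LEN = 60              # assumed 1-second window at 60 Hz

class SparseToFullBody(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=3):
        super().__init__()
        self.embed = nn.Linear(N_SPARSE_SENSORS * SENSOR_DIM, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, N_JOINTS * 6)   # 6D rotation per joint

    def forward(self, x):      # x: (batch, SEQ_LEN, sensors * sensor_dim)
        h = self.encoder(self.embed(x))
        return self.head(h)    # (batch, SEQ_LEN, N_JOINTS * 6)

if __name__ == "__main__":
    x = torch.randn(4, SEQ_LEN, N_SPARSE_SENSORS * SENSOR_DIM)
    print(SparseToFullBody()(x).shape)   # torch.Size([4, 60, 132])
```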
835

Zero and Few-Shot Concept Learning with Pre-Trained Embeddings

Moody, Jamison M. 21 April 2023 (has links) (PDF)
Neural networks typically struggle with reasoning tasks on out-of-domain data, something humans can adapt to more easily. Humans come with prior knowledge of concepts and can segment their environment into building blocks (such as objects) that allow them to reason effectively in unfamiliar situations. Using this intuition, we train a network that utilizes fixed embeddings from the CLIP (Contrastive Language-Image Pre-training) model to do a simple task that the original CLIP model struggles with. The network learns concepts (such as "collide" and "avoid") in a supervised source domain in such a way that it can adapt and identify similar concepts in a target domain with never-before-seen objects. Without any training in the target domain, we show an 11% accuracy improvement in recognizing concepts compared to the baseline zero-shot CLIP model. When provided with a few labels, this accuracy gap widens to 20%.
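The following is a minimal sketch, under assumed dimensions and a toy training loop, of the general idea of learning a concept head on top of fixed CLIP embeddings; it is not the paper's architecture, and the embeddings here are random stand-ins for pre-computed CLIP features.

```python
# Illustrative sketch (not the paper's model): a small concept head trained on
# frozen, pre-computed CLIP embeddings. Because the CLIP encoder is never
# updated, the head can be applied to embeddings of never-before-seen objects.
import torch
import torch.nn as nn

EMBED_DIM = 512                     # CLIP ViT-B/32 embedding size
CONCEPTS = ["collide", "avoid"]     # example concepts from the abstract

head = nn.Sequential(               # trainable part; CLIP itself stays frozen
    nn.Linear(EMBED_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, len(CONCEPTS)),
)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for pre-computed source-domain CLIP embeddings and concept labels.
source_embeddings = torch.randn(256, EMBED_DIM)
source_labels = torch.randint(0, len(CONCEPTS), (256,))

for _ in range(20):                 # tiny training loop on the source domain
    logits = head(source_embeddings)
    loss = loss_fn(logits, source_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Zero-shot-style transfer: score target-domain embeddings with the same head.
target_embeddings = torch.randn(8, EMBED_DIM)   # embeddings of unseen objects
pred = head(target_embeddings).argmax(dim=-1)
print([CONCEPTS[i] for i in pred])
```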
836

Figure Extraction from Scanned Electronic Theses and Dissertations

Kahu, Sampanna Yashwant 29 September 2020 (has links)
The ability to extract figures and tables from scientific documents enables key use cases such as semantic parsing, summarization, and indexing. Although a few methods have been developed to extract figures and tables from scientific documents, their performance on scanned documents is considerably lower than on born-digital ones. To address this, we propose methods to effectively extract figures and tables from Electronic Theses and Dissertations (ETDs) that outperform existing methods by a considerable margin. Our contribution toward this goal is three-fold. (a) We propose a system/model for improving the performance of existing figure and table extraction methods on scanned scientific documents. (b) We release a new dataset containing 10,182 labelled page images spanning 70 scanned ETDs, with 3.3k manually annotated bounding boxes for figures and tables. (c) Lastly, we release our entire code and the trained model weights to enable further research (https://github.com/SampannaKahu/deepfigures-open). / Master of Science / Portable Document Format (PDF) is one of the most popular document formats. However, parsing PDF files is not a trivial task. One use case of parsing PDF files is the search functionality on websites hosting scholarly documents (e.g., IEEE Xplore). The ability to extract figures and tables from a scholarly document helps this use case, among others. Methods using deep learning exist that extract figures from scholarly documents. However, a large number of scholarly documents, especially those published before the advent of computers, have been scanned from hard copies into PDF. In particular, we focus on scanned PDF versions of long documents, such as Electronic Theses and Dissertations (ETDs). No experiments have yet evaluated the efficacy of the above-mentioned methods on this scanned corpus. This work explores and attempts to improve the performance of these existing methods on scanned ETDs. A new gold-standard dataset for figure extraction from scanned ETDs is created and released as part of this work. Finally, the entire source code and trained model weights are made open source to aid further research in this field.
837

Machine Learning Methods for Protein Model Quality Estimation

Shuvo, Md Hossain 21 December 2023 (has links)
Doctor of Philosophy / In my research, I developed protein model quality estimation methods aimed at evaluating the reliability of computationally predicted protein models in the absence of experimentally solved ground truth structures. These methods specifically focus on estimating errors within the protein models to quantify their structural accuracy. Recognizing that even the most advanced protein structure prediction techniques may produce models with errors, I also developed a complementary protein model refinement method. This refinement method iteratively optimizes the weakly modeled regions, guided by the error estimation module of my quality estimation approach. The development of these model quality estimation methods, therefore, not only offers valuable insights into the structural reliability of protein models but also contributes to optimizing the overall reliability of protein models generated by state-of-the-art computational methods.
838

ACADIA: Efficient and Robust Adversarial Attacks Against Deep Reinforcement Learning

Ali, Haider 05 January 2023 (has links)
Existing adversarial algorithms for Deep Reinforcement Learning (DRL) have largely focused on identifying an optimal time to attack a DRL agent. However, little work has explored how to inject efficient adversarial perturbations into DRL environments. We propose a suite of novel DRL adversarial attacks, called ACADIA (AttaCks Against Deep reInforcement leArning). ACADIA provides a set of efficient and robust perturbation-based adversarial attacks that disturb a DRL agent's decision-making, based on novel combinations of techniques utilizing momentum, the ADAM and Root Mean Square Propagation (RMSProp) optimizers, and initial randomization. DRL attacks with this novel integration of techniques have not been studied in the existing Deep Neural Network (DNN) and DRL research. We consider two well-known DRL algorithms, Deep Q-Network (DQN) and Proximal Policy Optimization (PPO), in Atari games and MuJoCo environments, where both targeted and non-targeted attacks are considered, with and without the state-of-the-art defenses in DRL (i.e., RADIAL and ATLA). Our results demonstrate that the proposed ACADIA outperforms existing gradient-based counterparts under a wide range of experimental settings. ACADIA is nine times faster than the state-of-the-art Carlini and Wagner (CW) method, with better performance under DRL defenses. / Master of Science / Artificial Intelligence (AI) techniques such as Deep Neural Networks (DNNs) and Deep Reinforcement Learning (DRL) are prone to adversarial attacks. For example, a perturbed stop sign can force a self-driving car's AI algorithm to increase speed rather than stop the vehicle. There has been little work on developing attacks and defenses against DRL. In DRL, a DNN-based policy decides to take an action based on an observation of the environment and receives a reward as feedback for improvement. We perturb that observation to attack the DRL agent. There are two main aspects to developing an attack on DRL. One is identifying an optimal time to attack (when to attack). The second is identifying an efficient method to attack (how to attack). To answer the second, we propose a suite of novel DRL adversarial attacks, called ACADIA (AttaCks Against Deep reInforcement leArning). We consider two well-known DRL algorithms, Deep Q-Network (DQN) and Proximal Policy Optimization (PPO), in the Atari games and MuJoCo environments, where both targeted and non-targeted attacks are considered, with and without state-of-the-art defenses. Our results demonstrate that the proposed ACADIA outperforms state-of-the-art perturbation methods under a wide range of experimental settings. ACADIA is nine times faster than the state-of-the-art Carlini and Wagner (CW) method, with better performance under DRL defenses.
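ACADIA's exact algorithms are not reproduced here; the sketch below only illustrates the ingredient techniques the abstract names (momentum and random initialization in an iterative, gradient-based perturbation of a Q-network's observation), with hypothetical network sizes and attack hyperparameters.

```python
# Illustrative sketch (not ACADIA itself): a generic momentum-based iterative
# perturbation with random initialization, applied to a DRL agent's observation
# so that the attacked Q-network no longer prefers its original action.
import torch
import torch.nn as nn

def momentum_attack(q_net: nn.Module, obs: torch.Tensor,
                    eps=0.05, alpha=0.01, steps=10, mu=0.9) -> torch.Tensor:
    """Returns a perturbed observation within an L-infinity ball of radius eps."""
    target_action = q_net(obs).argmax(dim=-1)             # action to suppress
    delta = torch.empty_like(obs).uniform_(-eps, eps)     # random initialization
    momentum = torch.zeros_like(obs)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = nn.functional.cross_entropy(q_net(obs + delta), target_action)
        grad, = torch.autograd.grad(loss, delta)
        # Accumulate a normalized gradient direction (momentum term).
        momentum = mu * momentum + grad / (grad.abs().mean() + 1e-12)
        # Ascend the loss so the original action becomes less attractive.
        delta = (delta.detach() + alpha * momentum.sign()).clamp(-eps, eps)
    return obs + delta.detach()

if __name__ == "__main__":
    q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
    obs = torch.randn(1, 8)                   # stand-in environment observation
    adv_obs = momentum_attack(q_net, obs)
    print(q_net(obs).argmax().item(), q_net(adv_obs).argmax().item())
```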
839

Segmenting Electronic Theses and Dissertations By Chapters

Manzoor, Javaid Akbar 18 January 2023 (has links)
Master of Science / Electronic theses and dissertations (ETDs) are structured documents in which chapters are major components. However, no repository exists that contains chapter boundary details alongside these structured documents. Revealing these details can help increase accessibility. This research explores manipulating ETDs marked up in LaTeX to generate chapter boundaries. We use this to create a dataset of 1,459 ETDs and their chapter boundaries. Additionally, for the task of automatically segmenting unseen documents, we prototype three deep learning models trained on this dataset. We hope to encourage researchers to incorporate LaTeX manipulation techniques to create similar datasets.
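As a minimal illustration of the LaTeX manipulation idea (not the thesis pipeline), the sketch below locates chapter boundaries by scanning a LaTeX source for \chapter commands.

```python
# Illustrative sketch (not the thesis pipeline): locate chapter boundaries in a
# LaTeX source by scanning for \chapter{...} commands.
import re

CHAPTER_RE = re.compile(r"\\chapter\*?\{([^}]*)\}")

def chapter_boundaries(tex_source: str):
    """Return (title, start_offset, end_offset) tuples, one per chapter."""
    matches = list(CHAPTER_RE.finditer(tex_source))
    bounds = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(tex_source)
        bounds.append((m.group(1), m.start(), end))
    return bounds

sample = r"""
\chapter{Introduction}
Some introductory text.
\chapter{Methodology}
Details of the method.
\chapter{Conclusion}
Closing remarks.
"""
for title, start, end in chapter_boundaries(sample):
    print(f"{title}: characters {start}-{end}")
```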
840

Leveraging Transformer Models and Elasticsearch to Help Prevent and Manage Diabetes through EFT Cues

Shah, Aditya Ashishkumar 16 June 2023 (has links)
Diabetes in humans is a long-term (chronic) illness that affects how the body converts food into energy. Approximately one in ten individuals residing in the United States is affected by diabetes, and more than 90% of those have type 2 diabetes (T2D). In type 1 diabetes, the body fails to produce insulin, so patients must take insulin to survive. With type 2 diabetes, however, the body cannot use insulin well. A proven way to manage diabetes is through a positive mindset and a healthy lifestyle. Several studies have been conducted at Virginia Tech and the University of Buffalo on discovering characteristics of a person's day-to-day life that relate to important events. These studies consider Episodic Future Thinking (EFT), in which participants identify several events or actions that might occur at multiple future time frames (1 month to 10 years) in text-based descriptions (cues). This research aims to detect content characteristics from these EFT cues. However, class imbalance often presents a challenge when dealing with such domain-specific data. To mitigate this issue, this research employs Elasticsearch to address the data imbalance and enhance the machine learning (ML) pipeline for more accurate predictions. By leveraging Elasticsearch and transformer models, this study constructs classifiers and regression models that can be used to identify various content characteristics from the cues. To the best of our knowledge, this work represents the first attempt to employ natural language processing (NLP) techniques to analyze EFT cues and establish a correlation between those characteristics and their impacts on decision-making and health outcomes. / Master of Science / Diabetes is a serious, long-term illness that impacts how the body converts food into energy. It affects around one in ten individuals residing in the United States, and over 90% of these individuals have type 2 diabetes (T2D). While a positive attitude and a healthy lifestyle can help with the management of diabetes, it is unclear exactly which mental attitudes most affect health outcomes. To gain a better understanding of this relationship, researchers from Virginia Tech and the University of Buffalo conducted multiple studies on Episodic Future Thinking (EFT), in which participants identify several events or actions that could take place in the future. This research uses natural language processing (NLP) to analyze the descriptions of these events (cues) and identify different characteristics that relate to a person's day-to-day life. With the help of Elasticsearch and transformer models, this work handles the data imbalance and improves model predictions for the different categories within the cues. Overall, this research has the potential to provide valuable insights into the characteristics that affect a person's diabetes risk, potentially leading to better management and prevention strategies and treatments.
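The abstract does not describe how Elasticsearch is used internally; the sketch below is one plausible, assumed usage (elasticsearch-py 8.x-style calls, made-up index and field names) in which cue texts are indexed and similar minority-class cues are retrieved with a more_like_this query.

```python
# Illustrative sketch (an assumption, not the thesis pipeline): index EFT cue
# texts in Elasticsearch and retrieve cues similar to a minority-class example,
# which can then be inspected or re-labeled to help balance the training data.
# Requires a running Elasticsearch instance; index and field names are made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

cues = [
    {"text": "In one year I will run a 5k with my daughter", "label": "exercise"},
    {"text": "In five years I will have paid off my car loan", "label": "finance"},
    {"text": "Next month I will cook dinner at home every weeknight", "label": "diet"},
]
for i, cue in enumerate(cues):
    es.index(index="eft-cues", id=i, document=cue)
es.indices.refresh(index="eft-cues")

# Pull cues that read like a given minority-class example.
resp = es.search(
    index="eft-cues",
    query={
        "more_like_this": {
            "fields": ["text"],
            "like": "I will jog with my kids every weekend",
            "min_term_freq": 1,
            "min_doc_freq": 1,
        }
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["label"], "-", hit["_source"]["text"])
```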
