91

Microstructure Representation and Prediction via Convolutional Neural Network-Based Texture Representation and Synthesis, Towards Process Structure Linkage

Han, Yi 19 May 2021 (has links)
Metal additive manufacturing (AM) provides a platform for microstructure optimization via process control. The ability to model the evolution of microstructures in response to changes in processing conditions, or even to predict microstructures from a given processing condition, would greatly reduce the time frame and the cost of the optimization process. In the first part, we present a deep learning framework to quantitatively analyze the microstructural variations of metals fabricated by AM under different processing conditions. We also demonstrate the capability of predicting new microstructures from the representation with deep learning, and we explore the physical insights carried by the implicitly expressed microstructure representations. We validate our framework using samples fabricated by a solid-state AM technology, additive friction stir deposition, which typically results in equiaxed microstructures. In the second part, we further improve and generalize the generative framework; a set of metrics is used to quantitatively evaluate the effectiveness of the generation by comparing microstructure characteristics between the generated samples and the originals. We also take advantage of image processing techniques to aid the calculation of metrics that require grain segmentation. / Master of Science / Unlike traditional manufacturing techniques, which remove material to form the desired shape, additive manufacturing (AM) adds material, usually layer by layer, to form shapes. AM, sometimes also referred to as 3-D printing, enables the optimization of material properties through changes in the processing conditions. A microstructure is the structure a material forms on a microscopic scale. Crystalline materials such as metals usually form structures composed of grains, regions in which the atoms share the same orientation. For metal AM in particular, changes in the processing conditions usually result in changes in microstructures and material properties. To better optimize for desired material properties, in the first part we present a microstructure representation method that allows projection of a microstructure onto a representation space and prediction from an arbitrary point in that space. This representation method allows us to better analyze changes in microstructure in relation to changes in processing conditions. In the second part, we validate the representation and prediction using EBSD data collected from copper samples manufactured with AM under different processing conditions.
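A common way to turn micrographs into a CNN-based texture representation of the kind described above is to compute Gram matrices of pretrained CNN feature maps, in the style of Gatys-type texture synthesis. The sketch below illustrates that general idea in PyTorch; it is not the thesis's exact framework, and the backbone choice, layer indices, and preprocessing are assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained VGG-19 feature extractor, a common backbone for texture descriptors.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def gram_matrix(feat):
    # feat: (1, C, H, W) -> (C, C) channel-correlation matrix,
    # a standard CNN texture descriptor independent of image size.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def texture_descriptor(img_path, layers=(3, 8, 17)):  # layer indices are illustrative
    img = Image.open(img_path).convert("RGB")
    x = T.Compose([T.Resize(256), T.CenterCrop(256), T.ToTensor()])(img).unsqueeze(0)
    grams = []
    with torch.no_grad():
        feat = x
        for i, layer in enumerate(vgg):
            feat = layer(feat)
            if i in layers:
                grams.append(gram_matrix(feat).flatten())
    # Concatenated Gram entries act as a fixed-length microstructure "fingerprint".
    return torch.cat(grams)
```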
92

Material-Specific Computed Tomography for Molecular X-Imaging in Biomedical Research

Dong, Xu 08 April 2019 (has links)
X-ray Computed Tomography (CT) imaging has played a central role in clinical practice since it was invented in 1972. However, the traditional x-ray CT technique fails to distinguish different materials with similar density, especially biological tissues. The lack of a quantitative, material-specific imaging representation has kept the CT technique from broader applications such as personalized or precision medicine. The major aim of this thesis is therefore to develop novel material-specific CT imaging techniques for molecular imaging in biological bodies. To achieve this goal, comprehensive studies were conducted to investigate three different techniques: x-ray fluorescence molecular imaging, material identification (specification) from photon counting CT, and a deep learning-based approach to correcting photon counting CT data distortion. X-ray fluorescence molecular imaging (XFMI) has shown great promise as a low-cost molecular imaging modality with high sensitivity for clinical and pre-clinical applications. In this study, the effects of the excitation beam spectrum on the molecular sensitivity of XFMI were experimentally investigated by quantitatively deriving the minimum detectable concentration (MDC) under a fixed surface entrance dose of 200 mR at three different excitation beam spectra. The results show that the MDC can be readily improved by a factor of 5.26 via excitation spectrum optimization. Furthermore, a numerical model was developed and validated against the experimental data (≥0.976). The numerical model can be used to optimize XFMI system configurations to further improve the molecular sensitivity. Findings from this investigation could find application in in vivo pre-clinical small-animal XFMI in the future. Photon counting CT (PCCT) is an emerging technique that can distinguish photon energy and generate much richer image data containing x-ray spectral information compared to conventional CT. In this study, a physics model was developed based on x-ray matter interaction physics to calculate the effective atomic number (Z_eff) and effective electron density (ρ_e) from PCCT image data for material identification. To validate the physics model, Z_eff and ρ_e were calculated under various energy conditions for many materials. The relative standard deviations are less than 1% in most cases (161 out of 168), which shows that the developed model achieves good accuracy and robustness to energy conditions. To study the feasibility of applying the model to PCCT image data for material identification, both a PCCT system numerical simulation and a physical experiment were conducted. The results show that different materials can be clearly identified in the Z_eff-ρ_e map (with relative error ≤ 8.8%). The model has value as a material identification scheme for PCCT systems in practical use in the future. Although PCCT appears to be a significant breakthrough in the CT imaging field, severe data distortion problems greatly limit the application of PCCT in practice. Lately, deep learning (DL) neural networks have demonstrated tremendous success in the medical imaging field. In this study, a deep learning neural network-based PCCT data distortion correction method was proposed. When the algorithm is applied to the test dataset, the accuracy of the PCCT data is greatly improved (RMSE improved by 73.7%). Compared with traditional data correction approaches such as maximum likelihood, the deep learning approach demonstrates superiority in terms of RMSE, SSIM, PSNR, and, most importantly, runtime (4053.21 sec vs. 1.98 sec). The proposed method has the potential to facilitate PCCT studies and applications in practice. / Doctor of Philosophy / X-ray Computed Tomography (CT) has played a central role in clinical imaging since it was invented in 1972. Its distinguishing characteristic is the ability to generate three-dimensional images with comprehensive inner structural information at high speed (in less than one second). However, traditional CT imaging lacks material-specific capability due to its image formation mechanism, which prevents its use for molecular imaging. Molecular imaging plays a central role in present and future biomedical research and in clinical diagnosis and treatment. For example, imaging of biological processes and molecular markers can provide unprecedentedly rich information, with huge potential for individualized therapies, novel drug design, earlier diagnosis, and personalized medicine. Therefore, there is a pressing need to equip the traditional CT imaging technique with material-specific capability for molecular imaging purposes. This dissertation comprehensively investigates three different techniques: x-ray fluorescence molecular imaging, material identification (specification) from photon counting CT, and a deep learning-based approach to correcting photon counting CT data distortion. X-ray fluorescence molecular imaging utilizes the fluorescence signal to achieve molecular imaging in CT; material identification can be achieved based on the rich image data from PCCT; and the deep learning-based correction method is an efficient approach for PCCT data distortion correction, which furthermore can boost its performance on material identification. With those techniques, the material-specific capability of CT can be greatly enhanced, enabling molecular imaging in biological bodies.
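For reference, the effective atomic number and effective electron density used for material identification in spectral CT are commonly defined through power-law mixture rules; the following is a standard textbook form, not necessarily the exact physics model developed in this dissertation.

```latex
% Common definitions used for material identification in spectral CT.
% f_i: fractional electron contribution of element i, Z_i: atomic number,
% N_A: Avogadro's number, w_i: mass fraction, A_i: atomic mass, rho: mass density.
\[
  Z_{\mathrm{eff}} = \left( \sum_i f_i \, Z_i^{\,m} \right)^{1/m}, \qquad m \approx 2.94,
\]
\[
  \rho_e = \rho \, N_A \sum_i \frac{w_i \, Z_i}{A_i}.
\]
```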
93

Addressing Challenges of Modern News Agencies via Predictive Modeling, Deep Learning, and Transfer Learning

Keneshloo, Yaser 22 July 2019 (has links)
Today's news agencies are moving from traditional journalism, where publishing just a few news articles per day was sufficient, to modern content generation mechanisms that create thousands of news pieces every day. With the growth of these modern news agencies comes the arduous task of properly handling the massive amount of data generated for each news article. Therefore, news agencies are constantly seeking solutions to facilitate and automate some of the tasks previously done by humans. In this dissertation, we focus on some of these problems and provide solutions for two broad problems that help a news agency not only gain a wider view of reader behaviour around an article but also obtain automated tools that ease the job of editors in summarizing news articles. These two disjoint problems aim at improving the users' reading experience by helping content generators monitor and focus on poorly performing content while allowing them to promote well-performing content. We first focus on the task of popularity prediction of news articles via a combination of regression, classification, and clustering models. We next focus on the problem of generating automated text summaries for long news articles using deep learning models. The first problem helps the content developer understand how a news article performs over the long run, while the second provides automated tools for content developers to generate summaries for each news article. / Doctor of Philosophy / Nowadays, each person is exposed to an immense amount of information from social media, blog posts, and online news portals. Among these sources, news agencies are one of the main content providers for people around the world. Contemporary news agencies are moving from traditional journalism to modern techniques from different angles. This is achieved either by building smart tools to track readers' reactions around a specific news article or by providing automated tools that help editors deliver higher-quality content to readers. These systems must not only scale well with the growth of readership but also be able to process ad-hoc requests precisely, since most of the policies and decisions in these agencies are made around the results of these analytical tools. As part of this movement towards adopting new technologies for smart journalism, we have worked on various problems with The Washington Post news agency, building tools for predicting the popularity of a news article and an automated text summarization model. We develop a model that monitors each news article after its publication and predicts the number of views the article will receive within the next 24 hours. This model helps the content creator not only promote potentially viral articles on the main page of the web portal or on social media, but also gives editors intuition about potentially poorly performing articles so that they can edit the content of those articles for better exposure. On the other hand, current news agencies generate more than a thousand news articles per day, and writing three to four summary sentences for each of these news pieces is not only expensive and time-consuming but will also become infeasible in the near future. Therefore, we also develop a separate model for automated text summarization that generates summary sentences for a news article. Our model generates summaries by selecting the most salient sentences in the news article and paraphrasing them into shorter sentences that can serve as a summary of the entire document.
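As a rough illustration of the popularity-prediction side (not the dissertation's actual model or features), one can regress 24-hour views on early engagement signals with an off-the-shelf regressor; the feature set and synthetic data below are purely hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical features observed shortly after publication:
# [views in first hour, tweets in first hour, article length, hour of day]
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
y = 2000 * X[:, 0] + 500 * X[:, 1] + rng.normal(0, 50, 5000)  # synthetic 24-hour views

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("MAE on held-out articles:", mean_absolute_error(y_test, model.predict(X_test)))
```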
94

Vehicle Detection in Deep Learning

Xiao, Yao 08 July 2019 (has links)
Computer vision techniques are becoming increasingly popular. For example, face recognition is used to help police find criminals, vehicle detection is used to prevent drivers from serious traffic accidents, and written word recognition is used to convert written words into printed words. Despite the rapid development of vehicle detection driven by deep learning techniques, there are still concerns about the performance of state-of-the-art vehicle detection techniques. For example, state-of-the-art vehicle detectors are restricted by the large variation of scales. People working on vehicle detection are developing techniques to solve this problem. This thesis proposes an advanced vehicle detection model that adopts two classical neural networks: the residual neural network and the region proposal network. The model utilizes the residual neural network as a feature extractor and the region proposal network to detect potential objects. / Master of Science / Computer vision techniques are becoming increasingly popular. For example, face recognition is used to help police find criminals, vehicle detection is used to prevent drivers from serious traffic accidents, and written word recognition is used to convert written words into printed words. Despite the rapid development of vehicle detection driven by deep learning techniques, there are still concerns about the performance of state-of-the-art vehicle detection techniques. For example, state-of-the-art vehicle detectors are restricted by the large variation of scales. People working on vehicle detection are developing techniques to solve this problem. This thesis proposes an advanced vehicle detection model that utilizes deep learning techniques to detect potential objects.
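The combination of a residual-network feature extractor with a region proposal network corresponds to the Faster R-CNN family of detectors. A minimal off-the-shelf sketch with torchvision is shown below; it uses COCO-pretrained weights as a stand-in rather than the model trained in this thesis.

```python
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# ResNet-50 backbone extracts features; the built-in RPN proposes candidate boxes.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

# Dummy input: a list of 3xHxW float tensors with values in [0, 1].
images = [torch.rand(3, 480, 640)]
with torch.no_grad():
    outputs = model(images)

# Each output dict contains 'boxes', 'labels', and 'scores';
# COCO label 3 is 'car', so vehicle detections can be filtered by label.
car_boxes = outputs[0]["boxes"][outputs[0]["labels"] == 3]
print(car_boxes.shape)
```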
95

The Art of Deep Connection - Towards Natural and Pragmatic Conversational Agent Interactions

Ray, Arijit 12 July 2017 (has links)
As research in Artificial Intelligence (AI) advances, it is crucial to focus on seamless communication between humans and machines in order to accomplish tasks effectively. Smooth human-machine communication requires the machine to be sensible and human-like while interacting with humans, while simultaneously being capable of extracting the maximum information it needs to accomplish the desired task. Since many of the tasks machines are asked to solve today involve understanding images, training machines to hold human-like and effective image-grounded conversations with humans is one important step towards achieving this goal. Although we now have agents that can answer questions asked about images, they are prone to failure from confusing input and cannot, in turn, ask clarification questions to extract the desired information from humans. Hence, as a first step, we direct our efforts towards making Visual Question Answering agents human-like by making them resilient to confusing inputs that do not otherwise confuse humans. Not only is it crucial for a machine to answer questions reasonably, it should also know how to ask questions sequentially to extract the desired information from a human. Hence, we introduce a novel game called the Visual 20 Questions Game, where a machine tries to figure out a secret image a human has picked by having a natural language conversation with the human. Using deep learning techniques such as recurrent neural networks and sequence-to-sequence learning, we demonstrate reasonable, scalable performance on both tasks. / Master of Science
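To make the Visual 20 Questions setting concrete, the toy sketch below plays the game with a simple information-gain heuristic over a pool of candidate images. This is only an illustrative baseline for the game mechanics, not the recurrent sequence-to-sequence agents developed in the thesis, and the yes/no answer matrix is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_questions = 100, 40
# answers[i, q] = 1 if the (hypothetical) yes/no question q is true of image i.
answers = rng.integers(0, 2, size=(n_images, n_questions))
secret = int(rng.integers(n_images))          # the image the "human" picked

candidates = np.ones(n_images, dtype=bool)    # images still consistent with replies
for turn in range(20):
    # Ask the question whose yes/no split over the remaining candidates is
    # closest to 50/50, i.e., the one with the highest expected information gain.
    frac_yes = answers[candidates].mean(axis=0)
    q = int(np.argmin(np.abs(frac_yes - 0.5)))
    reply = answers[secret, q]
    candidates &= answers[:, q] == reply
    if candidates.sum() == 1:
        break

print("guess:", int(np.flatnonzero(candidates)[0]), "truth:", secret)
```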
96

A Deep Learning Approach to Predict Full-Field Stress Distribution in Composite Materials

Sepasdar, Reza 17 May 2021 (has links)
This thesis proposes a deep learning approach to predict stress at various stages of mechanical loading in 2-D representations of fiber-reinforced composites. More specifically, the full-field stress distribution in the elastic regime and at an early stage of damage initiation is predicted based on the microstructural geometry. The data sets required for training and validation are generated via high-fidelity simulations of several randomly generated microstructural representations with complex geometries. Two deep learning approaches are employed and their performances are compared: a fully convolutional generator and Pix2Pix translation. Both approaches are shown to predict the stress distributions at the designated loading stages with high accuracy. / M.S. / Fiber-reinforced composites are materials with excellent mechanical performance. They are the major material in the construction of space shuttles, aircraft, high-end cars, and other structures designed to be lightweight and at the same time extremely stiff and strong. Due to their broad application, especially in sensitive industries, fiber-reinforced composites have always been a subject of meticulous research. Studies aimed at better understanding the mechanical behavior of these composites have to be conducted at the micro-scale. Since experimental studies at the micro-scale are expensive and extremely limited, numerical simulations are normally adopted. Numerical simulations, however, are complex, time-consuming, and highly computationally expensive even when run on powerful supercomputers. Hence, this research aims to leverage artificial intelligence to reduce the complexity and computational cost associated with existing high-fidelity simulation techniques. We propose a robust deep learning framework that can be used as a replacement for conventional numerical simulations to predict important mechanical attributes of fiber-reinforced composite materials at the micro-scale. The proposed framework is shown to have high accuracy in predicting complex phenomena, including stress distributions at various stages of mechanical loading.
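A fully convolutional image-to-image generator of the kind described, mapping a microstructure geometry image to a full-field stress map, can be sketched in PyTorch as below; the layer sizes, channel counts, and loss are illustrative only and do not reproduce the thesis architecture.

```python
import torch
import torch.nn as nn

class StressFieldGenerator(nn.Module):
    """Toy encoder-decoder: 1-channel geometry image -> 1-channel stress map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = StressFieldGenerator()
geometry = torch.rand(8, 1, 128, 128)       # batch of microstructure images
target_stress = torch.rand(8, 1, 128, 128)  # stress fields from high-fidelity simulation
loss = nn.functional.mse_loss(model(geometry), target_stress)
loss.backward()
print("pixel-wise MSE:", loss.item())
```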
97

Naturally Generated Decision Trees for Image Classification

Ravi, Sumved Reddy 31 August 2021 (has links)
Image classification has been a pivotal area of research in Deep Learning, with a vast body of literature working to tackle the problem and constantly striving to achieve higher accuracies. This push to achieve greater prediction accuracy, however, has further exacerbated the black-box phenomenon inherent in neural networks, and even more so in CNN-style deep architectures. Likewise, it has led to the development of highly tuned methods, suitable only for specific data sets and requiring significant work to adapt to new data. Although these models are capable of producing highly accurate predictions, we have little to no ability to understand the decision process a network takes to reach a conclusion. This poses a difficulty in use cases such as medical diagnostic tools or autonomous vehicles, which require insight into prediction reasoning to validate a conclusion or to debug a system. In essence, modern applications that utilize deep networks are able to learn to produce predictions, but lack interpretability and a deeper understanding of the data. Given this key point, we look to decision trees, opposite in nature to deep networks, with a high level of interpretability but a low capacity for learning. In our work we strive to merge these two techniques as a means to maintain the capacity for learning while providing insight into the decision process. More importantly, we look to expand the understanding of class relationships through a tree architecture. Our ultimate goal in this work is to create a technique able to automatically build a visual-feature-based knowledge hierarchy for class relations, applicable broadly to any data set or combination thereof. We maintain these goals in an effort to move away from task-specific systems and instead toward artificial general intelligence (AGI). AGI requires a deeper understanding over a broad range of information, and more so the ability to learn new information over time. In our work we embed networks of varying sizes and complexity within decision trees at the node level, where each node network is responsible for selecting the next branch path in the tree. Each leaf node represents a single class, and all parent and ancestor nodes represent groups of classes. We designed the method such that classes are reasonably grouped by their visual features, with parent and ancestor nodes representing hidden superclasses. Our work aims to introduce this method as a small step towards AGI, where class relations are understood through an automatically generated decision tree (representing a class hierarchy) capable of accurate image classification. / Master of Science / Many modern applications make use of deep networks for image classification. Often these networks are incredibly complex in architecture and applicable only to specific tasks and data. Standard approaches use just a neural network to produce predictions. However, the internal decision process of the network remains a black box due to the nature of the technique. As more complex human-related applications, such as medical image diagnostic tools or autonomous driving software, are being created, they require an understanding of the reasoning behind a prediction. To provide this insight into the prediction reasoning, we propose a technique that merges decision trees and deep networks. Tested on the MNIST image data set, we were able to achieve an accuracy of over 99.0%. We were also able to achieve an accuracy of over 73.0% on the CIFAR-10 image data set. Our method is found to create decision trees that are easily understood and are reasonably capable of image classification.
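The idea of embedding a small network at each tree node, with that node's network routing an image down one of its branches until a leaf class is reached, can be sketched as follows; the routing networks and the class groupings here are placeholders, not the architecture or hierarchy trained in the thesis.

```python
import torch
import torch.nn as nn

class TreeNode(nn.Module):
    """Internal node: a small CNN scores each child branch; a leaf holds a class label."""
    def __init__(self, children=None, label=None):
        super().__init__()
        self.children = nn.ModuleList(children) if children else None
        self.label = label
        if self.children:
            self.router = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(8, len(self.children)),
            )

    def forward(self, x):
        # Greedy routing: follow the highest-scoring branch until a leaf is reached.
        if self.children is None:
            return self.label
        branch = self.router(x).argmax(dim=1).item()
        return self.children[branch](x)

# Toy hierarchy: two hidden "superclasses", each grouping visually similar digits.
tree = TreeNode(children=[
    TreeNode(children=[TreeNode(label=0), TreeNode(label=6)]),   # round-ish digits
    TreeNode(children=[TreeNode(label=1), TreeNode(label=7)]),   # stroke-like digits
])
print(tree(torch.rand(1, 1, 28, 28)))  # prints the predicted leaf label
```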
98

Deep Learning for Biological Problems

Elmarakeby, Haitham Abdulrahman 14 June 2017 (has links)
The last decade has witnessed a tremendous increase in the amount of available biological data. Different technologies for measuring the genome, epigenome, transcriptome, proteome, metabolome, and microbiome in different organisms are producing large amounts of high-dimensional data every day. High-dimensional data provides unprecedented challenges and opportunities for gaining a better understanding of biological systems. Unlike other data types, biological data imposes additional constraints on researchers. Biologists are not only interested in accurate predictive models that capture complex input-output relationships, but also seek a deep understanding of these models. In the last few years, deep models have achieved better performance in computational prediction tasks compared to other approaches. Deep models have been extensively used in processing natural data, such as images, text, and recently sound. However, the application of deep models in biology has been limited. Here, I propose to use deep models for output prediction, dimension reduction, and feature selection on biological data to obtain better interpretation and understanding of biological systems. I demonstrate the applicability of deep models in a domain that has a high and direct impact on health care. In this research, novel deep learning models are introduced to solve pressing biological problems. The research shows that deep models can be used to automatically extract features from raw inputs without the need to manually craft features. Deep models are used to reduce the dimensionality of the input space, which results in faster training. Deep models are shown to have better performance and less variance in their output compared to shallow models, even when an ensemble of shallow models is used. Deep models are also shown to handle non-classical inputs such as sequences, processing them naturally to automatically extract useful features. / Ph. D.
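One standard way deep models perform dimension reduction on high-dimensional omics data is an autoencoder whose bottleneck serves as the reduced representation for downstream prediction; the sketch below is generic PyTorch, with input and code dimensions chosen arbitrarily rather than taken from the dissertation.

```python
import torch
import torch.nn as nn

class OmicsAutoencoder(nn.Module):
    """Compress high-dimensional expression profiles to a low-dimensional code."""
    def __init__(self, n_features=20000, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, n_features),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = OmicsAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 20000)                 # synthetic expression matrix (samples x genes)
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)   # reconstruction objective
loss.backward(); opt.step()
print(code.shape)                         # (16, 64): reduced features for downstream tasks
```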
99

The Impact of Corporate Crisis on Stock Returns: An Event-driven Approach

Song, Ziqian 25 August 2020 (has links)
Corporate crisis events such as cyber attacks, executive scandals, facility accidents, fraud, and product recalls can severely damage customer trust and firm reputation, which may lead to tremendous losses in sales and firm equity value. My research aims to integrate information available on the market to assist firms in tackling crisis events and to provide insight for better decision making. We first study the impact of crisis events on firm performance. We build a hybrid deep learning model that utilizes information from financial news, social media, and historical stock prices to predict firm stock performance during firm crisis events. We develop new methodologies that can extract, select, and represent useful features from textual data. Our hybrid deep learning model achieves 68.8% prediction accuracy for firm stock movements. Furthermore, we explore the underlying mechanisms behind how stakeholders adopt and propagate event information on social media, as well as how this affects firm stock movements during such events. We adopt an extended epidemiology model, SEIZ, to simulate information propagation on social media during a crisis. The SEIZ model classifies people into four states (susceptible, exposed, infected, and skeptical). By modeling the propagation of firm-initiated information and user-initiated information on Twitter, we simulate the dynamic process of Twitter stakeholders transitioning from one state to another. Based on the modeling results, we quantitatively measure how stakeholders adopt firm crisis information on Twitter over time. We then empirically evaluate the impact of different information adoption processes on firm stock performance. We observe that investors often react very positively when a higher portion of stakeholders adopt firm-initiated information on Twitter, and negatively when a higher portion of stakeholders adopt user-initiated information. Additionally, we identify features that can indicate firm stock movement during corporate crisis events. We adopt Layer-wise Relevance Propagation (LRP) to extract language features that can serve as predictive variables for stock surges and stock plunges. Based on our trained hybrid deep learning model, we generate relevance scores for language features in news titles and tweets, which indicate how much these features contributed to the final predictions of stock surges and plunges. / Doctor of Philosophy / Corporate crisis events such as cyber attacks, executive scandals, facility accidents, fraud, and product recalls can severely damage customer trust and firm reputation, which may lead to tremendous losses in sales and firm equity value. My research aims to integrate information available on the market to assist firms in tackling crisis events and to provide insight for better decision making. We first study the impact of crisis events on firm performance. We investigate five types of crisis events for S&P 500 companies, with 14,982 related news titles and 4.3 million relevant tweets. We build an event-driven hybrid deep learning model that utilizes information from financial news, social media, and historical stock prices to predict firm stock performance during firm crisis events. Furthermore, we explore how stakeholders adopt and propagate event information on social media, as well as how this affects firm stock movements during such events. Social media has become an increasingly important channel for corporate crisis management. However, little is known about how crisis information propagates on social media. We observe that investors often react very positively when a higher portion of stakeholders adopt firm-initiated information on Twitter, and negatively when a higher portion of stakeholders adopt user-initiated information. In addition, we find that the language used in crisis news and social media discussions can have surprising predictive power for the firm's stock. Thus, we develop a methodology to identify the importance of text features associated with firm performance during crisis events, such as predictive words or phrases.
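The SEIZ model mentioned above is usually written as a system of ordinary differential equations over the susceptible, exposed, infected, and skeptic compartments. The sketch below integrates one commonly published form of these equations with SciPy, using made-up parameter values rather than the rates fitted in the dissertation.

```python
import numpy as np
from scipy.integrate import odeint

def seiz(y, t, N, beta, b, rho, eps, p, l):
    """One common form of the SEIZ equations (susceptible, exposed, infected, skeptic)."""
    S, E, I, Z = y
    dS = -beta * S * I / N - b * S * Z / N
    dE = (1 - p) * beta * S * I / N + (1 - l) * b * S * Z / N - rho * E * I / N - eps * E
    dI = p * beta * S * I / N + rho * E * I / N + eps * E
    dZ = l * b * S * Z / N
    return [dS, dE, dI, dZ]

N = 100_000                                  # stakeholders exposed to the crisis event
y0 = [N - 10, 0, 10, 0]                      # a few initial adopters of the information
t = np.linspace(0, 72, 300)                  # hours after the crisis breaks
params = (N, 0.8, 0.3, 0.4, 0.1, 0.6, 0.5)   # illustrative rates, not fitted values
S, E, I, Z = odeint(seiz, y0, t, args=params).T
print("share who adopted the information after 72 h:", I[-1] / N)
```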
100

Product Defect Discovery and Summarization from Online User Reviews

Zhang, Xuan 29 October 2018 (has links)
Product defects concern various groups of people, such as customers, manufacturers, and government officials. Thus, defect-related knowledge and information are essential. With the growth of social media, online forums, and Internet commerce, people post a vast amount of feedback on products, which forms a good source for the automatic acquisition of knowledge about defects. However, given the vast volume of online reviews, automatically identifying critical product defects and summarizing the related information from the huge number of user reviews is challenging, even when we target only the negative reviews. As a branch of opinion mining research, existing defect discovery methods mainly focus on classifying the type of product issue, which is not enough for users. People expect to see defect information in multiple facets, such as product model, component, and symptom, which are necessary to understand the defects and quantify their influence. In addition, people are eager to seek problem resolutions once they spot defects. These challenges cannot be solved by existing aspect-oriented opinion mining models, which seldom consider the defect entities mentioned above. Furthermore, users also want to better capture the semantics of review text and to summarize product defects more accurately in the form of natural language sentences. However, existing text summarization models, including neural networks, can hardly generalize to user review summarization due to the lack of labeled data. In this research, we explore topic models and neural network models for product defect discovery and summarization from user reviews. Firstly, a generative Probabilistic Defect Model (PDM) is proposed, which models the generation process of user reviews from key defect entities including product Model, Component, Symptom, and Incident Date. Using the joint topics over these aspects produced by PDM, people can discover defects represented by those entities. Secondly, we devise a Product Defect Latent Dirichlet Allocation (PDLDA) model, which describes how negative reviews are generated from defect elements like Component, Symptom, and Resolution. The interdependency between these entities is modeled by PDLDA as well. PDLDA answers not only what the defects look like, but also how to address them using the crowd wisdom hidden in user reviews. Finally, the problem of how to summarize user reviews more accurately, and better capture the semantics in them, is studied using deep neural networks, especially Hierarchical Encoder-Decoder Models. For each of the research topics, comprehensive evaluations are conducted on heterogeneous datasets to justify the effectiveness and accuracy of the proposed models. Further, on the theoretical side, this research contributes to the research streams on product defect discovery, opinion mining, probabilistic graphical models, and deep neural network models. Regarding impact, these techniques will benefit related users such as customers, manufacturers, and government officials. / Ph. D. / Product defects concern various groups of people, such as customers, manufacturers, and government officials. Thus, defect-related knowledge and information are essential. With the growth of social media, online forums, and Internet commerce, people post a vast amount of feedback on products, which forms a good source for the automatic acquisition of knowledge about defects. However, given the vast volume of online reviews, automatically identifying critical product defects and summarizing the related information from the huge number of user reviews is challenging, even when we target only the negative reviews. People expect to see defect information in multiple facets, such as product model, component, and symptom, which are necessary to understand the defects and quantify their influence. In addition, people are eager to seek problem resolutions once they spot defects. Furthermore, users want to summarize product defects more accurately in the form of natural language sentences. These requirements cannot be satisfied by existing methods, which seldom consider the defect entities mentioned above and hardly generalize to user review summarization. In this research, we develop novel Machine Learning (ML) algorithms for product defect discovery and summarization. Firstly, we study how to identify product defects and their related attributes, such as Product Model, Component, Symptom, and Incident Date. Secondly, we devise a novel algorithm that can discover product defects and the related Component, Symptom, and Resolution from online user reviews. This method tells not only what the defects look like, but also how to address them using the crowd wisdom hidden in user reviews. Finally, we address the problem of how to summarize user reviews in the form of natural language sentences using a paraphrase-style method. On the theoretical side, this research contributes to multiple research areas in Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning. Regarding impact, these techniques will benefit related users such as customers, manufacturers, and government officials.
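As a point of reference for the topic-model side, plain LDA (rather than the PDM and PDLDA models proposed here, which add defect-specific entities such as Component, Symptom, and Resolution) can discover latent themes in negative reviews; the gensim sketch below uses an invented toy corpus purely for illustration.

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy negative-review corpus, already tokenized; a real pipeline would also
# lowercase, remove stop words, and lemmatize first.
reviews = [
    ["engine", "stalls", "highway", "speed"],
    ["brake", "pedal", "soft", "dealer", "replaced"],
    ["engine", "noise", "cold", "start"],
    ["brake", "grinding", "noise", "rotor"],
]

dictionary = corpora.Dictionary(reviews)
corpus = [dictionary.doc2bow(doc) for doc in reviews]

# Two latent topics; defect-entity-aware models would instead structure these
# into Component / Symptom / Resolution aspects.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=20, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```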
