Human Learning-Augmented Machine Learning Frameworks for Text Analytics

Artificial intelligence (AI) has made astonishing breakthroughs in recent years and achieved performance comparable to, or even better than, that of humans on many real-world tasks and applications. However, it is still far from reaching human-level intelligence in many ways. Specifically, although AI may take inspiration from neuroscience and cognitive psychology, it is dramatically different from humans in both what it learns and how it learns. Given that current AI cannot learn as effectively and efficiently as humans do, a natural solution is to analyze human learning processes and project them into AI design. This dissertation presents three studies that examined cognitive theories and established frameworks to integrate crucial human cognitive learning elements into AI algorithms to build human learning–augmented AI in the context of text analytics.

The first study examined compositionality—how information is decomposed into small pieces, which are then recomposed to generate larger pieces of information. Compositionality is considered a fundamental cognitive process and one of the best explanations for humans' ability to learn quickly. Thus, integrating compositionality, which AI has not yet mastered, could potentially improve its learning performance. Focusing on text analytics, we first examined three levels of compositionality that can be captured in language. We then adopted the design science paradigm to integrate these three types of compositionality into a deep learning model and build a unified learning framework. Lastly, we extensively evaluated the design on a series of text analytics tasks and confirmed its superiority in improving AI's learning effectiveness and efficiency.
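As a rough illustration of multi-level composition (the abstract does not specify the dissertation's actual architecture), the minimal PyTorch sketch below composes character embeddings into word vectors and word vectors into a sentence vector. The layer types, sizes, and padding scheme are assumptions for demonstration only.

```python
# Illustrative sketch only: hierarchical composition of text representations,
# characters -> words -> sentence. Not the dissertation's actual model.
import torch
import torch.nn as nn

class HierarchicalComposer(nn.Module):
    def __init__(self, n_chars=128, char_dim=32, word_dim=64, sent_dim=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_to_word = nn.GRU(char_dim, word_dim, batch_first=True)
        self.word_to_sent = nn.GRU(word_dim, sent_dim, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (n_words, max_word_len) character indices for one sentence,
        # zero-padded for simplicity (no masking in this toy version).
        chars = self.char_emb(char_ids)            # (n_words, max_len, char_dim)
        _, word_h = self.char_to_word(chars)       # (1, n_words, word_dim)
        _, sent_h = self.word_to_sent(word_h)      # (1, 1, sent_dim)
        return sent_h.squeeze(0).squeeze(0)        # sentence vector

if __name__ == "__main__":
    torch.manual_seed(0)
    sentence = ["human", "learning", "augmented", "ai"]
    max_len = max(len(w) for w in sentence)
    ids = torch.zeros(len(sentence), max_len, dtype=torch.long)
    for i, w in enumerate(sentence):
        ids[i, :len(w)] = torch.tensor([ord(c) % 128 for c in w])
    vec = HierarchicalComposer()(ids)
    print(vec.shape)  # torch.Size([128])
```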

The second study focused on transfer learning, a core process in human learning. People can efficiently and effectively use knowledge learned previously to solve new problems. Although transfer learning has been extensively studied in AI research and is often a standard procedure in building machine learning models, existing techniques cannot transfer knowledge as effectively and efficiently as humans do. To solve this problem, we first drew on the theory of transfer learning to analyze the human transfer learning process and identify the key elements that elude AI. Then, following the design science paradigm, we proposed a novel transfer learning framework that explicitly captures these cognitive elements. Finally, we assessed the design artifact's capability to improve transfer learning performance and validated that our proposed framework outperforms state-of-the-art approaches on a broad set of text analytics tasks.
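For context, the common transfer-learning baseline that such frameworks build on can be sketched as reusing a pretrained encoder and training a small task-specific head on the new task. The encoder below is a stand-in module rather than a real pretrained checkpoint, and the dissertation's framework adds cognitive elements beyond this baseline that the abstract does not detail.

```python
# Illustrative sketch only: reuse a "pretrained" encoder, train a new head.
import torch
import torch.nn as nn

def build_transfer_model(pretrained_encoder: nn.Module,
                         hidden_dim: int, n_classes: int,
                         freeze_encoder: bool = True) -> nn.Module:
    """Attach a fresh classification head to a pretrained encoder."""
    if freeze_encoder:
        for p in pretrained_encoder.parameters():
            p.requires_grad = False          # keep source-task knowledge fixed
    head = nn.Linear(hidden_dim, n_classes)  # only the head learns the new task
    return nn.Sequential(pretrained_encoder, head)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-in encoder; in practice this would be loaded from a checkpoint
    # trained on a source task.
    encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
    model = build_transfer_model(encoder, hidden_dim=128, n_classes=2)
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3)
    x, y = torch.randn(8, 300), torch.randint(0, 2, (8,))
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"fine-tuning step loss: {loss.item():.4f}")
```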

The two studies above researched knowledge composition and knowledge transfer, while the third study directly addressed knowledge itself by focusing on knowledge structure, retrieval, and utilization processes. We identified that, despite the great progress achieved by current knowledge-aware AI algorithms, they do not handle complex knowledge in a way that is consistent with how humans manage it. Grounded in schema theory, we proposed a new design framework that enables AI-based text analytics algorithms to retrieve and utilize knowledge in a more human-like way. We confirmed that our framework outperformed current knowledge-based algorithms by large margins with strong robustness. In addition, we evaluated in greater detail the efficacy of each of the key design elements.

Doctor of Philosophy

This dissertation presents three studies that examined cognitive theories and established frameworks to integrate crucial human cognitive learning elements into artificial intelligence (AI) algorithm designs to build human learning–augmented AI in the context of text analytics. The first study examined compositionality—how information is decomposed into small pieces, which are then recomposed to generate larger pieces of information. Following the design science research methodology, we proposed a novel deep learning–based framework that incorporates three levels of compositionality in language and significantly improves learning performance on a series of text analytics tasks. The second study went beyond that basic element and focused on transfer learning—how humans can efficiently and effectively use knowledge learned previously to solve new problems. Our novel transfer learning framework, grounded in the theory of transfer learning, was validated on a broad set of text analytics tasks with improved learning effectiveness and efficiency. Finally, the third study directly addressed knowledge itself by focusing on knowledge structure, retrieval, and utilization processes. We drew on schema theory and proposed a new design framework that enables AI-based text analytics algorithms to retrieve and utilize knowledge in a more human-like way. We confirmed our design's superiority over existing knowledge-based algorithms on several common text analytics tasks.
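As a loose stand-in for the schema-guided retrieve-and-utilize idea in the third study (the abstract does not specify the actual mechanism), the sketch below retrieves the stored knowledge entry most similar to an input text. The toy knowledge base, the bag-of-words hashing embedding, and the similarity measure are all illustrative assumptions.

```python
# Illustrative sketch only: similarity-based retrieval from a tiny knowledge
# base, a rough proxy for schema-guided knowledge retrieval and utilization.
import torch

KNOWLEDGE_BASE = [
    "compositionality builds sentence meaning from word meanings",
    "transfer learning reuses knowledge from a source task",
    "a schema organizes related concepts into a structure",
]

def embed(text: str, dim: int = 64) -> torch.Tensor:
    """Toy bag-of-words embedding: hash each token into a fixed-size vector."""
    vec = torch.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec / (vec.norm() + 1e-8)

def retrieve(query: str, k: int = 1) -> list:
    """Return the k knowledge entries most similar to the query."""
    q = embed(query)
    scores = torch.stack([embed(entry) for entry in KNOWLEDGE_BASE]) @ q
    top = torch.topk(scores, k).indices.tolist()
    return [KNOWLEDGE_BASE[i] for i in top]

if __name__ == "__main__":
    print(retrieve("how does transfer learning reuse prior knowledge"))
```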

Identifier oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/106567
Date 18 May 2020
Creators Xia, Long
Contributors Management, Wang, Alan Gang, Fan, Weiguo, Seref, Onur, Shen, Wenqi, Abrahams, Alan Samuel
Publisher Virginia Tech
Source Sets Virginia Tech Theses and Dissertation
Detected Language English
Type Dissertation
Format ETD, application/pdf, application/pdf, application/pdf
Rights In Copyright, http://rightsstatements.org/vocab/InC/1.0/
