  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

LISZT’S SONATA IN B MINOR: ANALYTICAL AND HERMENEUTIC INQUIRIES

Kim, Yumi January 2019
This dissertation pursues three objectives: (1) a comprehensive formal analysis of Franz Liszt's Sonata in B minor that combines appropriate analytical models; (2) various hermeneutic approaches to the sonata that trace thematic and expressive transformations and deviations from sonata conventions; and (3) a comprehensive interpretation drawing the formal and hermeneutic analyses together within the historical, religious, and political contexts surrounding Liszt and the sonata, which differentiates this dissertation from preceding research on the sonata and on other nineteenth-century sonatas. Chapter 1 begins with a literature review, focusing on (1) formal analyses of the sonata and (2) programmatic approaches. Because the sonata's formal boundaries remain disputed in prior research, I argue that the work demands appropriate analytical methodologies to uncover its exceptional form. These methodologies include James Hepokoski and Warren Darcy's sonata theory (2006), Leonard Meyer's "secondary parameters" (1989), and Peter Smith's "dimensional counterpoint" (2005), which are discussed in Chapter 2. Hepokoski and Darcy's sonata theory reveals the conventions of eighteenth- and nineteenth-century sonatas and suggests hermeneutic interpretations that arise from deviations from these conventions. Secondary parameters and dimensional counterpoint are critical in shaping musical processes and form. I provide a three-step sonata analysis grounded in a combination of these methodologies. My analysis offers a comprehensive view of the entire work as a one-movement sonata form, including structural, motivic, and narrative analyses. Chapter 1 also describes several programmatic approaches that Liszt scholars have developed. However, prior research's preoccupation with a good/evil dichotomy narrows the sonata's possible narratives. Given that the sonata presents five motto themes, its interpretations may be more extensive and complex than previous research has allowed. In Chapter 3 I develop several distinctive hermeneutic readings of the sonata in three sub-sections: (1) a topical approach; (2) a narrative approach; and (3) a Lacanian approach. The topical approach investigates how different topical significations are manifested in Liszt's sonata within nineteenth-century historical and cultural contexts. The narrative approach concentrates on the five mottos presented in the sonata and their motivic and expressive transformations. The Lacanian approach concentrates on the lack of strong cadences in the sonata, relating it to Lacan's concept of the objet petit a, an unattainable object of desire. I then extend the Lacanian viewpoint to interpret an unresolved fully diminished seventh harmony as a sinthome, a symptom that can never be healed. In Chapter 4, I revisit my interpretations from Chapter 3 in light of Liszt's religious convictions and various struggles, bringing about another hermeneutic reading within the political, religious, and theological contexts surrounding Liszt and the sonata. / Music Theory
462

Descriptive Labeling of Document Clusters / Deskriptiv märkning av dokumentkluster

Österberg, Adam January 2022
Labeling is the process of giving a set of data a descriptive name. This thesis dealt with documents that carried no additional information, and aimed at clustering them using topic modeling and labeling them using Wikipedia as a secondary source. Labeling documents is a new field with many potential solutions, and this thesis examined one method in a practical setting. Unstructured data was preprocessed and clustered using a topic model. Frequent words from each cluster were used to generate a search query sent to Wikipedia, where titles and categories from the most relevant pages were stored as candidate labels. Each candidate label was scored by the frequency of common cluster words among the candidate labels, weighted in proportion to the relevance of the original Wikipedia article; relevance was based on the order of appearance in the search results. The five labels with the highest scores were chosen to describe the cluster. The clustered documents consisted of exam questions that students use to practice before a course exam. Each question in the cluster was scored by someone experienced in the relevant topic, who evaluated whether one of the five labels correctly described the content. The method proved unreliable, with only one course receiving labels considered descriptive for most of its questions. A significant problem was the closely related data, with all documents belonging to one overarching category rather than a dataset containing independent topics. However, for one dataset, 80% of the documents received a descriptive label, indicating that labeling using secondary sources has potential but needs to be investigated further.
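To make the pipeline concrete, here is a minimal sketch of the query-and-score loop described in the abstract, assuming the public MediaWiki search API; the scoring weights, function name, and example words are illustrative rather than taken from the thesis, which also scores page categories (omitted here).

```python
# Hypothetical sketch: frequent cluster words form a search query, and the
# titles of the top Wikipedia hits become candidate labels, scored by
# overlap with cluster words and weighted by search rank.
import requests
from collections import Counter

API = "https://en.wikipedia.org/w/api.php"

def candidate_labels(cluster_words, top_k=5, n_pages=10):
    query = " ".join(cluster_words[:10])          # frequent words -> search query
    resp = requests.get(API, params={
        "action": "query", "list": "search",
        "srsearch": query, "srlimit": n_pages, "format": "json",
    }).json()
    hits = resp["query"]["search"]
    scores = Counter()
    for rank, hit in enumerate(hits):
        weight = 1.0 / (rank + 1)                 # earlier hits count more
        title = hit["title"]
        # score a candidate by how many frequent cluster words it contains
        overlap = sum(w.lower() in title.lower() for w in cluster_words)
        scores[title] += weight * (1 + overlap)
    return [label for label, _ in scores.most_common(top_k)]

print(candidate_labels(["gradient", "descent", "loss", "optimizer"]))
```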
463

Gauging Gun-Based Social Movements Frames: Identifying Frames through Topic Modeling and Assessing Public Engagement of Frames through Facebook Media Posts

Prasanna, Ram 07 1900
The lack of success of the gun control movement and the success of the gun rights movement in the United States have prompted research into the root causes. Although political infrastructure, organizational resources, and public interest prove to be important factors in a social movement's success, how each social movement frames its arguments is extremely important for proposing policy initiatives and garnering support. To understand how gun control and gun rights organizations frame their arguments, this study does two things: (1) performs topic modeling on the press statements of six gun control organizations and three gun rights organizations to identify the frames each social movement engages in, and (2) identifies these frames in the most popular gun control and gun rights organizations on Facebook to predict likes, comments, and shares. The study identifies the top frames in the gun control and gun rights social movements and shows how followers of each movement engage with these frames on Facebook.
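A hedged sketch of the two-step design the abstract describes: fit LDA on press statements to learn frames, then use each Facebook post's frame proportions as predictors of engagement. All texts, counts, and model settings below are invented placeholders; the thesis's actual feature set and estimator may differ.

```python
# Sketch: learn frames from press statements, regress engagement on them.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import PoissonRegressor

press_statements = [                      # toy per-organization statements
    "universal background checks save lives and close loopholes",
    "background checks reduce gun violence in our communities",
    "the second amendment protects the right to self defense",
    "law abiding citizens deserve the right to carry firearms",
]
fb_posts = [                              # toy Facebook post texts
    "tell congress to pass universal background checks now",
    "self defense is a constitutional right join us",
]
fb_shares = [340, 125]                    # toy engagement counts per post

vec = CountVectorizer(stop_words="english")
X_press = vec.fit_transform(press_statements)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X_press)                          # learn frames from press statements

# score each Facebook post against the learned frames
X_posts = lda.transform(vec.transform(fb_posts))

# Poisson regression suits count outcomes like shares or comments
reg = PoissonRegressor().fit(X_posts, fb_shares)
print(dict(zip(range(2), reg.coef_.round(3))))
```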
464

Multiset Model Selection and Averaging, and Interactive Storytelling

Maiti, Dipayan 23 August 2012
The Multiset Sampler [Leman et al., 2009] has previously been deployed and developed for efficient sampling from complex stochastic processes. We extend the sampler and the surrounding theory to model selection problems. In such problems, efficient exploration of the model space becomes a challenge, since independent and ad hoc proposals might not be able to jointly propose multiple parameter sets which correctly explain a newly proposed model. To overcome this, we propose a multiset on the model space to enable efficient exploration of multiple model modes with almost no tuning. The Multiset Model Selection (MSMS) framework is based on independent priors for the parameters and model indicators on variables. We show that posterior model probabilities can be easily obtained from multiset averaged posterior model probabilities in MSMS. We also obtain typical Bayesian model averaged estimates for the parameters from MSMS. We apply our algorithm to linear regression, where it allows easy moves between parameter modes of different models, and to probit regression, where it allows jumps between widely varying model-specific covariance structures in the latent space of a hierarchical model. The Storytelling algorithm [Kumar et al., 2006] constructs stories by discovering and connecting latent connections between documents in a network. Such automated algorithms often do not agree with the user's mental map of the data. Hence, systems that incorporate feedback through visual interaction from the user are of immediate importance. We propose a visual analytic framework in which such interactions are naturally incorporated into the existing Storytelling algorithm through a redefinition of the latent topic space used in the similarity measure of the network. The document network can be explored using the newly learned normalized topic weights for each document. Our algorithm thus augments human sensemaking in large document networks by providing a collaborative framework between the underlying model and the user. We formulate the problem as a supervised topic modeling problem, where the supervision is a set of relationships imposed by the user, expressed as inequalities derived from tolerances on edge costs in an inverse shortest path problem. We show a probabilistic modeling of these relationships based on auxiliary variables and propose a Gibbs sampling strategy. We provide detailed results on simulated data and the Atlantic Storm dataset. / Ph. D.
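For orientation, the quantity MSMS targets, posterior model probabilities over a model space, can be approximated for small all-subsets linear regression problems by conventional BIC-based enumeration. The sketch below shows that baseline, not the Multiset Sampler itself, and assumes equal prior probability on every model.

```python
# Baseline only: approximate posterior model probabilities via BIC for
# all-subsets linear regression. MSMS instead samples this quantity,
# which matters when enumeration over 2^p models is infeasible.
import itertools
import numpy as np

def bic_model_posteriors(X, y):
    n, p = X.shape
    bics = {}
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            bics[subset] = n * np.log(rss / n) + (k + 1) * np.log(n)
    b = np.array(list(bics.values()))
    w = np.exp(-(b - b.min()) / 2)          # BIC -> approximate model weights
    w /= w.sum()
    return dict(zip(bics.keys(), w))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2 * X[:, 0] + rng.normal(size=100)      # only the first predictor matters
posts = bic_model_posteriors(X, y)
print(max(posts, key=posts.get))            # most probable model, likely (0,)
```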
465

The Impact of Varied Knowledge on Innovation and the Fate of Organizations

Asgari, Elham 02 August 2019
In my dissertation, I examine varied types of knowledge and how they contribute to innovation generation and selection at both the firm and the industry level, using the emerging industry context of small satellites. My research is divided into three papers. In Paper One, I take a supply-demand perspective and examine how suppliers of technology—with their unique knowledge of science and technology—and users of technology—with their unique knowledge of demand—contribute to innovation generation and selection over the industry life cycle. Results show that the contributions of suppliers and users vary based on unique aspects of innovation, such as novelty, breadth, and coherence, and also over the industry life cycle. In Paper Two, I study how firms overcome science-business tension in their pursuit of novel innovation. I examine unique aspects of knowledge: scientists' business knowledge and CEOs' scientific knowledge. I show that CEOs' scientific knowledge is an important driver of firms' novel pursuits and that this impact is greater when scientists do not have business knowledge. In the third paper, I further examine how scientists with strong technological and scientific knowledge—i.e., star scientists—affect firm innovation generation and selection. With a focus on explorative and exploitative innovation, I develop theory on the boundary conditions of stars' impact on firm-level outcomes. I propose that individual-level contingencies—i.e., stage of employment—and organizational-level contingencies—explorative or exploitative innovation—both facilitate and hinder stars' impact on firms' innovative pursuits. / Doctor of Philosophy / In my dissertation, I study innovation at both the firm level and the industry level, using the emerging industry context of small satellites. My dissertation is divided into three papers. In Paper One, I study unique aspects of innovation at the industry level from a supply-demand perspective. Since novelty, breadth, and convergence of innovation are all important drivers of the emergence and evolution of industries, I examine how supply-side and demand-side actors contribute to these aspects of innovation over the industry life cycle. Results suggest that both suppliers and users of technology make important contributions to innovation; however, their respective contributions to novelty, breadth, and convergence vary, and this impact changes over the industry life cycle. In Paper Two, I study how firms pursue novel innovation as a main creator of economic value. Firms need both scientific and technological knowledge in their pursuit of novel innovation, yet they often struggle to overcome science-business tensions. Focusing on CEOs and scientists as two main drivers of innovation, I study how CEOs' scientific knowledge and scientists' business knowledge help firms overcome this tension. Results suggest that the likelihood of a firm's novel pursuit is higher when its CEO has scientific knowledge and its scientists do not have business knowledge. In Paper Three, I further examine how high-performing scientists—i.e., star scientists—impact explorative and exploitative innovation. I propose that the stage of employment of individuals and the goal context of firms are important contingencies that shape how stars impact firm-level innovation.
466

Product Defect Discovery and Summarization from Online User Reviews

Zhang, Xuan 29 October 2018
Product defects concern various groups of people, such as customers, manufacturers, and government officials. Thus, defect-related knowledge and information are essential. In keeping with the growth of social media, online forums, and Internet commerce, people post a vast amount of feedback on products, which forms a good source for the automatic acquisition of knowledge about defects. However, given the vast volume of online reviews, automatically identifying critical product defects and summarizing the related information is challenging, even when we target only the negative reviews. Existing defect discovery methods, a form of opinion mining, mainly focus on classifying the type of product issue, which is not enough for users. People expect to see defect information in multiple facets, such as product model, component, and symptom, which are necessary to understand defects and quantify their influence. In addition, people are eager to seek problem resolutions once they spot defects. These challenges cannot be solved by existing aspect-oriented opinion mining models, which seldom consider the defect entities mentioned above. Furthermore, users also want to better capture the semantics of review text and to summarize product defects more accurately in the form of natural language sentences. However, existing text summarization models, including neural networks, can hardly generalize to user review summarization due to the lack of labeled data. In this research, we explore topic models and neural network models for product defect discovery and summarization from user reviews. First, a generative Probabilistic Defect Model (PDM) is proposed, which models the generation process of user reviews from key defect entities including product Model, Component, Symptom, and Incident Date. Using the joint topics in these aspects produced by PDM, people can discover defects represented by those entities. Second, we devise a Product Defect Latent Dirichlet Allocation (PDLDA) model, which describes how negative reviews are generated from defect elements like Component, Symptom, and Resolution; the interdependency between these entities is modeled by PDLDA as well. PDLDA answers not only what the defects look like, but also how to address them using the crowd wisdom hidden in user reviews. Finally, the problem of how to summarize user reviews more accurately, and better capture their semantics, is studied using deep neural networks, especially hierarchical encoder-decoder models. For each of the research topics, comprehensive evaluations are conducted on heterogeneous datasets to demonstrate the effectiveness and accuracy of the proposed models. On the theoretical side, this research contributes to the research streams on product defect discovery, opinion mining, probabilistic graphical models, and deep neural network models. Regarding impact, these techniques will benefit related users such as customers, manufacturers, and government officials. / Ph. D. / Product defects concern various groups of people, such as customers, manufacturers, and government officials. Thus, defect-related knowledge and information are essential. In keeping with the growth of social media, online forums, and Internet commerce, people post a vast amount of feedback on products, which forms a good source for the automatic acquisition of knowledge about defects.
However, given the vast volume of online reviews, automatically identifying critical product defects and summarizing the related information is challenging, even when we target only the negative reviews. People expect to see defect information in multiple facets, such as product model, component, and symptom, which are necessary to understand defects and quantify their influence. In addition, people are eager to seek problem resolutions once they spot defects. Furthermore, users want product defects summarized accurately in the form of natural language sentences. These requirements cannot be satisfied by existing methods, which seldom consider the defect entities mentioned above or generalize poorly to user review summarization. In this research, we develop novel Machine Learning (ML) algorithms for product defect discovery and summarization. First, we study how to identify product defects and their related attributes, such as Product Model, Component, Symptom, and Incident Date. Second, we devise a novel algorithm that can discover product defects and the related Component, Symptom, and Resolution from online user reviews; this method tells not only what the defects look like, but also how to address them using the crowd wisdom hidden in user reviews. Finally, we address the problem of summarizing user reviews in the form of natural language sentences using a paraphrase-style method. On the theoretical side, this research contributes to multiple research areas in Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning. Regarding impact, these techniques will benefit related users such as customers, manufacturers, and government officials.
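As a rough illustration of the topic-model side of this work, the sketch below runs plain LDA over a few invented negative reviews and reads the top words per topic as defect themes; the dissertation's PDM and PDLDA add structured defect entities (Model, Component, Symptom, Resolution) that vanilla LDA does not capture.

```python
# Simplified stand-in for PDM/PDLDA: plain LDA over negative reviews,
# with top words per topic read as candidate defect themes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

negative_reviews = [
    "screen flickers after the update, battery drains overnight",
    "battery swelled and the case cracked near the hinge",
    "hinge broke within a month, replacement part never shipped",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(negative_reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[::-1][:5]]
    print(f"defect theme {i}: {', '.join(top)}")
```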
467

Identifying Job Categories and Required Competencies for Instructional Technologist: A Text Mining and Content Analysis

Chen, Le 06 July 2020
This study applied both human-based and computer-based techniques to conduct a job analysis in the field of instructional technology. The primary research focus of the job analysis was to examine the efficacy of text mining by comparing text mining results with content analysis results. This agenda was fulfilled by using job announcement data as an example to determine essential job categories and required competencies. In phase one, a job title analysis was conducted: different categorizing strategies were explored, and primary job categories were reported. In phase two, a human-based content analysis was conducted, which identified 20 competencies in the knowledge domain, 22 in the ability domain, 23 in the skill domain, and 13 other competencies. In phase three, text mining (topic modeling) was applied to the entire data set, resulting in 50 themes; from these, the researcher selected the 20 themes most relevant to instructional technology competencies. The findings of the two research techniques differ in terms of granularity, comprehensibility, and objectivity. Based on the evidence revealed in this study, the author recommends that future studies explore ways to combine the two techniques to complement one another. / Doctor of Philosophy / According to Kimmons and Veletsianos (2018), text mining has not been widely applied in the field of instructional technology. This study provides an example of using text mining techniques to discover a set of required job competencies, which can help researchers unfamiliar with text mining methodology better understand its potential and limitations. The primary research focus was to examine the efficacy of text mining by comparing text mining results with content analysis results. Both content analysis and text mining procedures were applied to the same data set to extract job competencies. Similarities and differences between the results were compared, and the pros and cons of each methodology were discussed.
468

Product Design For Repairability: Identifying Failure Modes With Topic Modeling And Designing For Electronic Waste Reuse

Franz, Claire J 01 June 2024
Design for repairability is imperative to making products that last long enough to justify the resources they consume and the pollution they generate. While design for repairability has been gaining steady momentum, especially with recent advances in Right to Repair legislation, there is still work to be done: there are gaps both in the tools available to repair-conscious designers and in the products coming onto store shelves. This thesis aims to help set sail in the right direction on both fronts. The research explores the use of topic modeling (a natural language processing technique) to extract repairability design insights from online customer feedback. This could help repair-conscious designers identify areas for redesign to improve product repairability and/or prioritize components to provide as available replacement parts. Designers could also apply this methodology early in their design process by examining the failure modes of similar existing products. Non-Negative Matrix Factorization (NMF) and BERTopic approaches are used to analyze 5,000 Amazon reviews of standalone computer keyboards to assess device failure modes. The proposed method identifies several failure modes for keyboards, including keys sticking, legs breaking, keyboards disconnecting, keyboard bases wobbling, and errors while typing. An accelerated product design process for a keyboard is presented to showcase an application of the topic modeling results, and to demonstrate the potential of a "piggybacking" design strategy that reuses electronic components. This work indicates that topic modeling is a promising approach for obtaining repairability-related design leads and demonstrates the efficacy of product design to reduce e-waste.
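A minimal sketch of the NMF pass the abstract names: TF-IDF features over review text, an NMF factorization, and top terms per component read as candidate failure modes. The four toy reviews and the settings are illustrative; the thesis analyzed 5,000 Amazon keyboard reviews and also ran a BERTopic pass not shown here.

```python
# Sketch: extract candidate failure modes from review text with NMF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reviews = [
    "keys started sticking after two weeks",
    "one of the legs snapped off the first day",
    "keyboard keeps disconnecting from bluetooth",
    "base wobbles on the desk, keys double type",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

nmf = NMF(n_components=3, init="nndsvd", random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, comp in enumerate(nmf.components_):
    top = [terms[j] for j in comp.argsort()[::-1][:4]]
    print(f"failure mode {i}: {' / '.join(top)}")
```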
469

TWO ESSAYS ON SERVICE ROBOTS AND THEIR EFFECTS ON HOTEL CUSTOMER EXPERIENCE

Hu, Xingbao (Simon) January 2020
Artificial intelligence (AI) and robotics are revolutionizing the traditional paradigm of business operations and transforming consumers’ experiences by promoting human–robot interaction in tourism and hospitality. Nonetheless, research related to customers’ experiences with robot-related services in this industry remains scant. This study thus seeks to investigate hotel customers’ experiences with service robots and how robot-based experiences shape customers’ satisfaction with hotel stays. Specifically, three research questions are addressed: (a) What are hotel customers’ primary concerns about robots and robot-related services? (b) Do hotel customers’ experiences with robotic services shape guests’ overall satisfaction? (c) How do service robots’ attributes affect guests’ forgiveness of robots’ service failure? This dissertation consists of three chapters. Chapter 1 introduces the overall research background. Chapter 2 answers the first two research questions by combining text mining and regression analyses; Chapter 3 addresses the third question by introducing social cognition into this investigation and performing an experiment. Overall, sentiment analyses uncovered customers’ generally positive experiences with robot services. Machine learning via latent Dirichlet allocation modeling revealed three key topics underlying hotel guests’ robot-related reviews—robots’ room delivery services, entertainment and catering services, and front office services. Regression analyses demonstrated that hotel robots’ attributes (e.g., mechanical vs. AI-assistant robots) and robot reviews’ characteristics (e.g., sentiment scores) can influence customers’ overall satisfaction with hotels. Finally, the experimental study verified uncanny valley theory and the existence of social cognition related to service robots (i.e., warmth and competence) by pointing out the interactive effects of robots’ anthropomorphism in terms of their facial expressions, voices, and physical appearance. These findings collectively yield a set of theoretical implications for researchers along with practical implications for hotels and robot developers. / Tourism and Sport
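As a small illustration of the sentiment-scoring step such review analyses rely on, the sketch below computes VADER compound polarity for invented robot-related review sentences; the dissertation's own pipeline and lexicon choices are not specified in the abstract, so treat this as an assumption.

```python
# Sketch: polarity scores for robot-related review sentences, the kind
# of sentiment feature fed into satisfaction regressions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

robot_reviews = [
    "The delivery robot brought our towels in minutes, the kids loved it.",
    "The robot concierge misunderstood every request we made.",
]

for text in robot_reviews:
    score = sia.polarity_scores(text)["compound"]   # -1 (negative) .. +1 (positive)
    print(f"{score:+.3f}  {text}")
```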
470

Topic Model-based Mass Spectrometric Data Analysis in Cancer Biomarker Discovery Studies

Wang, Minkun 14 June 2017
Identification of disease-related alterations in molecular and cellular mechanisms may reveal useful biomarkers for human diseases, including cancers. High-throughput omic technologies for identifying and quantifying multi-level biological molecules (e.g., proteins, glycans, and metabolites) have facilitated advances in biological research in recent years. Liquid (or gas) chromatography coupled with mass spectrometry (LC/GC-MS) has become an essential tool in such large-scale omic studies. Appropriate LC/GC-MS data preprocessing pipelines are needed to detect true differences between biological groups. Challenges exist in several aspects of MS data analysis. Specifically for biomarker discovery, one fundamental challenge in the quantitation of biomolecules arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based omic studies. Purification of mass spectrometric data is highly desirable prior to subsequent differential analysis. In this dissertation, we mainly target the purification problem through probabilistic modeling. We propose an intensity-level purification model (IPM) to computationally purify LC/GC-MS based cancerous data in biomarker discovery studies. We further extend IPM to a scan-level purification model (SPM) by considering information from the extracted ion chromatogram (EIC, a scan-level feature). Both IPM and SPM belong to the category of topic modeling approaches, which aim to identify the underlying "topics" (sources) and their mixture proportions in composing heterogeneous data. Additionally, a denoise deconvolution model (DDM) is proposed to capture the noise signals in samples based on purified profiles. Variational expectation-maximization (VEM) and Markov chain Monte Carlo (MCMC) methods are used to draw inference on the latent variables and estimate the model parameters. Beyond purification, other research topics related to mass spectrometric data analysis for cancer biomarker discovery are also investigated in this dissertation. Chapter 3 discusses methods developed for the differential analysis of LC/GC-MS based omic data, specifically the preprocessing of LC-MS profiled glycan data. Chapter 4 presents the assumptions and inference details of IPM, SPM, and DDM. A latent Dirichlet allocation (LDA) core is used to model the heterogeneous cancerous data as mixtures of topics consisting of a sample-specific pure cancerous source and non-cancerous contaminants. We evaluated the capability of the proposed models to capture the mixture proportions of contaminants and cancer profiles in LC-MS based serum and tissue proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis. Chapter 5 elaborates on these applications in cancer biomarker discovery, including typical single-omic analyses and integrative analysis of multi-omic studies. / Ph. D. / This dissertation documents the methodology and outputs for computational deconvolution of heterogeneous omics data generated from biospecimens of interest. These omics data convey qualitative and quantitative information about biomolecules (e.g., glycans, proteins, and metabolites) profiled by instruments such as liquid (or gas) chromatography coupled with mass spectrometry (LC/GC-MS).
In biomarker discovery, we aim to find significant differences in the intensities of biomolecules between two phenotype groups, so that the biomarkers can be used as clinical indicators for early-stage diagnosis. However, the purity of collected samples constitutes a fundamental challenge to differential analysis. Instead of experimental methods, which are costly and time-consuming, we treat the purification task as a topic modeling procedure, in which each observed biomolecular profile is assumed to be a mixture of a hidden pure source and unwanted contaminants. The developed models output the estimated mixture proportions as well as the underlying "topics". With purification applied at different levels, improved discrimination power of candidate biomarkers and more biologically meaningful pathways were discovered in LC/GC-MS based multi-omic studies of liver cancer. This work originates from the broader scope of probabilistic generative modeling, where rational assumptions are made to characterize the generation process of the observations. The developed models therefore have great potential in applications beyond the heterogeneous data purification discussed here. A good example is uncovering the relationship of the human gut microbiome with a host's phenotypes of interest (e.g., diseases like type II diabetes), where similar challenges exist in inferring the underlying intestinal flora distribution and estimating mixture proportions. This dissertation also covers related topics in data preprocessing and integration, with the consistent goal of improving biomarker discovery. In summary, this research helps address the sample heterogeneity issue observed in LC/GC-MS based cancer biomarker discovery studies and sheds light on computational deconvolution of mixtures, which can be generalized to other domains of interest.
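To ground the deconvolution idea, here is a minimal baseline, not IPM or SPM themselves (those are topic models fit with VEM and MCMC): non-negative least squares recovering the mixture proportion of a cancerous source versus a contaminant in an observed intensity profile, under the simplifying assumption that reference profiles for both sources are known.

```python
# Baseline deconvolution under known reference profiles; the dissertation's
# models instead infer the sources themselves. All data here are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_features = 200                               # e.g., peptide/metabolite peaks

tumor = rng.gamma(2.0, 1.0, n_features)        # pure cancerous profile
contaminant = rng.gamma(2.0, 1.0, n_features)  # non-cancerous background

# synthetic observed sample: 70% tumor, 30% contaminant, plus noise
observed = 0.7 * tumor + 0.3 * contaminant + rng.normal(0, 0.05, n_features)

A = np.column_stack([tumor, contaminant])
weights, _ = nnls(A, observed)                 # non-negative least squares
proportions = weights / weights.sum()
print(f"estimated tumor fraction: {proportions[0]:.2f}")   # close to 0.70
```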
