  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Enhancing User Search Experience in Digital Libraries with Rotated Latent Semantic Indexing

Polyakov, Serhiy 08 1900 (has links)
This study investigates a semi-automatic method for creating topical labels that represent the topical concepts in information objects. The method is called rotated latent semantic indexing (rLSI). rLSI has found application in text mining but has not been used for topical label generation in digital libraries (DLs). The present study proposes a theoretical model and an evaluation framework based on the LSA theory of meaning and investigates rLSI in a DL environment. The proposed evaluation framework for rLSI topical labels focuses on human information-search behavior and satisfaction measures. Experimental systems utilizing these topical labels were built to evaluate user satisfaction with the search process. A new instrument was developed for this study, and the experiment showed high reliability of the measurement scales and confirmed their construct validity. Data was collected through information search tasks performed by 122 participants using two experimental systems. A quantitative method of analysis, partial least squares structural equation modeling (PLS-SEM), was used to test a set of research hypotheses and to answer the research questions. The results showed a non-significant indirect effect of topical label type on both guidance and satisfaction. The conclusion of the study is that topical labels generated using rLSI provide the same levels of alignment, guidance, and satisfaction with the search process as topical labels created by professional indexers using best practices.
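The core mechanics of rotated LSI — latent dimensions from a truncated SVD, followed by a rotation that sharpens each dimension's association with a few terms, whose top terms then serve as candidate labels — can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the toy term-document matrix, the choice of varimax as the rotation criterion, and the single-term labeling step are all assumptions.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
    """Varimax rotation of a loadings matrix (one common rotation criterion)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.diag(L.T @ L)))
        )
        R = u @ vt  # orthogonal update, so total variance is preserved
        if np.sum(s) < var * (1 + tol):
            break
        var = np.sum(s)
    return loadings @ R

# Tiny illustrative term-document matrix (rows: terms, columns: documents).
terms = ["library", "search", "user", "index", "label", "topic"]
X = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [1, 2, 0, 1],
    [0, 3, 2, 0],
    [0, 1, 3, 1],
    [0, 0, 2, 3],
], dtype=float)

# Plain LSI: truncated SVD gives term loadings on k latent dimensions.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_loadings = U[:, :k] * S[:k]

# Rotated LSI: rotate the loadings, then read off top terms as candidate labels.
rotated = varimax(term_loadings)
labels = [terms[np.argmax(np.abs(rotated[:, j]))] for j in range(k)]
print(labels)
```

Because the rotation is orthogonal, it redistributes variance across dimensions without changing the fit — which is what makes the rotated dimensions easier to label while remaining faithful to the original LSA space.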
582

La finitude et le temps en mandarin

Chu, LongJing 12 1900 (has links)
Since Mandarin is a language without verbal morphology, it is difficult to demonstrate the existence of the T node and the opposition between finiteness and non-finiteness. In this thesis, we analyze this difficulty from the perspective of the semantics-syntax interface. Following Klein (1998, 2000), finiteness carries two semantic elements: topic time (TT) and assertion (AST). In Mandarin, aspect particles encode TT and AST: declarative sentences containing an aspect particle are finite, and declarative sentences without one are non-finite. Based on the articulation of the left periphery (Rizzi 1997), as revised by Paul (2015) for Mandarin, we demonstrate that finite declarative complement clauses project up to TopicP, while non-finite declarative complement clauses project only to TP. Tense and finiteness are not merged under the same projection in Mandarin. Moreover, finiteness and non-finiteness are better analyzed in terms of a structural distinction in the case of Mandarin.
583

Generating Thematic Maps from Hyperspectral Imagery Using a Bag-of-Materials Model

Park, Kyoung Jin 25 July 2013 (has links)
No description available.
584

Semantic Overflow of Powerful Feelings: Digital Humanities Approaches and the 1805 and 1850 Versions of Wordsworth's Prelude

Hansen, Dylan 25 April 2023 (has links) (PDF)
Scholars have repeatedly contrasted the 1805 and 1850 versions of William Wordsworth’s The Prelude since the discovery and publication of the former by Ernest De Sélincourt in 1926. Points of contention have included the 1850 poem’s grammatical revisions and shifts toward greater political and religious orthodoxy. While these discussions have waned in recent decades, digital humanities tools allow us to revisit oft-debated texts through new lenses. Wanting to examine scholarly claims about The Prelude from a digital humanities perspective, I collaborated with Dr. Billy Hall to enter both versions of the poem into a data analysis and visualization tool, which displayed the results in topic-modeling outputs and most-frequent-words lists. The 1805 and 1850 topic modeling outputs were essentially identical to one another, suggesting either that scholars have overstated differences between the versions or that the themes of the poem may have evolved in ways not easily captured by my digital humanities methods. On the other hand, the most-frequent-words lists revealed some notable discrepancies between the two Preludes. One set of lists included articles, conjunctions, pronouns, and linking verbs (otherwise known as “stop words”), demonstrating, for instance, that the word “was” appeared with significantly less frequency in the 1850 Prelude. I found that other linking verbs also decreased in the 1850 Prelude, and this discovery prompted me to conduct a stylistic analysis of said verbs. Knowing that a raw statistical count of linking verbs in both texts would reveal only an incomplete portrait of Wordsworth’s shifting verb usage, I divided the verb revisions into two primary categories: replacements of linking verbs with dynamic verbs and descriptors, and removals of lines containing linking verbs. 
While scholars have previously highlighted the replacement of linking verbs with dynamic verbs and descriptors in the 1850 Prelude, these revisions only account for 30% of the 1850 linking verb revisions. In fact, the majority of linking verb revisions consist of removed 1805 lines. Many of these lines are declarative statements—the removal of which suggests that Wordsworth preferred, in some cases, a less prescriptive approach in the 1850 Prelude.
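The frequency comparison described above — counting every word, stop words deliberately included, in both versions of a text and looking for words whose counts drop — can be sketched as follows. The two snippets of verse are invented stand-ins, not Wordsworth's actual lines, and the tokenizer is a simplification of what a real DH tool would use.

```python
from collections import Counter
import re

def word_counts(text):
    """Lowercase word frequencies, deliberately keeping stop words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Invented stand-ins for corresponding passages of the two versions.
v1805 = "the mind was as a mansion and the heart was full"
v1850 = "the mind a mansion stands and the heart grows full"

c1805, c1850 = word_counts(v1805), word_counts(v1850)

# Words whose counts dropped between versions (e.g. the linking verb "was").
drops = {w: c1805[w] - c1850[w] for w in c1805 if c1805[w] > c1850[w]}
print(drops)
```

A raw count like this only flags candidates; as the abstract notes, classifying each revision (replacement vs. removal of the whole line) still requires reading the variants side by side.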
585

Aligning Instructional Practices with Content Standards in Junior Secondary Schools in Indonesia

Suwarno, Rumtini 30 March 2011 (has links) (PDF)
This study examined the degree of alignment between instructional practices and national curriculum standards, which may vary as a function of teacher characteristics. Using teachers' self-reports about their experiences teaching the national curriculum standards, the study explored three aspects of alignment: (1) topic coverage, (2) how difficult teachers find the topics to teach, and (3) how difficult students find the topics to learn. Topic coverage is measured as the percentage of the national curriculum standards topics taught during the 2008-2009 school year, while the difficulty of teaching and of learning are each rated on a scale from 1 (very easy) to 4 (very difficult). I used mixed multilevel regression analyses to examine the relationships between alignment and teacher characteristics. The study involved 501 junior secondary school teachers from three western provinces in Indonesia (Lampung, Jakarta, and East Java) who teach the nationally assessed subjects: Indonesian, English, science, and mathematics. The findings showed that the majority of teachers taught 100% of the topics outlined in the national curriculum standards. Teachers generally found the topics easy to teach; however, students had some difficulty understanding them. The relationships between alignment and teacher characteristics varied. Theoretically, this research makes two contributions. First, given the scarcity of research on curriculum standards and classroom instruction as mediators of student competencies, the findings make an important contribution to current research on standards-based education. Second, predicting alignment as a function of teacher characteristics contributes to the theoretical discussion of teacher characteristics.
As a practical implication, students' low level of understanding of the content required by the national standards is a problem that warrants serious attention from government at all levels. There is also an urgent need to identify the specific topics that teachers think are difficult for students to understand.
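The two alignment measures described above reduce to simple arithmetic: coverage is the share of the standard's topics a teacher reports teaching, and difficulty is a mean over per-topic ratings on the 1-to-4 scale. The topic names and ratings below are invented for illustration; the study's actual instrument and topic lists are not reproduced here.

```python
# Hypothetical topic lists: what the standard requires vs. what one teacher taught.
standard_topics = {"fractions", "geometry", "algebra", "statistics"}
taught_topics = {"fractions", "geometry", "algebra"}

# Topic coverage: percentage of the standard's topics actually taught.
coverage = len(taught_topics & standard_topics) / len(standard_topics) * 100

# Difficulty: mean of per-topic ratings, 1 (very easy) .. 4 (very difficult).
teach_difficulty = {"fractions": 1, "geometry": 2, "algebra": 2}
mean_difficulty = sum(teach_difficulty.values()) / len(teach_difficulty)

print(coverage, mean_difficulty)
```

In the study itself, such per-teacher scores would then enter the multilevel regression as outcomes predicted by teacher characteristics.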
586

Bayesian Text Analytics for Document Collections

Walker, Daniel David 15 November 2012 (has links) (PDF)
Modern document collections are too large to annotate and curate manually. As increasingly large amounts of data become available, historians, librarians and other scholars increasingly need to rely on automated systems to efficiently and accurately analyze the contents of their collections and to find new and interesting patterns therein. Modern techniques in Bayesian text analytics are becoming widespread and have the potential to revolutionize the way that research is conducted. Much work has been done in the document modeling community towards this end, though most of it is focused on modern, relatively clean text data. We present research for improved modeling of document collections that may contain textual noise or that may include real-valued metadata associated with the documents. This class of documents includes many historical document collections. Indeed, our specific motivation for this work is to help improve the modeling of historical documents, which are often noisy and/or have historical context represented by metadata. Many historical documents are digitized by means of Optical Character Recognition (OCR) from document images of old and degraded originals. Historical documents also often include associated metadata, such as timestamps, which can be incorporated in an analysis of their topical content. Many techniques, such as topic models, have been developed to automatically discover patterns of meaning in large collections of text. While these methods are useful, they can break down in the presence of OCR errors. We show the extent to which this performance breakdown occurs. The specific types of analyses covered in this dissertation are document clustering, feature selection, unsupervised and supervised topic modeling for documents with and without OCR errors, and a new supervised topic model that uses Bayesian nonparametrics to improve the modeling of document metadata.
We present results in each of these areas, with an emphasis on studying the effects of noise on the performance of the algorithms and on modeling the metadata associated with the documents. In this research we effectively: improve the state of the art in both document clustering and topic modeling; introduce a useful synthetic dataset for historical document researchers; and present analyses that empirically show how existing algorithms break down in the presence of OCR errors.
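A synthetic noisy corpus of the kind mentioned above can be built by injecting character-level errors into clean text at a controlled rate. The scheme below — random replacement of alphabetic characters — is a deliberate simplification of real OCR error patterns (which are biased toward visually confusable glyphs) and is not the dissertation's actual dataset-generation procedure.

```python
import random
import string

def add_ocr_noise(text, error_rate, seed=0):
    """Replace a fraction of alphabetic characters to mimic OCR corruption."""
    rng = random.Random(seed)  # seeded for reproducible corpora
    chars = list(text)
    for i, ch in enumerate(chars):
        if ch.isalpha() and rng.random() < error_rate:
            chars[i] = rng.choice(string.ascii_lowercase)
    return "".join(chars)

clean = "modern document collections are too large to curate manually"
noisy = add_ocr_noise(clean, error_rate=0.2)
print(noisy)
```

Sweeping `error_rate` over a corpus and re-running clustering or topic modeling at each level is one straightforward way to chart where an algorithm's performance breaks down.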
587

Anemone: a Visual Semantic Graph

Ficapal Vila, Joan January 2019 (has links)
Semantic graphs have been used for optimizing various natural language processing tasks as well as for augmenting search and information retrieval. In most cases these semantic graphs have been constructed through supervised machine learning methodologies that depend on manually curated resources such as Wikipedia. This thesis consists of two parts. In the first part, we explore the possibility of automatically populating a semantic graph from an ad hoc data set of 50 000 newspaper articles in a completely unsupervised manner. The utility of the visual representation of the resulting graph is tested on 14 human subjects performing basic information retrieval tasks on a subset of the articles. Our study shows that, for entity finding and document similarity, our feature engineering is viable and the map produced by our artifact is visually useful. In the second part, we explore the possibility of identifying entity relationships in an unsupervised fashion by employing abstractive deep learning methods for sentence reformulation. The reformulated sentence structures are qualitatively assessed for grammatical correctness and meaningfulness as perceived by 14 test subjects. We evaluate the outcomes of this second part negatively: they were not good enough to support any definitive conclusion, but they have opened new doors to explore.
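One simple unsupervised way to populate a semantic graph from a document collection is to link entities that co-occur in the same article, weighting edges by co-occurrence counts. The mini "articles" and entity lists below are invented stand-ins for the 50 000-article data set, and the co-occurrence heuristic is only one plausible choice, not necessarily the thesis's actual feature engineering.

```python
from collections import Counter
from itertools import combinations

# Entities already extracted per article (extraction itself is out of scope here).
articles = [
    ["riksbank", "stockholm", "inflation"],
    ["riksbank", "inflation", "bonds"],
    ["stockholm", "museum"],
]

# Edge weight = number of articles in which a pair of entities co-occurs.
edges = Counter()
for entities in articles:
    for a, b in combinations(sorted(set(entities)), 2):
        edges[(a, b)] += 1

print(edges.most_common(3))
```

The resulting weighted edge list is exactly the input a force-directed layout needs to render a visual map, with heavier edges pulling related entities closer together.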
588

Topic classification of Monetary Policy Minutes from the Swedish Central Bank / Ämnesklassificering av Riksbankens penningpolitiska mötesprotokoll

Cedervall, Andreas, Jansson, Daniel January 2018 (has links)
Over the last couple of years, machine learning has seen a very high increase in usage. Many previously manual tasks are becoming automated, and it stands to reason that this development will continue at an incredible pace. This paper builds on work in topic classification and attempts to provide a baseline for analysing the Swedish Central Bank's minutes and gathering information using both Latent Dirichlet Allocation and a simple neural network. Topic classification is performed on Monetary Policy Minutes from 2004 to 2018 to find how the distribution of topics changes over time. The results are compared to empirical evidence that would confirm trends. Finally, a business perspective on the work is analysed to reveal what the benefits of implementing this type of technique could be. The results of the two methods differ: the neural network shows larger changes in topic distributions than Latent Dirichlet Allocation, and it also yielded more trends that correlated with other observations, such as the start of bond purchasing by the Swedish Central Bank. Thus, our results indicate that a neural network would perform better than Latent Dirichlet Allocation when analysing Swedish Monetary Policy Minutes.
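The core comparison above — how topic mass shifts across meeting years — amounts to averaging per-document topic distributions within each year, whichever model (LDA or a neural classifier) produced them. The topics and distributions below are made up for illustration; they are not the paper's actual model outputs.

```python
from collections import defaultdict

# (year, topic distribution) per minutes document, as output by either model.
minutes = [
    (2004, {"inflation": 0.7, "bonds": 0.1, "housing": 0.2}),
    (2004, {"inflation": 0.6, "bonds": 0.1, "housing": 0.3}),
    (2015, {"inflation": 0.3, "bonds": 0.5, "housing": 0.2}),
]

yearly = defaultdict(lambda: defaultdict(float))
counts = defaultdict(int)
for year, dist in minutes:
    counts[year] += 1
    for topic, weight in dist.items():
        yearly[year][topic] += weight

# Average topic share per year reveals shifts, e.g. rising "bonds" discussion.
trends = {y: {t: w / counts[y] for t, w in d.items()} for y, d in yearly.items()}
print(trends)
```

Plotting these yearly averages per topic gives the trend lines that can then be checked against external events such as the start of the bank's bond purchases.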
589

Authentic texts or adapted texts - That is the question! The use of authentic and adapted texts in the study of English in two Swedish upper secondary schools and a study of student and teacher attitudes towards these texts

Daskalos, Konstantinos, Jellum Ling, Jeppe January 2006 (has links)
Abstract
Daskalos, Konstantinos & Jellum Ling, Jeppe (2005). Authentic texts or adapted texts – That is the question! The use of authentic and adapted texts in the study of English in two Swedish upper secondary schools and a study of student and teacher attitudes towards these texts. Skolutveckling och ledarskap, Lärarutbildningen 60 p, Malmö Högskola.
The aim of this paper is to find out which attitudes teachers and students have towards authentic and adapted texts used in the teaching of English in two Swedish upper secondary schools. Furthermore, the paper aims to demonstrate the importance of proper text selection in relation to student motivation. To achieve this, a survey was conducted with second-year students in two different schools; on top of this, several interviews were conducted with students, as well as an interview with a teacher. This was done to demonstrate the different attitudes towards the textbook and authentic texts and to illustrate the importance of choosing topics that students can relate to. The results show that students preferred to read authentic texts, which provided them with interesting topics. The teacher also preferred to use authentic texts and agreed that authentic texts usually created an active classroom, but pointed out that substituting the textbook entirely with authentic material was unrealistic. Therefore, a combination of the two types of text would be preferable.
590

The Death of Mrs. Smith

Eason, Martin P. 01 September 2005 (has links)
No description available.
