About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

Research article introductions in Thai : genre analysis of academic writing /

Jogthong, Chalermsri. January 2001 (has links)
Thesis (Ed. D.)--West Virginia University, 2001. / Title from document title page. Document formatted into pages; contains viii, 106 p. : ill. Includes abstract. Includes bibliographical references (p. 91-96).
12

Content analysis and summarization for video documents.

Lu, Shi. January 2005 (has links)
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Thesis submitted in: December 2004. / Includes bibliographical references (leaves 100-109). / Abstracts in English and Chinese. / Contents: 1. Introduction; 2. Related Work; 3. Greedy Method Based Skim Generation; 4. Video Structure Analysis; 5. Graph Optimization-Based Video Summary Generation; 6. Video Content Annotation and Semantic Video Summarization; 7. Concluding Remarks; A. Notations; Bibliography.
13

Feasibility of using citations as document summaries /

Hand, Jeff. January 2003 (has links)
Thesis (Ph. D.)--Drexel University, 2003. / Includes abstract and vita. Includes bibliographical references (leaves 129-147).
14

Multimodal News Summarization, Tracking and Annotation Incorporating Tensor Analysis of Memes

Tsai, Chun-Yu January 2017 (has links)
We demonstrate four novel multimodal methods for efficient video summarization and comprehensive cross-cultural news video understanding. First, for quick video browsing, we demonstrate a multimedia event recounting system. Based on nine people-oriented design principles, it summarizes YouTube-like videos into short visual segments (8-12 sec) and textual descriptions (fewer than 10 terms). In the 2013 TRECVID Multimedia Event Recounting competition, this system placed first in recognition time efficiency while remaining above average in description accuracy. Second, we demonstrate the summarization of large amounts of online international news video. To understand an international event such as the Ebola outbreak, AirAsia Flight 8501, or the Zika virus comprehensively, we present a novel and efficient constrained tensor factorization algorithm that represents a video archive of multimedia news stories concerning a news event as a sparse tensor of order 4, whose dimensions correspond to extracted visual memes, verbal tags, time periods, and cultures. The iterative algorithm approximately but accurately extracts coherent quad-clusters, each of which represents a significant summary of an important independent aspect of the news event. We give examples of quad-clusters extracted from tensors with at least 10^8 entries derived from international news coverage. We show that the method is fast, can be tuned to favor any subset of its four dimensions, and outperforms three existing methods. Third, noting that the co-occurrence of visual memes and tags in our summarization result is sparse, we show how to model cross-cultural visual meme influence based on normalized PageRank, which more accurately captures the rates at which visual memes are reposted in a specified time period in a specified culture.
Lastly, we establish correspondences between videos and text descriptions in different cultures using reliable visual cues, detect culture-specific tags for visual memes, and then annotate videos in a cultural setting. Starting with a video that has little or no text in one culture (say, the US), we select candidate annotations from the text of another culture (say, China) to annotate the US video. By analyzing the similarity of images annotated by those candidates, we derive a set of appropriate tags from the viewpoint of the other culture. We illustrate culture-based annotation with examples from segments of international news, and we evaluate the generated tags by cross-cultural tag frequency, tag precision, and user studies.
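The central data structure of the abstract above, a sparse order-4 tensor over (visual meme, verbal tag, time period, culture), can be sketched with plain dictionaries. This is an illustrative toy, not the thesis's factorization algorithm: the story tuples, meme names, and `mode_marginal` helper are all invented for the example.

```python
from collections import defaultdict

# Hypothetical mini-archive: each news story contributes counts to a sparse
# order-4 tensor indexed by (visual meme, verbal tag, time period, culture).
stories = [
    ("meme_crowd", "ebola", "2014-W40", "US"),
    ("meme_crowd", "outbreak", "2014-W40", "US"),
    ("meme_clinic", "ebola", "2014-W41", "CN"),
    ("meme_crowd", "ebola", "2014-W41", "CN"),
]

tensor = defaultdict(int)  # sparse storage: only nonzero entries are kept
for meme, tag, week, culture in stories:
    tensor[(meme, tag, week, culture)] += 1

def mode_marginal(tensor, mode):
    """Sum the tensor over all modes except `mode` (0..3)."""
    marginal = defaultdict(int)
    for key, count in tensor.items():
        marginal[key[mode]] += count
    return dict(marginal)

print(mode_marginal(tensor, 0))  # counts per visual meme
print(mode_marginal(tensor, 3))  # counts per culture
```

A real archive would feed millions of such tuples into the same sparse representation before any quad-cluster extraction is attempted.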
15

Medical document management system using XML

Chan, Wai-man, January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 105-107).
16

Automatic text summarization using lexical chains : algorithms and experiments

Kolla, Maheedhar, University of Lethbridge. Faculty of Arts and Science January 2004 (has links)
Summarization is a complex task that requires understanding the document content in order to determine the importance of the text. Lexical cohesion is a method of identifying connected portions of a text based on the relations between its words. Lexical cohesive relations can be represented using lexical chains: sequences of semantically related words spread over the entire text. Lexical chains are used in a variety of Natural Language Processing (NLP) and Information Retrieval (IR) applications. In this thesis, we propose a lexical chaining method that includes glossary relations in the chaining process. These relations enable us to identify topically related concepts (for instance, dormitory and student) and thereby enhance the identification of cohesive ties in the text. We then present methods that use the lexical chains to generate summaries by extracting sentences from the document(s). Headlines are generated by filtering out the portions of the extracted sentences that do not contribute to the meaning of the sentence. The generated headlines can be used in real-world applications to skim through document collections in a digital library. Multi-document summarization is in growing demand with the explosive growth of online news sources. It requires identifying the several themes present in a collection in order to attain good compression and avoid redundancy. In this thesis, we propose methods to group portions of the texts of a document collection into meaningful clusters. Clustering enables us to extract the various themes of the document collection. Sentences from the clusters can then be extracted to generate a summary of the multi-document collection. Clusters can also be used to generate summaries with respect to a given query. We designed a system to compute lexical chains for a given text and use them to extract the salient portions of the document.
The specific tasks considered are headline generation, multi-document summarization, and query-based summarization. Our experimental evaluation shows that effective summaries can be extracted for these tasks. / viii, 80 leaves : ill. ; 29 cm.
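The chaining idea described in the abstract above can be sketched greedily: each word joins the first existing chain containing a related word, otherwise it starts a new chain. This is a minimal sketch; the `RELATED` map is a hand-made stand-in for the WordNet and glossary relations the thesis actually uses, and the greedy policy is a simplification of its chaining method.

```python
# Toy relatedness map standing in for WordNet/glossary relations; the
# dormitory/student pair echoes the example given in the abstract.
RELATED = {
    "dormitory": {"student", "campus"},
    "student": {"dormitory", "university"},
    "university": {"student", "campus"},
    "campus": {"dormitory", "university"},
}

def build_chains(words):
    """Greedily append each word to the first chain that contains a
    related (or identical) word; otherwise start a new chain."""
    chains = []
    for w in words:
        for chain in chains:
            if any(w == c or w in RELATED.get(c, ()) or c in RELATED.get(w, ())
                   for c in chain):
                chain.append(w)
                break
        else:
            chains.append([w])
    return chains

words = ["dormitory", "student", "river", "university", "boat"]
print(build_chains(words))
# "dormitory", "student", and "university" join one chain via the
# relatedness map; "river" and "boat" have no relations here and stay alone.
```

The longest or strongest chains then point at the salient portions of the text from which summary sentences are extracted.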
17

Automatic indexing and abstracting of document texts /

Moens, Marie-Francine. January 2000 (has links)
Univ., Diss.--Leuven, 1999. / Includes bibliographical references (p. [237] - 260) and index.
18

Image manipulation and user-supplied index terms.

Schultz, Leah 05 1900 (has links)
This study investigates the relationships between participants' use of a zoom tool, the terms they supply to describe an image, and the type of image being viewed. Participants were assigned to two groups, one with access to the tool and one without, and were asked to supply terms describing forty images divided into four categories: landscape, portrait, news, and cityscape. The terms provided by participants were categorized according to models proposed in earlier image studies. The findings suggest that there was no significant difference in the number of terms supplied in relation to access to the tool, but participants demonstrated a large variety of uses of the tool. The study shows that there are differences in the level of meaning of the terms supplied in some of the models. The type of image being viewed was related to the number of zooms, and relationships exist between the type of image and both the number of terms supplied and their level of meaning in the models from previous studies. The results provide further insight into how people think about images and how manipulating those images may affect the terms assigned to describe them. Including such tools in search and retrieval scenarios may affect the outcome of the process, and the more collection managers know about how people interact with images, the better they can provide access to the growing amount of pictorial information.
19

ACTION: automatic classification for Chinese documents.

January 1994 (has links)
by Jacqueline Wai-ting Wong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. / Includes bibliographical references (p. 107-109). / Contents: 1. Introduction; 2. Chinese Information Processing (word segmentation, automatic indexing, information retrieval systems); 3. Survey on Classification; 4. System Models and the ACTION Algorithm; 5. Analysis of Results and Validation; 6. Conclusion; A. System Models; B. Classification Rules; C. Node Significance Definitions; References.
20

Machine learning, data mining, and the World Wide Web : design of special-purpose search engines

Kruger, Andries F 04 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: We present DEADLINER, a special-purpose search engine that indexes conference and workshop announcements and extracts a range of academic information from the Web. SVMs provide an efficient and highly accurate mechanism for obtaining relevant web documents. DEADLINER currently extracts speakers, locations (e.g. countries), dates, paper submission (and other) deadlines, topics, program committees, abstracts, and affiliations. Complex and detailed searches are possible on these fields. The niche search engine was constructed using a methodology for the rapid implementation of specialised search engines. Bayesian integration of simple extractors provides this methodology, avoiding complex hand-tuned text extraction methods. The simple extractors exploit loose formatting and keyword conventions. The Bayesian framework further produces a search engine in which each user can control each field's false-alarm rate in an intuitive and rigorous fashion, thus providing easy-to-use metadata. / AFRIKAANSE OPSOMMING (translated): We introduce DEADLINER, a search engine that catalogues conference and workshop announcements and will ultimately monitor and extract a wide range of academic meeting material from the Web. DEADLINER currently recognizes and extracts speakers, locations (e.g. country names), dates, deadlines such as those for submitting academic papers, topics, program committees, overviews or abstracts, and affiliations. A thorough search is possible over and through these fields. The niche search engine was built using a methodology for the rapid construction of specialised search engines. The methodology avoids complex hand-tuning of the text extraction by using Bayesian integration of simple extractors. The extractors then exploit style and convention keywords. The Bayesian framework thereby creates a search engine that allows users to control, in an intuitive and thorough manner, each field's chance of choosing incorrectly.
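The "Bayesian integration of simple extractors" described above can be sketched as a naive-Bayes combination of binary extractor outcomes, with a user-chosen threshold on the posterior controlling a field's false-alarm rate. Everything concrete here is hypothetical: the extractor names, their hit/false-alarm probabilities, and the prior are invented for illustration, not taken from DEADLINER.

```python
import math

# Hypothetical per-extractor stats for one field (say, "submission deadline"):
# (P(extractor fires | field present), P(extractor fires | field absent)).
extractors = {
    "keyword_deadline": (0.90, 0.05),  # the word "deadline" appears
    "date_pattern":     (0.80, 0.20),  # a date-like regex matches nearby
    "bold_formatting":  (0.40, 0.10),  # a loose formatting convention
}

def posterior(fired, prior=0.3):
    """Naive-Bayes combination of independent extractor outcomes:
    accumulate log-odds, then convert back to a probability."""
    log_odds = math.log(prior / (1 - prior))
    for name, (p_hit, p_fa) in extractors.items():
        if fired[name]:
            log_odds += math.log(p_hit / p_fa)
        else:
            log_odds += math.log((1 - p_hit) / (1 - p_fa))
    return 1 / (1 + math.exp(-log_odds))

p = posterior({"keyword_deadline": True, "date_pattern": True,
               "bold_formatting": False})
# Thresholding this posterior per field is what lets a user trade recall
# against that field's false-alarm rate.
print(round(p, 3))
```

Because each extractor contributes an independent log-odds term, adding a new convention-based extractor only requires estimating its two probabilities, which is what makes the rapid-implementation methodology plausible.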
