61

Text grammar and text processing: a cognitivist approach

Nyns, Roland January 1989 (has links)
Doctorate in Philosophy and Letters / info:eu-repo/semantics/nonPublished
62

Bridging Text Mining and Bayesian Networks

Raghuram, Sandeep Mudabail 09 March 2011 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / After an initial network is constructed from expert knowledge of the domain, a Bayesian network needs to be updated as new data are observed, and literature mining is an important source of such data. In this work, we explore what kind of data needs to be extracted in order to update Bayesian networks, which existing technologies can be useful in achieving some of these goals, and what research is required to meet the remaining requirements. This thesis specifically deals with utilizing causal associations and experimental results obtained from literature mining. These associations and numerical results cannot be integrated with the Bayesian network directly: the source of the literature and the perceived quality of the research need to be factored into the integration process, just as a human reading the literature would factor them in. This thesis presents a general methodology for updating a Bayesian network with mined data. The methodology comprises solutions to some of the issues surrounding the task of integrating causal associations with a Bayesian network, and it demonstrates the idea with a semi-automated software system.
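As a concrete illustration of weighting mined evidence by source quality, the minimal sketch below updates one node's conditional probability table, kept as Dirichlet pseudo-counts, with an observation discounted by a quality score. The node semantics, the quality values, and the update rule are assumptions for illustration, not the system built in the thesis.

```python
# Minimal sketch: quality-weighted evidence integration for one
# Bayesian-network node with a binary parent and binary child.

class NodeCPT:
    """Conditional probability table kept as Dirichlet pseudo-counts."""

    def __init__(self, prior=1.0):
        # counts[parent_state][child_state], initialised with a flat prior
        self.counts = {p: {c: prior for c in (True, False)}
                       for p in (True, False)}

    def update(self, parent_state, child_state, quality):
        # Weight the mined observation by the perceived quality of its
        # source (0.0 = ignore, 1.0 = trust like a curated observation).
        self.counts[parent_state][child_state] += quality

    def p(self, child_state, parent_state):
        row = self.counts[parent_state]
        return row[child_state] / sum(row.values())


cpt = NodeCPT()
# A mined association "gene_A active -> disease_B present" from a source
# judged to be of middling quality might carry a weight of 0.4 (both the
# association and the weight are hypothetical).
cpt.update(parent_state=True, child_state=True, quality=0.4)
print(cpt.p(True, True))
```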
63

Editor Design in the Context of Control System Simulation

Fadden, Leon 01 January 1986 (has links) (PDF)
Advances in microcomputer display devices and support software during the past decade have made the microcomputer an increasingly popular vehicle for technical education. This is especially apparent in the area of simulation. The pedagogue can provide the student of control theory not merely with block diagrams and differential algebra but with high-resolution color graphic animations supported by mathematical models whose parameters are easily changed through an editor facility. This mode of control system design and behavior study is both faster and more enjoyable for the student, providing greater continuity, concentration, and learning efficiency. This paper describes the simulation of a PID two-tank level control system. The system is at most fourth-order and provides a good introduction to control theory. The model is not unusual, and its nonlinear fourth-order Runge-Kutta solution is straightforward. The simulation itself takes two forms: (1) a graphical animation in which the user watches changing water levels and pipe flows, and (2) a numerical multi-column output of system inputs, state variables, and outputs. Both applications are very user-friendly. A special editor is developed, under which the above applications run. This paper is not a thorough treatment of the control system, which would require a full course; instead, the focus is on the editor and the organization of its Pascal source code. Discussed are a general editor concept and an object-oriented code template to which any mathematical driver and associated simulators may be adapted. The editor's source code is designed to be programmer-friendly, so that the uninitiated programmer may rapidly assimilate the editor's structure and continue its development.
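To make the simulation kernel concrete, here is a minimal sketch of a cascaded two-tank level model under PID control, integrated with a classical fourth-order Runge-Kutta step. The tank parameters, the controller gains, and the use of Python rather than the paper's Pascal are illustrative assumptions.

```python
# Two tanks in cascade; a PID controller sets the inflow to hold the
# level of tank 2 at a setpoint. RK4 integrates the tank dynamics with
# the inflow held constant over each step.
import math

A1, A2 = 2.0, 1.5           # tank cross-sections (m^2), assumed values
a1, a2 = 0.4, 0.3           # outlet coefficients, assumed values
KP, KI, KD = 3.0, 0.5, 0.2  # PID gains, assumed values
SETPOINT, DT = 1.0, 0.05    # target level of tank 2 (m), step size (s)

def deriv(state, q_in):
    h1, h2 = state
    out1 = a1 * math.sqrt(max(h1, 0.0))
    out2 = a2 * math.sqrt(max(h2, 0.0))
    return ((q_in - out1) / A1, (out1 - out2) / A2)

def rk4_step(state, q_in):
    k1 = deriv(state, q_in)
    k2 = deriv([s + 0.5 * DT * k for s, k in zip(state, k1)], q_in)
    k3 = deriv([s + 0.5 * DT * k for s, k in zip(state, k2)], q_in)
    k4 = deriv([s + DT * k for s, k in zip(state, k3)], q_in)
    return [s + DT * (a + 2 * b + 2 * c + d) / 6.0
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state, integral, prev_err = [0.0, 0.0], 0.0, 0.0
for step in range(400):
    err = SETPOINT - state[1]
    integral += err * DT
    q_in = max(KP * err + KI * integral + KD * (err - prev_err) / DT, 0.0)
    prev_err = err
    state = rk4_step(state, q_in)
    if step % 80 == 0:
        print(f"t={step * DT:5.1f}s  h1={state[0]:.3f}  h2={state[1]:.3f}")
```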
64

An empirical study on the effects of a collaboration-aware computer system and several communication media alternatives on product quality and time to complete in a co-authoring environment

Green, Charles A. 12 January 2010 (has links)
A new type of software, termed a "group editor", allows multiple users to create and simultaneously edit a single document; this software has ostensibly been developed to increase efficiency in co-authoring environments where users may not be co-located. However, questions remain as to the effectiveness of this type of communication aid, a member of the "groupware" family of tools used for some types of computer-supported cooperative work. In particular, there has been very little objective data on any group editor, both because of the problems inherent in evaluating writing and because few group editors exist. A method was developed to examine the effect of using a particular group editor, Aspects™ from Group Technologies in Arlington, Va., in conjunction with several communication media, on a simple dyad writing task. Six dyads of college students familiar with journalistic writing were matched on attributes of dominance and writing ability and were asked to write short news articles based on short video clips in a balanced two-factor within-subject analysis of variance design. Six conditions were tested, based on communication media: audio only, audio plus video, and face-to-face, each with and without the availability of the group editor. Constraints inherent in the task attempted to enforce consistent document quality, measured by grammatical quality and content quality (correctness of information and chronological sequencing). Time to complete the articles was used as a measure of efficiency, independent of quality owing to the consistent quality levels of the resulting work. Results from the time data indicated a significant effect of communication media, with the face-to-face conditions taking significantly less time to complete than either of the other media alternatives. Grammatical quality of the written articles was found to be consistently high by way of a computerized grammar checker, and content quality did not differ significantly across conditions. A supplemental Latin square analysis showed additional significant differences in time to complete for trial means (a practice effect) and for team differences. Further, significantly less variance was found in certain conditions with the group editor than in other conditions without it. Subjective data obtained from questionnaires supported these results and additionally showed that subjects significantly preferred trials with the group editor and considered them more productive. The face-to-face conditions may have been more efficient due to the nature of the task or to increased communication structure within dyads arising from practice with the group editor. The significant effect of team differences may have been due to consistent style differences between dyads that affected efficiency. The decreased variability in time to complete in certain group editor conditions may have been due to increased communication structure in those conditions, or perhaps to leveling effects of group writing as opposed to individual writing with team-member aid. These hypotheses need to be tested with further study, and the generalizability of the experimental task conditions and of the results from this particular group editor needs to be established as well; face-to-face conditions clearly resulted in the most efficient performance on this task.
The results obtained concerning the group editor suggest possible efficiency or consistency benefits from the use of group editors by co-authoring persons when face-to-face communication is not practical. Perhaps group editors will become a useful method for surrogate travel for persons with disabilities. / Master of Science
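For readers unfamiliar with the design, the sketch below runs a balanced two-factor within-subject (repeated-measures) analysis of variance on completion times. The fabricated data, the effect sizes, and the use of Python's statsmodels rather than whatever analysis package the author used are all illustrative assumptions.

```python
# Repeated-measures ANOVA: 6 dyads x 3 media x 2 editor conditions,
# one observation per cell, on fabricated time-to-complete data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for dyad in range(1, 7):  # six dyads, as in the study
    for medium in ("audio", "audio+video", "face-to-face"):
        for editor in ("with", "without"):
            # Build in a face-to-face advantage, echoing the reported result.
            base = 20.0 if medium == "face-to-face" else 26.0
            rows.append({"dyad": dyad, "medium": medium, "editor": editor,
                         "minutes": base + rng.normal(0, 2)})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="minutes", subject="dyad",
                 within=["medium", "editor"]).fit()
print(result)
```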
65

Lossless reversible text transforms

Awan, Fauzia Salim 01 July 2001 (has links)
No description available.
66

Statistical modeling for lexical chains for automatic Chinese news story segmentation.

January 2010 (has links)
Chan, Shing Kai. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (leaves 106-114). / Abstracts in English and Chinese. / Contents: 1. Introduction (problem statement; motivation for story segmentation; terminology; thesis goals and organization). 2. Background study (coherence-based approaches: lexical chaining, cosine similarity, language modeling; feature-based approaches: lexical, audio, and video cues; pros, cons, and hybrid approaches). 3. Experimental corpora (the TDT2 and TDT3 multi-language text corpus; data preprocessing: challenges of lexical chain formation on Chinese text, word segmentation for word-unit extraction, part-of-speech tagging for candidate-word extraction). 4. Indication of lexical cohesiveness by lexical chains (choice of word relations for chaining; chaining by connecting repeated lexical elements; indicators of absence and of continuation of cohesiveness). 5. Indication of story boundaries by lexical chains (formal definition of the classification procedure; evaluation of segmentation accuracy; a statistical framework for segmentation based on lexical chaining; post-processing of ratios for boundary identification; comparison of segmentation models). 6. Analysis of lexical chain features as boundary indicators (error analysis; window length in the LRT model; relative importance of each feature set; effect of removing timing information). 7. Conclusions and future work.
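A minimal sketch of the chaining idea named in the contents, building chains by connecting repeated lexical elements and scoring each sentence gap by the chains that end before it and begin after it, is given below. The window size, the stopword list, the whitespace tokenizer, and the scoring rule are illustrative assumptions; the thesis works on segmented Chinese text inside a statistical framework.

```python
# Lexical chains from repeated words; a sentence gap scores higher when
# many chains end just before it and many begin just after it.
WINDOW = 3  # maximum sentence gap allowed inside one chain (assumed)
STOP = {"the", "a", "of", "and", "to", "in"}

def build_chains(sentences):
    """Return one chain (list of sentence indices) per repeated word run."""
    chains = {}
    for i, sentence in enumerate(sentences):
        for word in set(sentence.lower().split()) - STOP:
            spans = chains.setdefault(word, [[i]])
            if i - spans[-1][-1] <= WINDOW:
                if spans[-1][-1] != i:
                    spans[-1].append(i)
            else:
                spans.append([i])  # gap too large: start a new chain
    return [span for spans in chains.values()
            for span in spans if len(span) > 1]

def boundary_scores(sentences):
    """Score the gap before each sentence: chain ends plus chain starts."""
    scores = [0] * len(sentences)
    for chain in build_chains(sentences):
        if chain[-1] + 1 < len(sentences):
            scores[chain[-1] + 1] += 1  # a chain ends right before this gap
        scores[chain[0]] += 1           # a chain starts right after it
    return scores

sentences = ["the fire spread quickly", "firefighters fought the fire",
             "stocks rallied today", "the market closed higher"]
print(boundary_scores(sentences))  # peak at the topic shift (index 2)
```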
67

Optimal erasure protection assignment for scalably compressed data over packet-based networks

Thie, Johnson, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2004 (has links)
This research is concerned with the reliable delivery of scalable compressed data over lossy communication channels. Recent work has proposed several strategies for assigning optimal code redundancies to elements of scalable data that form a linear dependency structure, under the assumption that all source elements are encoded onto a common group of network packets. Given a large data set and small network packets, such schemes require very long channel codes with high computational complexity, yet in networks with high loss, small packets are more desirable than long ones. The first contribution of this thesis is a strategy for optimally assigning elements of the scalable data to clusters of packets, subject to constraints on packet size and code complexity. Given a packet cluster arrangement, the scheme then assigns optimal code redundancies to the source elements, subject to a constraint on transmission length. Experimental results show that the proposed strategy can outperform previous code assignment schemes under the above-mentioned constraints, particularly at high channel loss rates. Secondly, we modify these schemes to accommodate complex structures of dependency. Source elements are allocated to clusters of packets according to their dependency structure, subject to constraints on packet size and channel codeword length; given a packet cluster arrangement, the proposed schemes assign optimal code redundancies to the source elements, subject to a constraint on transmission length. Experimental results demonstrate the superiority of the proposed strategies in correctly modelling the dependency structure. The last contribution of this thesis is a scheme for optimizing protection of scalable data where limited retransmission is possible. Previous work assumed that retransmission is not possible, but for most real-time or interactive applications, retransmission of lost data may be possible up to some limit. In the present work we restrict our attention to streaming sources (e.g., video) where each source element can be transmitted in one or both of two time slots. An optimization algorithm determines the transmission and the level of protection for each source element, using information about the success of earlier transmissions. Experimental results confirm the benefit of limited retransmission.
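As a toy illustration of the kind of optimisation involved, the sketch below greedily spends a parity budget on a linear sequence of source elements that share one packet cluster, placing each extra parity symbol where the marginal gain in expected utility is largest. The i.i.d. erasure model, the element values, and the greedy rule are assumptions for illustration, not the thesis's algorithm.

```python
# Unequal erasure protection over one cluster of N packets: an element
# protected by an MDS code with r parity symbols decodes iff at most r
# packets are lost, and a later element is useful only if every element
# it depends on also decodes.
from math import comb

N, P_LOSS = 10, 0.2             # packets per cluster, erasure prob (assumed)
values = [10.0, 6.0, 3.0, 1.0]  # element utilities in dependency order
BUDGET = 12                     # total parity symbols available (assumed)

def p_decodable(r):
    """P(at most r of N packets are lost)."""
    return sum(comb(N, k) * P_LOSS**k * (1 - P_LOSS)**(N - k)
               for k in range(r + 1))

def expected_utility(redundancy):
    # With a shared cluster, element i decodes only if the weakest
    # protection among elements 1..i suffices.
    total, weakest = 0.0, float("inf")
    for v, r in zip(values, redundancy):
        weakest = min(weakest, r)
        total += v * p_decodable(weakest)
    return total

redundancy = [0] * len(values)
for _ in range(BUDGET):
    # Spend one parity symbol where it buys the most expected utility.
    gains = []
    for i in range(len(values)):
        trial = redundancy.copy()
        trial[i] += 1
        gains.append(expected_utility(trial) - expected_utility(redundancy))
    redundancy[max(range(len(values)), key=lambda i: gains[i])] += 1

print(redundancy, round(expected_utility(redundancy), 3))
```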
68

Partial persistent sequences and their applications to collaborative text document editing and processing

Wu, Qinyi 08 July 2011 (has links)
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). Recent Web 2.0 document editing and processing applications, such as real-time collaborative note-taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of those documents. Current revision control techniques only support coarse-grained shared access and are inefficient at retrieving metadata of changes at the sub-document granularity. In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaboration and to manage metadata of changes at fine granularities for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain the necessary timestamp information to keep track of both inserted and deleted items and use that information to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent and fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history, as well as to efficiently index and retrieve metadata associated with different parts of collaborative text documents. We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing applications: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach this problem in two steps. In the first step, we use PPSs to capture data dependencies between different editing operations and define a consistency model more suitable for real-time collaborative editing systems. In the second step, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its capability to specify three different types of collaboration, as exemplified by the systems RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine. The second problem, fine-grained document provenance management, studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design both disk-economic and computation-efficient techniques to index provenance data for millions of Wikipedia articles. Our approach is disk-economic because we save only a few full versions of a document and keep only the delta changes between those full versions. Our approach is also computation-efficient because we avoid the necessity of parsing the revision history of collaborative documents to retrieve fine-grained metadata.
Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of disk space and achieves at least an order of magnitude speed-up to retrieve fine-grained metadata for documents with thousands of revisions.
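As an illustration of the two features named above (items that are never removed, and persistent ordered identifiers at fine granularities), here is a minimal sketch of such a sequence. The identifier scheme (midpoint fractions between neighbours), the method names, and the logical clock are assumptions for illustration, not the dissertation's implementation.

```python
# Items carry insertion/deletion timestamps and a persistent, ordered
# identifier; deletion only marks an item, so any past state of the
# document can be reconstructed.
import itertools
from dataclasses import dataclass

@dataclass
class Item:
    pid: float           # persistent, ordered identifier
    text: str            # e.g. one word or one sentence
    inserted_at: int
    deleted_at: int | None = None

class PartialPersistentSequence:
    def __init__(self):
        self.items: list[Item] = []   # kept sorted by pid, never shrunk
        self.clock = itertools.count(1)

    def insert(self, index, text):
        """Insert among the visible items; a midpoint pid keeps order."""
        visible = [it for it in self.items if it.deleted_at is None]
        left = visible[index - 1].pid if index > 0 else 0.0
        right = visible[index].pid if index < len(visible) else left + 2.0
        item = Item((left + right) / 2.0, text, next(self.clock))
        self.items.append(item)
        self.items.sort(key=lambda it: it.pid)
        return item.pid

    def delete(self, pid):
        """Mark deleted; the item stays for history reconstruction."""
        for it in self.items:
            if it.pid == pid:
                it.deleted_at = next(self.clock)

    def snapshot(self, t):
        """Document state as of logical time t."""
        return [it.text for it in self.items
                if it.inserted_at <= t
                and (it.deleted_at is None or it.deleted_at > t)]

seq = PartialPersistentSequence()
p1 = seq.insert(0, "Hello")
seq.insert(1, "world")
seq.delete(p1)
print(seq.snapshot(2))    # ['Hello', 'world'] -- state before the delete
print(seq.snapshot(10))   # ['world']
```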
69

Intelligent text recognition system on a heterogeneous multi-core processor cluster: a performance profile and architecture exploration

Ritholtz, Lee. January 2009 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Electrical and Computer Engineering, 2009. / Includes bibliographical references.
70

Intention-driven textual semantic analysis

Li, Jie. January 2008 (has links)
Thesis (M.Comp.Sc.-Res.)--University of Wollongong, 2008. / Typescript. Includes bibliographical references: leaves 84-95.
