31

Modflows: Methods for studying and managing mesh editing workflows

Denning, Jonathan D. 16 October 2014 (has links)
At the heart of computer games and computer-generated films lies 3D content creation. A student wanting to learn how to create and edit 3D meshes can quickly find thousands of videos explaining the workflow process. These videos are a popular medium due to a simple setup that minimally interrupts the artist's workflow, but video recordings can be quite challenging to watch. Typical mesh editing sessions involve several hours of work and thousands of operations, which means the video recording can be too long to stay interesting if played back at real-time speed, or lose too much information when sped up. Moreover, regardless of the playback speed, a high-level overview is quite difficult to construct from long editing sessions.

In this thesis, we present our research into methods for studying how artists create and edit polygonal models and for helping manage collaborative work. We start by describing two approaches to automatically summarizing long editing workflows to provide a high-level overview as well as details on demand. The summarized results are presented in an interactive viewer with many features, including overlaying visual annotations to indicate the artist's actions, coloring regions to indicate strength of change, and filtering the workflow to specific 3D regions of interest. We evaluate the robustness of our two approaches by testing against a variety of workflows, conducting a small case study, and asking artists for feedback.

Next we describe a way to construct a plausible and intuitive low-level workflow that turns one of two given meshes into the second by building mesh correspondences. Analogous to text version control tools, we visualize the mesh changes in a two-way, three-way, or sequence diff, and we demonstrate how to merge independent edits of a single original mesh, handling conflicts in a way that preserves the artists' original intentions.

We then discuss methods of comparing multiple artists performing similar mesh editing tasks. We build intra- and inter-correspondences, compute pairwise edit distances, and then visualize the distances as a heat map or by embedding into 3D space. We evaluate our methods by asking a professional artist and instructor for feedback.

Finally, we discuss possible future directions for this research.
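As a rough illustration of the kind of comparison described in the last part of the abstract (not the thesis's own correspondence-based method), the sketch below treats each artist's workflow as a plain sequence of operation labels, computes pairwise Levenshtein edit distances, and plots them as a heat map. The workflows, operation names, and distance metric are invented for illustration.

    import numpy as np
    import matplotlib.pyplot as plt

    def edit_distance(a, b):
        # Levenshtein distance between two sequences of editing operations.
        d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        d[:, 0] = np.arange(len(a) + 1)
        d[0, :] = np.arange(len(b) + 1)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i, j] = min(d[i - 1, j] + 1,         # delete an operation
                              d[i, j - 1] + 1,         # insert an operation
                              d[i - 1, j - 1] + cost)  # substitute an operation
        return d[len(a), len(b)]

    # Hypothetical workflows from three artists performing a similar modeling task.
    workflows = {
        "artist_a": ["extrude", "translate", "extrude", "scale", "loopcut"],
        "artist_b": ["extrude", "scale", "extrude", "translate"],
        "artist_c": ["loopcut", "extrude", "translate", "scale"],
    }

    names = list(workflows)
    n = len(names)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = edit_distance(workflows[names[i]], workflows[names[j]])

    # Heat map of pairwise workflow distances.
    fig, ax = plt.subplots()
    im = ax.imshow(dist, cmap="viridis")
    ax.set_xticks(range(n))
    ax.set_xticklabels(names)
    ax.set_yticks(range(n))
    ax.set_yticklabels(names)
    fig.colorbar(im, label="edit distance")
    plt.show()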
32

Designing an Exploratory Text Analysis Tool for Humanities and Social Sciences Research

Shrikumar, Aditi 28 May 2014 (has links)
This dissertation presents a new tool for exploratory text analysis that attempts to improve the experience of navigating and exploring text and its metadata. The design of the tool was motivated by the unmet need for text analysis tools in the humanities and social sciences. In these fields, it is common for scholars to have hundreds or thousands of text-based source documents of interest from which they extract evidence for complex arguments about society and culture. These collections are difficult to make sense of and navigate. Unlike numerical data, text cannot be condensed, overviewed, and summarized in an automated fashion without losing significant information. And the metadata that accompanies the documents (often from library records) does not capture the varied content of the text within.

Furthermore, adoption of computational tools remains low among these scholars despite such tools having existed for decades. A recent study found that the main culprits were poor user interfaces and lack of communication between tool builders and tool users. We therefore took an iterative, user-centered approach to the development of the tool. From reports of classroom usage, and interviews with scholars, we developed a descriptive model of the text analysis process, and extracted design guidelines for text analysis systems. These guidelines recommend showing overviews of both the content and metadata of a collection, allowing users to separate and compare subsets of data according to combinations of searches and metadata filters, allowing users to collect phrases, sentences, and documents into custom groups for analysis, making the usage context of words easy to see without interrupting the current activity, and making it easy to switch between different visualizations of the same data.

WordSeer, the system we implemented, supports highly flexible slicing and dicing, as well as easier transitions than in other tools between visual analyses, drill-downs, lateral explorations, and overviews of slices in a text collection. The tool uses techniques from computational linguistics, information retrieval, and data visualization.

The contributions of this dissertation are the following. First, the design and source code of WordSeer Version 3, an exploratory text analysis system. Unlike other current systems for this audience, WordSeer 3 supports collecting evidence, isolating and analyzing subsets of a collection, making comparisons based on collected items, and exploring a new idea without interrupting the current task. Second, we give a descriptive model of how humanities and social science scholars undertake exploratory text analysis during the course of their work. We also identify pain points in their current workflows and give suggestions on how systems can address these problems. Third, we describe a set of design principles for text analysis systems aimed at addressing these pain points. For validation, we contribute a set of three real-world examples of scholars using WordSeer 3, which was designed according to those principles. As a measure of success, we show how the scholars were able to conduct analyses yielding otherwise inaccessible results useful to their research.
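The sketch below is a minimal, assumed illustration of two interactions the abstract describes (slicing a collection by metadata and viewing a word's usage context); it is not WordSeer's implementation, and the document fields and sample texts are invented.

    from dataclasses import dataclass

    @dataclass
    class Doc:
        title: str
        year: int
        author: str
        text: str

    docs = [
        Doc("Letter 12", 1854, "A. Smith", "the harvest failed and the town grew quiet"),
        Doc("Letter 40", 1861, "A. Smith", "the town rebuilt its mill after the flood"),
        Doc("Diary 3", 1861, "B. Jones", "rain again and the mill stands idle"),
    ]

    def filter_docs(collection, year=None, author=None):
        # Slice the collection by simple metadata facets.
        out = collection
        if year is not None:
            out = [d for d in out if d.year == year]
        if author is not None:
            out = [d for d in out if d.author == author]
        return out

    def concordance(collection, word, window=3):
        # Keyword-in-context lines: `window` words on each side of each match.
        lines = []
        for d in collection:
            tokens = d.text.split()
            for i, tok in enumerate(tokens):
                if tok.lower() == word.lower():
                    left = " ".join(tokens[max(0, i - window):i])
                    right = " ".join(tokens[i + 1:i + 1 + window])
                    lines.append(f"{d.title:<10} {left:>25} [{tok}] {right}")
        return lines

    # Example: restrict to documents from 1861, then view usages of "mill".
    for line in concordance(filter_docs(docs, year=1861), "mill"):
        print(line)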
33

An investigation of data privacy and utility using machine learning as a gauge

Mivule, Kato 18 June 2014 (has links)
The purpose of this investigation is to study and pursue a user-defined approach to preserving data privacy while maintaining an acceptable level of data utility, using machine learning classification techniques as a gauge in the generation of synthetic data sets. This dissertation deals with data privacy, data utility, machine learning classification, and the generation of synthetic data sets. Hence, data privacy and utility preservation using machine learning classification as a gauge is the central focus of this study. Many organizations that transact in large amounts of data have to comply with state, federal, and international laws to guarantee that the privacy of individuals and other sensitive data is not compromised. Yet at some point during the data privacy process, data loses its utility, a measure of how useful a privatized dataset is to the user of that dataset. Data privacy researchers have documented that attaining an optimal balance between data privacy and utility is an NP-hard challenge, thus an intractable problem. Therefore, we propose the classification error gauge (x-CEG) approach, a data utility quantification concept that employs machine learning classification techniques to gauge data utility based on the classification error. In the initial phase of this proposed approach, a data privacy algorithm such as differential privacy, Gaussian noise addition, generalization, or k-anonymity is applied to a dataset for confidentiality, generating a privatized synthetic data set. The privatized synthetic data set is then passed through a machine learning classifier, after which the classification error is measured. If the classification error is lower than or equal to a set threshold, then better utility might be achieved; otherwise, the data privacy parameters are adjusted and the refined synthetic data set is sent to the machine learning classifier again, and the process repeats until the error threshold is reached. Additionally, this study presents the Comparative x-CEG concept, in which a privatized synthetic data set is passed through a series of classifiers, each of which returns a classification error, and the classifier with the lowest classification error is chosen after parameter adjustments, an indication of better data utility. Preliminary results from this investigation show that fine-tuning parameters in data privacy procedures, for example in the case of differential privacy, or increasing the number of weak learners in an ensemble classifier, might lead to lower classification error and thus better utility. Furthermore, this study explores the application of this approach by employing signal processing techniques in the generation of privatized synthetic data sets and improving data utility. This dissertation presents theoretical and empirical work examining various data privacy and utility methodologies using machine learning classification as a gauge. Similarly, this study presents a resourceful approach to the generation of privatized synthetic data sets, and an innovative conceptual framework for the data privacy engineering process.
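A minimal sketch of the classification-error-gauge loop described above, assuming Gaussian noise addition as the privacy step and a decision tree as the gauge classifier; the dataset, threshold, and parameter-adjustment rule are placeholders rather than the dissertation's actual choices.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    rng = np.random.default_rng(0)

    error_threshold = 0.10   # acceptable classification error (assumed)
    sigma = 1.0              # initial noise scale for the privacy step
    max_rounds = 10

    for round_ in range(max_rounds):
        # Privacy step: generate a privatized synthetic copy via Gaussian noise addition.
        X_priv = X + rng.normal(0.0, sigma, size=X.shape)

        # Gauge step: train a classifier on the privatized data and measure its error.
        X_tr, X_te, y_tr, y_te = train_test_split(X_priv, y, test_size=0.3, random_state=0)
        clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
        error = 1.0 - clf.score(X_te, y_te)
        print(f"round {round_}: sigma={sigma:.3f}, classification error={error:.3f}")

        if error <= error_threshold:
            break            # utility is acceptable; keep this privatized set
        sigma *= 0.8         # otherwise adjust the privacy parameter and retry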
34

Mispronunciation detection for language learning and speech recognition adaptation

Ge, Zhenhao 11 April 2014 (has links)
Mispronunciation detection, and accent detection more specifically, is receiving increased attention within the speech recognition community. Two application areas, namely language learning and speech recognition adaptation, are largely driving this research interest and are the focal points of this work.

A number of Computer Aided Language Learning (CALL) systems with Computer Aided Pronunciation Training (CAPT) techniques have been developed. In this thesis, a new HMM-based, text-dependent mispronunciation detection system is introduced using Adaptive Frequency Cepstral Coefficients (AFCCs). It is shown that this system outperforms the conventional HMM method based on Mel Frequency Cepstral Coefficients (MFCCs). In addition, a mispronunciation detection and classification algorithm based on Principal Component Analysis (PCA) is introduced to help language learners identify and correct their pronunciation errors at the word and syllable levels.

To improve speech recognition by adaptation, two projects have been explored. The first improves name recognition by learning acceptable variations in name pronunciations, as one approach to making grammar-based name recognition adaptive. The second project is accent detection by examining the shifting of fundamental vowels in accented speech. This approach uses both acoustic and phonetic information to detect accents and is shown to be beneficial with accented English. These applications can be integrated into an automated international calling system to improve recognition of callers' names and speech. The system determines the caller's accent based on a short period of speech. Once the type of accent is detected, it switches from the standard speech recognition engine to an accent-adaptive one for better recognition results.
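As a hedged sketch of the feature pipeline behind such systems (not the thesis's AFCC/HMM method): extract conventional MFCC features from a recording, pool them into a fixed-length descriptor per utterance, and project descriptors with PCA so that native and mispronounced tokens can be compared in a low-dimensional space. The file name and the toy descriptor rows are placeholders.

    import numpy as np
    import librosa
    from sklearn.decomposition import PCA

    # Load a (hypothetical) learner recording and compute 13 MFCCs per frame.
    y, sr = librosa.load("learner_utterance.wav", sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape: (13, n_frames)

    # Pool frames into a fixed-length descriptor (mean and std per coefficient).
    descriptor = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # With descriptors from many utterances stacked row-wise, PCA yields a
    # low-dimensional space in which native and mispronounced tokens can be compared.
    utterances = np.vstack([descriptor, descriptor * 0.9, descriptor * 1.1])  # toy stand-ins
    projected = PCA(n_components=2).fit_transform(utterances)
    print(projected)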
35

Validating the OCTAVE Allegro Information Systems Risk Assessment Methodology: A Case Study

Keating, Corland G. 22 March 2014 (has links)
An information system (IS) risk assessment is an important part of any successful security management strategy. Risk assessments help organizations to identify mission-critical IS assets and prioritize risk mitigation efforts. Many risk assessment methodologies, however, are complex and can only be completed successfully by highly qualified and experienced security experts. Because of their financial constraints and lack of IS security expertise, small-sized organizations, including small-sized colleges and universities, are challenged to conduct risk assessments. Therefore, most small-sized colleges and universities do not perform IS risk assessments, which leaves the institution's data vulnerable to security incursions. The negative consequences of a security breach at these institutions can include a decline in the institution's reputation, loss of financial revenue, and exposure to lawsuits.

The goal of this research is to address the challenge of conducting IS risk assessments in small-sized colleges and universities by validating the use of the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) Allegro risk assessment methodology at a small-sized university. OCTAVE Allegro is a streamlined risk assessment method created by Carnegie Mellon University's Software Engineering Institute. OCTAVE Allegro has the ability to provide robust risk assessment results, with a relatively small investment in time and resources, even for those organizations that do not have extensive risk management expertise.

The successful use of OCTAVE Allegro was validated using a case study that documented the process and outcome of conducting a risk assessment at George Fox University (GFU), a small-sized, private university located in Newberg, Oregon. GFU has the typical constraints of other small-sized universities; it has a relatively small information technology staff with limited expertise in conducting IS risk assessments and lacks a dedicated IS risk manager. Nevertheless, OCTAVE Allegro was relatively easy for GFU staff to understand, provided GFU with the ability to document the security requirements of their IS assets, helped to identify and evaluate IS security concerns, and provided an objective way to prioritize IS security projects. Thus, this research validates that OCTAVE Allegro is an appropriate and effective IS risk assessment method for small-sized colleges and universities.
36

An evaluation of text classification methods for literary study

Yu, Bei, January 2006 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006. Source: Dissertation Abstracts International, Volume: 68-02, Section: A, page: 0387. Adviser: Linda Smith. Includes bibliographical references (leaves 108-115). Available on microfilm from ProQuest Information and Learning.
37

Computational approaches to linguistic consensus

Wang, Jun, January 2006 (has links)
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006. Source: Dissertation Abstracts International, Volume: 68-02, Section: A, page: 0386. Adviser: Les Gasser. Includes bibliographical references (leaves 102-105). Available on microfilm from ProQuest Information and Learning.
38

Bridging the semantic gap: exploring descriptive vocabulary for image structure

Beebe, Caroline. January 2006 (has links)
Thesis (Ph.D.)--Indiana University, School of Library and Information Science, 2006. Source: Dissertation Abstracts International, Volume: 67-09, Section: A, page: 3205. Title from PDF t.p. (viewed Oct. 30, 2008). Adviser: Elin K. Jacob.
39

Bridging the semantic gap: exploring descriptive vocabulary for image structure

Beebe, Caroline. January 2006 (has links)
Thesis (Ph.D.)--Indiana University, School of Library and Information Science, 2006. Adviser: Elin K. Jacob.
40

Scalable Web service-based XML message brokering across organizations

Huang, Yi, January 2007 (has links)
Thesis (Ph.D.)--Indiana University, Computer Science Dept., 2007. Title from dissertation home page (viewed Sept. 29, 2008). Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1103. Adviser: Dennis Gannon.
