131

An analysis of the effects of hue and display density on visual search performance

Loman, James M. January 1986 (has links)
This study investigated the effect of hue on performance under different types of display density. In the past, researchers have primarily conceptualized density in terms of the total number of filled characters on the screen (i.e., overall density). Recently, however, Tullis (1983) suggested this definition was too restrictive. In addition to overall density, he suggested that two other types of density be considered: local density, or the number of characters immediately surrounding the target; and grouping, or the number of perceptual groups formed by the display elements. The present research explored the effect of hue using each of these types of density in three separate experiments. Subjects were presented with several profiles of job applicants on a computer screen and asked to select the applicant with the highest skill score. Density manipulations specific to each experiment were in terms of the number of profiles (Experiment 1: overall density x hue), intra-profile spacing (Experiment 2: local density x hue), and inter-profile spacing (Experiment 3: grouping x hue). In all three experiments, the presence of hue was either relevant or irrelevant to the task. Results of a series of 2 x 2 x 3 ANOVAs showed that the addition of hue had a significant effect on visual search performance. The direction and magnitude of this effect depended on the relevance of hue and the specific manipulation of display density. Findings were explained in terms of the Gestalt principles of organization, and several recommendations were made for screen design. / M. S.
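To make the analysis concrete, the sketch below shows one way such a 2 x 2 x 3 factorial ANOVA on search time could be run in Python with statsmodels; the factor names, level labels, and simulated response values are illustrative assumptions, not the study's actual data or code.

```python
# A minimal sketch (not the author's analysis) of one 2 x 2 x 3 factorial ANOVA
# on search time, using hypothetical factor names and simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n_per_cell = 10
levels = [(h, r, d)
          for h in ("color", "monochrome")       # hue: 2 levels (assumed)
          for r in ("relevant", "irrelevant")    # hue relevance: 2 levels (assumed)
          for d in ("low", "medium", "high")]    # density: 3 levels (assumed)

rows = []
for hue, relevance, density in levels:
    # Simulated search times in seconds; real values would come from the experiment.
    times = rng.normal(loc=5.0, scale=1.0, size=n_per_cell)
    rows += [{"hue": hue, "relevance": relevance, "density": density, "rt": t}
             for t in times]
df = pd.DataFrame(rows)

# Full-factorial model: main effects plus all interactions.
model = ols("rt ~ C(hue) * C(relevance) * C(density)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```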
132

Examining the relationship between performance measures and user evaluations in a transfer of training paradigm

Coleman, William D. January 1985 (has links)
User evaluations which generate detailed information can identify problematic aspects of software interfaces. In a preliminary study (Coleman, Wixon, and Williges, 1984), a methodology was developed for the systematic collection of detailed subjective evaluations of software interfaces. This methodology created a taxonomy of editing functions for users to evaluate and a set of bipolar scales on which they could make their evaluations. The present research investigated the utility of this methodology, while comparing two text editors within the context of a benchmark editing task. In addition, the detailed subjective measures collected were compared with more traditional objective measures. The results of this research revealed that global subjective evaluations were insensitive to differences between two editors indicated by detailed evaluations. Examination of the detailed subjective evaluations indicated that the differences between editors could be attributed to specific editing functions. The objective measures also indicated very specific differences between the two evaluated editors. Examination of the relationship between the objective and subjective measures indicated that the measures differed on both the magnitude and location of effects. Closer inspection of the data revealed that insensitivity on the part of the subjective measure could not account for all disagreement between measures. On several occasions the objective and subjective measures seemed to measure qualitatively different effects. Given that the measures were not completely redundant, it was concluded that both objective and subjective measures should be collected during interface evaluation. / Master of Science
133

Definition and validation of interface complexity metrics

Mayo, Kevin A. 29 November 2012 (has links)
A software system is an aggregate of communicating modules, and there are several different types of communication among these modules (direct, indirect, and global). Understanding the interfaces among these modules therefore helps characterize the system, and those interfaces are a major factor in the system's complexity. These interfaces may also reveal and predict inadequacies in the reliability and maintainability of a system. Interfaces are defined early in the development life cycle, at the detailed or high-level design stage. Because these interfaces and their structure are known at that point, they can be measured for an indication of the designed interface complexity. This designed interface complexity can then be used for software quality assurance by allowing users to choose from among several designs. With data provided by an Ada software developer, the interface complexity metrics correlated with established metrics while also identifying complex interfaces that the established metrics missed. / Master of Science
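As a purely hypothetical illustration of what a designed interface complexity score might look like, the sketch below counts direct-call, shared-callee, and shared-global couplings in a toy module design and combines them with assumed weights; the module names, coupling categories, and weights are not Mayo's actual metric definitions.

```python
# Hypothetical illustration: score a design from its module interfaces using
# direct calls, indirect (shared-callee) links, and global-variable couplings.
from itertools import combinations

# Toy design description: each module lists the modules it calls directly
# and the global variables it touches.
design = {
    "parser":  {"calls": {"lexer", "symtab"}, "globals": {"err_count"}},
    "lexer":   {"calls": {"symtab"},          "globals": {"err_count"}},
    "symtab":  {"calls": set(),               "globals": set()},
    "codegen": {"calls": {"symtab"},          "globals": {"err_count"}},
}

def interface_complexity(design, w_direct=1.0, w_indirect=0.5, w_global=2.0):
    # Direct coupling: every declared call between modules.
    direct = sum(len(m["calls"]) for m in design.values())
    # Indirect coupling: two modules that call a common third module.
    indirect = sum(1 for a, b in combinations(design, 2)
                   if design[a]["calls"] & design[b]["calls"])
    # Global coupling: two modules that share a global variable.
    shared_globals = sum(1 for a, b in combinations(design, 2)
                         if design[a]["globals"] & design[b]["globals"])
    return w_direct * direct + w_indirect * indirect + w_global * shared_globals

print(interface_complexity(design))
```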
134

IDEAL: a tool to enable usability specification and evaluation

Ashlund, Stacey Lynn 05 December 2009 (has links)
While interactive design tools, rapid prototyping tools, and user interface management systems (UIMSs) are advancing as cost-effective ways of producing interfaces, attention to usability is rarely incorporated into such tools. Producing interfaces more rapidly without addressing their quality is of limited worth. This thesis reports on the design and prototype implementation of a software tool, IDEAL (Interface Design Environment and Analysis Lattice), that encourages and enables user-centered design as an integral part of the user interface development process. IDEAL integrates usability engineering techniques and behavioral task representations with a graphical hierarchy of associated user tasks to support formative evaluation of an evolving user interface. IDEAL supplements the functionality of current interface construction tools by focusing on usability through user-centered design. IDEAL was designed and developed using the techniques it supports: formative evaluation and iterative refinement. Representative users participated in two phases of qualitative formative evaluation, from which critical incidents, verbal protocol data, and other qualitative data were collected. Feedback from each phase contributed to the revised design of IDEAL. This empirical evaluation showed IDEAL to be useful as an automated tool for managing the interrelated tasks of interface development, including design, usability specification definition, and formative evaluation, that are currently performed manually (e.g., using pencil and paper). / Master of Science
135

A macro system for English-like commands

Crowl, Lawrence Alan January 1983 (has links)
This thesis describes a system that allows the designer of an interface between humans and computers to implement an English-like set of commands easily. Designers model commands on imperative English sentences containing verbs, direct and indirect objects, adverbs, adjectives, and prepositional phrases. The system restricts the ordering and presence of command components as little as possible while still allowing the commands to be analyzed in a deterministic manner. Thus, commands formed with the system can mirror not only the form of English, but the flexibility of English as well. This system is based on the macro expansion of a command into a base language, and therefore has only the expressive power of the base language. Because of this, the system can only aid in the definition of commands, and not in the definition of a language. The language for describing the commands is structural in nature. The programmer defines the parts of a command instead of the syntax of the command. With each part of the command, the programmer defines sections of code that will be generated at various points during the recognition of the command. An implementation can execute generated code at some later time, at the end of the translation, or during the expansion. A prototype implementation expands a file of commands into the VAX/VMS Digital Command Language. / M.S.
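The toy sketch below illustrates the general idea only: an order-flexible, English-like command whose parts carry code fragments is macro-expanded into a base-language string. The command parts, qualifiers, and DCL-like output are invented for illustration and are not the thesis's actual definition language.

```python
# Toy sketch of order-flexible command recognition followed by macro expansion
# into a base-language string (hypothetical DCL-like text).
COMMAND = {
    "verb":   {"print": "PRINT {file}"},
    "adverb": {"quietly": "/NOTIFY=NO", "twice": "/COPIES=2"},
}

def expand(words):
    verb, file_arg, qualifiers = None, None, []
    for w in words:
        if w in COMMAND["verb"]:
            verb = COMMAND["verb"][w]
        elif w in COMMAND["adverb"]:
            qualifiers.append(COMMAND["adverb"][w])
        else:
            file_arg = w                    # treat any other word as the object
    if verb is None or file_arg is None:
        raise ValueError("command is missing its verb or object")
    return verb.format(file=file_arg) + "".join(qualifiers)

# Word order is free, mirroring the flexibility described above.
print(expand(["quietly", "print", "report.txt"]))   # PRINT report.txt/NOTIFY=NO
print(expand(["print", "report.txt", "twice"]))     # PRINT report.txt/COPIES=2
```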
136

WIMP and Beyond: The Origins, Evolution, and Awaited Future of User Interface Design

Yang, Grant 01 January 2015 (has links)
The field of computer user interface design is rapidly changing and diversifying as new devices are developed every day. Technology has risen to become an integral part of life for people of all ages around the world. Modern life as we know it depends on computers, and understanding the interfaces through which we communicate with them is critically important in an increasingly digital age. The first part of this paper examines the technological origins and historical background driving the development of graphical user interfaces from their earliest incarnations to today. Hardware advancements and key turning points are presented and discussed. In the second part of this paper, skeuomorphism and flat design, two of the most common design trends today, are analyzed and explained. Finally, the future course of user interface design is predicted based on emergent technologies such as the Apple Watch, Google Glass, Microsoft HoloLens, and Microsoft PixelSense. Through understanding the roots and current state of computer user interface design, engineers, designers, and scientists can help us get the most out of our ever-changing world of advanced technology as it becomes further intertwined with our existence.
137

Probabilistic modeling of neural data for analysis and synthesis of speech

Matthews, Brett Alexander 13 August 2012 (has links)
This research consists of probabilistic modeling of speech audio signals and deep-brain neurological signals in brain-computer interfaces. A significant portion of this research consists of a collaborative effort with Neural Signals Inc., Duluth, GA, and Boston University to develop an intracortical neural prosthetic system for speech restoration in a human subject living with Locked-In Syndrome, i.e., he is paralyzed and unable to speak. The work is carried out in three major phases. We first use kernel-based classifiers to detect evidence of articulation gestures and phonological attributes in speech audio signals. We demonstrate that articulatory information can be used to decode speech content in speech audio signals. In the second phase of the research, we use neurological signals collected from a human subject with Locked-In Syndrome to predict intended speech content. The neural data were collected with a microwire electrode surgically implanted in the speech motor cortex of the subject's brain, with the implant location chosen to capture extracellular electric potentials related to speech motor activity. The data include extracellular traces and firing occurrence times for neural clusters in the vicinity of the electrode, identified by an expert. We compute continuous firing rate estimates for the ensemble of neural clusters using several rate estimation methods and apply statistical classifiers to the rate estimates to predict intended speech content. We use Gaussian mixture models to classify short frames of data into 5 vowel classes and to discriminate intended speech activity in the data from non-speech. We then perform a series of data collection experiments with the subject designed to test explicitly for several speech articulation gestures, and decode the data offline. Finally, in the third phase of the research we develop an original probabilistic method for the task of spike-sorting in intracortical brain-computer interfaces, i.e., identifying and distinguishing action potential waveforms in extracellular traces. Our method uses both action potential waveforms and their occurrence times to cluster the data. We apply the method to semi-artificial data and partially labeled real data. We then classify neural spike waveforms, modeled with single multivariate Gaussians, using the method of minimum classification error for parameter estimation. Finally, we apply our joint waveform-and-occurrence-time spike-sorting method to neurological data in the context of a neural prosthesis for speech.
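A minimal sketch of the kind of classifier described above is given below: one Gaussian mixture per vowel class fit to firing-rate frames, with each frame assigned to the class of highest likelihood, using scikit-learn. The vowel labels, ensemble size, mixture settings, and simulated firing rates are assumptions, not the study's configuration or data.

```python
# Minimal sketch: per-class Gaussian mixture models over firing-rate frames,
# classification by maximum per-class log-likelihood. All data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
vowels = ["AA", "IY", "UW", "EH", "OW"]      # assumed 5 vowel classes
n_units, n_train, n_test = 8, 200, 20        # assumed ensemble size / frame counts

# Simulated firing-rate frames (spikes/s) per class; real frames would come
# from the continuous rate estimates described above.
means = {v: rng.uniform(5, 50, size=n_units) for v in vowels}
train = {v: rng.normal(means[v], 5.0, size=(n_train, n_units)) for v in vowels}
test = np.vstack([rng.normal(means[v], 5.0, size=(n_test, n_units)) for v in vowels])
truth = np.repeat(np.arange(len(vowels)), n_test)

# Fit one mixture model per vowel class.
models = [GaussianMixture(n_components=2, covariance_type="diag",
                          random_state=0).fit(train[v]) for v in vowels]

# Classify each test frame by the class whose model gives the highest likelihood.
scores = np.column_stack([m.score_samples(test) for m in models])
pred = scores.argmax(axis=1)
print("frame accuracy:", (pred == truth).mean())
```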
138

Bone Erosion Measurement in Subjects with Rheumatoid Arthritis Using Magnetic Resonance Imaging

Emond, Patrick D. 04 1900 (has links)
Rheumatoid arthritis (RA) is a systemic disease that can affect the nervous system, lungs, heart, skin, reticuloendothelium and joints. Currently, the gold-standard measurement for tracking the progression of the disease involves a semi-quantitative assessment of bone erosion, bone marrow edema and synovitis, as seen in magnetic resonance (MR) images, by a musculoskeletal radiologist. The work presented in this thesis identifies how computer automation can be used to quantify bone erosion volumes in MR images without a radiologist's expert and time-consuming intervention. A new semi-automated hybrid segmentation algorithm that combines two established techniques, region growing and level-set segmentation, is described and evaluated for use in a clinical setting. A total of 40 participants with RA were scanned using a 1-Tesla peripheral MR scanner. Eight of the participant scans were used to train the algorithm, with the remaining scans used to determine the accuracy, precision, and speed of the technique. The reproducibility of the hybrid algorithm and that of manual segmentation were defined in terms of intra-class correlation coefficients (ICCs). Both techniques were equally precise, with ICC values greater than 0.9. According to a least squares fit between erosion volumes obtained by the hybrid algorithm and those obtained from manual tracings drawn by a radiologist, the former was found to be highly accurate (m = 1.030, b = 1.385; r-squared = 0.923). The hybrid algorithm was significantly faster than manual segmentation, which took two to four times longer to complete. In conclusion, computer automation shows promise as a means to quantitatively assess bone erosion volumes. The new hybrid segmentation algorithm described in this thesis could be used in a clinical setting to track the progression of RA and to evaluate the effectiveness of treatment. / Doctor of Philosophy (PhD)
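For illustration only, the snippet below shows how the reported accuracy check, a least-squares fit of hybrid-algorithm erosion volumes against manual tracings, could be computed with SciPy; the volumes are simulated, so the printed slope, intercept, and r-squared will not match the study's values.

```python
# Sketch of the accuracy check: least-squares fit of algorithm volumes against
# manual tracings. Volumes are simulated, not the study's measurements.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
manual = rng.uniform(5, 200, size=32)                        # manual volumes (mm^3), simulated
hybrid = 1.03 * manual + 1.4 + rng.normal(0, 8, size=32)     # simulated algorithm output

fit = linregress(manual, hybrid)
print(f"m = {fit.slope:.3f}, b = {fit.intercept:.3f}, r^2 = {fit.rvalue**2:.3f}")
```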
139

A parallel adapter for a high-speed serial bus

Gray, Terrence Patrick, 1954- January 1989 (has links)
This paper describes the building of a parallel converter for a high-speed serial bus. The high-speed serial port of the Macintosh personal computer is used to implement the bus, while an MC68000 Educational Computer Board is used to perform the serial-to-parallel conversion. The device's performance is evaluated, and possible methods for improving its performance are discussed.
140

Digital Forensics Tool Interface Visualization

Altiero, Roberto A. 15 January 2015 (has links)
Recent trends show that digital devices are involved with increasing frequency in most crimes committed. Investigating crimes involving these devices is labor-intensive for the practitioner, because digital forensics tools present possible evidence as tabular lists for manual review. This research investigates how enhanced visualization techniques in digital forensics tool interfaces can improve the investigator's cognitive capacity to discover criminal evidence more efficiently. This paper presents visualization graphs and contrasts their properties with the outputs of The Sleuth Kit (TSK) digital forensics program, exhibiting the text-based interface as a point of contrast to demonstrate the effectiveness of enhanced data presentation. Further demonstrated is the potential of the computer interface to present the digital forensics practitioner with an abstract, graphical view of an entire dataset of computer files. Enhanced interface design of digital forensics tools means more rapidly linking suspicious evidence to a perpetrator. Introduced in this study is a mixed methodology of ethnography and cognitive load measures. Ethnographically defined tasks, developed from interviews with digital forensics subject matter experts (SMEs), shape the context for the cognitive measures. Cognitive load testing of digital forensics first responders using both a text-based and a visualization-based application established a quantitative mean of the mental workload during operation of the applications under test. A dependent-samples t-test compared the operators' mean workloads under the two applications against the null hypothesis of no significant difference. Results of the study indicate a significant difference, affirming the hypothesis that a visualized application reduces the cognitive workload of the first-responder analyst. With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that a visualized digital forensics interface yields more efficient performance by the analyst, saving labor costs and compressing the time required for the discovery phase of a digital investigation.
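A minimal sketch of the dependent-samples t-test described above is given below, pairing each analyst's mean workload under the text-based application with their workload under the visualized one; the sample size, rating scale, and scores are simulated assumptions, not the study's data.

```python
# Sketch of a paired (dependent-samples) t-test on cognitive workload scores.
# All numbers are simulated; the scale is an assumed workload rating.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_analysts = 12                                        # assumed sample size
textual = rng.normal(70, 10, size=n_analysts)          # workload under text-based tool
visual = textual - rng.normal(12, 5, size=n_analysts)  # simulated reduction with visualization

t_stat, p_value = ttest_rel(textual, visual)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")          # p < .05 would support the hypothesis
```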
