51

A human spatial-chromatic vision model for evaluating electronic displays

Lloyd, Charles J. C. 19 October 2005 (has links)
This dissertation examines those attributes of full-color display systems (particularly color matrix displays) which degrade image quality. Based on this analysis, it is suggested that a comprehensive metric should measure image quality in terms of transmitted signal and noise modulation, both achromatic and chromatic. Moreover, it is suggested that these signal and noise measurements be weighted in terms of human spatial-chromatic visual characteristics. A review of extant image quality metrics reveals several limitations of these metrics which make them unsuitable for the evaluation of color matrix displays. These limitations include the inability to account for chromatic modulation transfer and chromatic noise as well as the general inability to account for spatial and grey-scale sampling. This work describes a new methodology for assessing image quality that can be applied to full-color as well as monochromatic, and sampled as well as continuous, display systems. Unlike most display quality metrics, the proposed methodology is not based on the tools of linear systems analysis. Rather, it is based on more veridical models of the human visual system (HVS), including multi-channel models of spatial vision, the zone theory of color vision, physiological models of retinal processes, and models of the optics of the eye. A display evaluation system consisting of the HVS model used in conjunction with a display simulator is described. The HVS model employs nine image processing stages to account for nonlinear retinal processes, opponent color encoding, and multiple spatial frequency channels. A detailed procedure for using the HVS model to evaluate display systems is provided. The validity of the HVS model was tested by conducting contrast detection, discrimination, and magnitude estimation experiments on the model. 
The results of these experiments correspond closely with published human performance data. The utility of the display evaluation system was assessed by making image quality predictions for the display systems used in three image quality studies. Image quality predictions using the proposed system correlate strongly with ratings of image quality provided by human subjects. Results of these validation studies indicate that the proposed method of display evaluation is viable and warrants further development. / Ph. D.
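The idea of weighting signal modulation by human contrast sensitivity can be sketched in a few lines. This is an illustrative stand-in, not the dissertation's nine-stage HVS model: it uses the published Mannos–Sakrison achromatic contrast sensitivity function as the spatial-frequency weighting, and the function names are invented for the example.

```python
import numpy as np

def csf(f):
    """Achromatic contrast sensitivity (Mannos & Sakrison, 1974 form).
    f: spatial frequency in cycles/degree; peaks in the mid frequencies."""
    f = np.asarray(f, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def weighted_modulation(freqs, modulation):
    """CSF-weighted sum of (signal or noise) modulation across frequencies,
    in the spirit of a perceptually weighted quality metric."""
    return float(np.sum(csf(freqs) * np.asarray(modulation, dtype=float)))
```

A full metric along the lines described above would apply such weights separately to achromatic and chromatic signal and noise spectra.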
52

Effects of depth cues on depth judgements using a field-sequential stereoscopic CRT display

Reinhart, William Frank 13 July 2007 (has links)
Current interest in three-dimensional (3-D) information displays has focused on the use of field-sequential CRT techniques to present binocular stereoscopic images. Although it is widely believed that stereopsis provides a potent depth information cue, numerous monocular cues exist which may augment, detract from, or even supplant stereopsis. Unfortunately, few guidelines or well-controlled analyses on the use of depth cues are available to direct engineering implementations of stereoscopic display systems. This dissertation describes three experiments using 3-D images presented on a Tektronix SGS 620 field-sequential stereoscopic CRT (19-inch diagonal, 120-Hz field rate, passive glasses). In the first experiment, 10 participants with normal vision judged the relative apparent depth ordering of three simple geometric figures (planar circle, square, and triangle). Four sources of depth information (cue types) were factorially combined to construct exemplary images of planar figures in apparent depth: Relative Size (angular subtense decreased with increasing apparent depth); Disparity (binocular disparity varied from crossed to uncrossed with increasing apparent depth); Interposition (closer figures partially occluded ones farther away in apparent depth); and Luminance (luminance decreased with increasing apparent depth). The three monocular cues (Interposition, Size, and Luminance) produced significantly faster depth judgments when used alone; however, when used in combination, Interposition dominated the response time data trends. Although the Disparity cue received moderately high "perceived effectiveness" ratings, response time measures indicated that it played a minor role in the relative depth judgment task. The second experiment was conducted to investigate further the subjective value of the various depth cues. Participants rated subjective image quality (quality of depth) rather than making rapid relative depth judgements. 
As anticipated, the most satisfactory ratings of depth were made for display images which included stereoscopic depth (Disparity), with the very highest ratings given to display images which included all four depth cues. The results of these first two experiments illustrated a task-demand (objective vs. subjective) discrepancy in the utility of stereoscopic depth cues. The third experiment extended the initial work to include more geometrically complex stimuli in visual search and cursor positioning tasks. In these task environments, stereoscopic disparity and monocular depth cues had an interactive effect on improving visual search times and reducing cursor positioning errors on the depth axis, with the best performance associated with the presence of all depth cues. The complementary nature of these effects was attenuated when depth cue salience was elevated to suprathreshold levels. Based on the results of this research, recommendations are presented for the display of depth information with the stereoscopic CRT. The importance of this research is underscored by the fact that while technological advances have been made in the field of stereoscopic display, very few usability data exist either from laboratory testing or from the implementation of such displays in operational systems. This research provides information to complete cost/performance benefit analyses for 3-D display designs which could in turn significantly impact industry acceptance of the field-sequential stereoscopic CRT. / Ph. D.
53

Reliable goal-directed reactive control of autonomous mobile robots

Gat, Erann 28 July 2008 (has links)
This dissertation demonstrates that effective control of autonomous mobile robots in real-world environments can be achieved by combining reactive and deliberative components into an integrated architecture. The reactive component allows the robot to respond to contingencies in real time. Deliberation allows the robot to make effective predictions about the world. By using different computational mechanisms for the reactive and deliberative components, much existing deliberative technology can be effectively incorporated into a mobile robot control system. The dissertation describes the design and implementation of a reactive control system for an autonomous mobile robot which is explicitly designed to interface to a deliberative component. A programming language called ALFA is developed to program this system. The design of a control architecture which incorporates this reactive system is also described. The architecture is heterogeneous and asynchronous, that is, it consists of components which are structured differently from one another, and which operate in parallel. This prevents slow deliberative computations from adversely affecting the response time of the overall system. The architecture produces behavior which is reliable and goal-directed, yet reactive to contingencies, in the face of noise, limited computational resources, and an unpredictable environment. The system described in this dissertation has been used to control three real robots and a simulated robot performing a variety of tasks in real-world and simulated real-world environments. A general design methodology based upon bottom-up hierarchical decomposition is demonstrated. The methodology is based on the principle of cognizant failure, that is, that low-level activities should be designed in such a way as to detect failures and state transitions at high levels of abstraction.
Furthermore, the results of deliberative computations should be used to guide the robot's actions, but not to control those actions directly. / Ph. D.
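The heterogeneous, asynchronous split described above can be sketched as a fast reactive loop that never blocks on a slow deliberator, with advice passed through a queue. This is a minimal illustration under assumed names (the actual system was programmed in ALFA, not Python):

```python
import queue
import threading
import time

class Deliberator(threading.Thread):
    """Slow planner: periodically publishes advice (e.g., waypoints)
    without ever blocking the reactive loop."""
    def __init__(self, advice_q):
        super().__init__(daemon=True)
        self.advice_q = advice_q
    def run(self):
        for waypoint in ["A", "B", "C"]:
            time.sleep(0.05)            # stands in for expensive planning
            self.advice_q.put(waypoint)

def reactive_loop(advice_q, steps):
    """Fast control loop: acts every tick, consuming deliberative advice
    only when it happens to be available (guidance, not direct control)."""
    goal, log = None, []
    for _ in range(steps):
        try:
            goal = advice_q.get_nowait()   # never wait on deliberation
        except queue.Empty:
            pass
        log.append(goal)                   # act toward current goal (or safe default)
        time.sleep(0.01)
    return log
```

The queue decouples the two timescales: the reactive layer keeps its response time regardless of how long planning takes, which is the property the abstract attributes to the heterogeneous asynchronous design.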
54

Effects of stimulus class on short-term memory workload in complex information display formats

Tan, Kay Chuan 28 July 2008 (has links)
The objective of this research effort was to identify opportunities and demonstrate methods to reduce aircraft crew member cognitive workload (CWL) by reducing short-term memory (STM) demand. Two experiments qualitatively and quantitatively compared memory loading as a function of stimulus class. Experiment 1 employed a dual-task paradigm where the primary task was compensatory tracking used to load STM and the secondary task was item recognition using the Sternberg paradigm. Experiment 2 employed a single-task paradigm using a modified version of the Sternberg task. Digits, letters, colors, words, and geometrical shapes were tested as memory-set (MSET) items in the Sternberg task. Recognition latency and error rate served as objective measures of STM performance while the Subjective Workload Assessment Technique (SWAT) was employed as a second, subjective measure. Root Mean Square error was used to gauge tracking performance. Analyses of the experiments' results revealed that recognition latency and SWAT ratings statistically varied as functions of stimulus class, MSET size, and the interaction between stimulus class and MSET size. Error rate was not statistically different across stimulus class or MSET size. Post-hoc analyses found SWAT to be a more sensitive STM measurement instrument than recognition latency or error rate. No statistically significant degree of secondary task intrusion on the tracking task was found. In addition to the commonly used classes of digits and letters, this research demonstrated that colors, words, and geometrical shapes could also be utilized as MSET items in short-term memory workload investigations. This research has, more importantly, provided further support for the vital link between STM demand and perceived workload. The main conclusion of this research is that stimulus class optimization can be a feasible method for reducing STM demand.
Differences in processing rate among stimulus classes are large enough to impact visual display design. For many context-specific applications, it should be possible to determine the most efficient stimulus class in which to portray the needed information. The findings of this research are especially applicable in situations of elevated STM demand (e.g., aviation systems operations). In general, however, the results provide helpful information for visual display designers. / Ph. D.
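The Sternberg item-recognition paradigm used in both experiments has a simple serial-scan logic, sketched below. The latency parameters are illustrative (the ~38 ms/item slope is the classic value Sternberg reported for digits, not this thesis's data), and the function names are invented:

```python
def sternberg_trial(memory_set, probe):
    """One item-recognition trial: report whether the probe was in the
    previously memorized memory set (MSET)."""
    return probe in memory_set

def predicted_latency(set_size, intercept_ms=400.0, slope_ms=38.0):
    """Serial-scan prediction: recognition latency grows linearly with
    MSET size; different stimulus classes would yield different slopes."""
    return intercept_ms + slope_ms * set_size
```

In the thesis's framing, fitting a slope of this kind per stimulus class (digits, letters, colors, words, shapes) is what makes classes comparable as STM loads.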
55

Recognition of aerospace acoustic sources using advanced pattern recognition techniques

Scott, Emily A. 02 March 2010 (has links)
An acoustic pattern recognition system has been developed to identify aerospace acoustic sources. The system is capable of classifying five different types of air and ground sources: jets, propeller planes, helicopters, trains, and wind turbines. The system consists of one microphone for data acquisition, a preprocessor, a feature selector, and a classifier. This thesis presents two new classifiers, one based on an associative memory and one on artificial neural networks, and compares their performance to that of the original classifier developed at VPI&SU (1,2). The acoustic patterns are classified using features that have been calculated from the time and frequency domains. Each of the classifiers undergoes a training period during which a set of known patterns is used to teach the classifier to classify unknown patterns correctly. Once training was completed, each classifier was tested using a new set of unknown data. Two different classifier structures were tested, a single level structure and a tree structure. Results show that the single level associative memory and artificial neural network classifiers each identified 90.6 percent of the acoustic sources correctly. The original linear discriminant function single level classifier (1,2) identified 86.7 percent of the sources. The tree structure classifiers classified respectively 90.6 percent, 91.8 percent, and 90.1 percent of the sources correctly. / Master of Science
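The feature-extraction-plus-classifier pipeline can be sketched with toy time- and frequency-domain features and a nearest-class-mean classifier. This is a hedged stand-in, not the thesis's associative memory or neural network; all names and feature choices here are invented for illustration:

```python
import numpy as np

def features(signal, rate):
    """Toy time/frequency features: RMS level and spectral centroid (Hz)."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / rate)
    centroid = float(np.sum(freqs * spec) / np.sum(spec))
    return np.array([np.sqrt(np.mean(signal ** 2)), centroid])

class CentroidClassifier:
    """Nearest-class-mean classifier trained on labeled feature vectors,
    standing in for the associative memory / neural network classifiers."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.means = {c: np.mean([x for x, lbl in zip(X, y) if lbl == c], axis=0)
                      for c in self.classes}
        return self
    def predict(self, x):
        return min(self.classes, key=lambda c: np.linalg.norm(x - self.means[c]))
```

A tree-structured variant, as tested in the thesis, would chain several such classifiers, each discriminating among a subset of source types.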
56

A temporal analysis of natural language narrative text

Ramachandran, Venkateshwaran 12 March 2009 (has links)
Written English texts in the form of narratives often describe events that occur in definite chronological sequence. Understanding the concept of time in such texts is an essential aspect of text comprehension and forms the basis for answering time related questions pertaining to the source text. It is our hypothesis that time in such texts is expressed in terms of temporal orderings of the situations described and can be modelled by a linear representation of these situations. This representation conforms to the traditional view of the linearity of time where it is regarded as a horizontal line called the timeline. Information indicating the temporal ordering of events is often explicitly specified in the source text. Where such indicators are missing, semantic relations between the events enforce temporal orderings. This thesis proposes and implements a practical model for automatically processing paragraphs of narrative fiction for explicit chronological information and employing certain guidelines for inferring such information in the absence of explicit indications. Although we cannot claim to have altogether eliminated the need for expensive semantic inferencing within our model, we have certainly devised guidelines to eliminate the expense in certain cases where explicit temporal indicators are missing. We have also characterized some cases through our test data where semantic inferencing proves necessary to augment the capabilities of our model. / Master of Science
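The linear-timeline representation described above can be sketched as follows. This is an illustrative simplification under assumed indicator words, not the thesis's implementation; its default rule matches the abstract's guideline that, absent explicit indicators, narrative order is taken as chronological order:

```python
def build_timeline(events):
    """Place narrated events on a linear timeline.
    events: list of (description, indicator) where indicator is one of
    'then' (after the previous event), 'meanwhile' (same time point),
    'earlier' (before the previous event), or None (default: assume
    narrative order matches chronological order)."""
    timeline = []   # list of time points; each time point is a list of events
    for desc, indicator in events:
        if not timeline:
            timeline.append([desc])
        elif indicator == "meanwhile":
            timeline[-1].append(desc)
        elif indicator == "earlier":
            timeline.insert(len(timeline) - 1, [desc])
        else:  # 'then' or no indicator
            timeline.append([desc])
    return timeline
```

Cases the guidelines cannot resolve, per the abstract, would still require semantic inference beyond this kind of indicator matching.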
57

An interactive PHIGS+ model rendering system applied to postprocessing of spatial mechanisms

Montgomery, David Eric 24 March 2009 (has links)
This thesis presents the concept, development, and use of PriSM (Postprocessing of Spatial Mechanisms), an interactive 3-D graphical postprocessor for spatial mechanism synthesis and design programs. This device-independent system provides visualization, modeling, and animation of spatial mechanisms. New ideas and methods are described to simplify the interactive specification of scene rendering and color parameters using the international ISO standard for 3-D graphics, PHIGS (Programmer’s Hierarchical Interactive Graphic System), and its proposed extensions, PHIGS+. Perception and evaluation of spatial mechanism designs are significantly improved by the use of PHIGS+ functionality to produce animated models that are shaded, lighted, and depth cued. Examples are presented for the rendering and animation of spatial mechanisms on a Raster Technologies (Alliant) GX4000 workstation with a hardware-based PHIGS+ graphics subsystem, UNIX, NeWS, and C. In addition to color photographs and grayscale bitmaps of the PriSM implementation, the program structure and source code listing are fully documented. / Master of Science
58

Assessing human performance trade-offs of a telephone-based information system

Wu, Jimmy K. K. January 1989 (has links)
Little research effort has been devoted to human interaction with telephone information systems. This study investigated the effects of system parameters and user characteristics on human behavior in an interactive telephone-based information system. The research method utilized a central-composite design to study four variables at five levels each. The four factors manipulated were: synthesized speech rate, time available for user input, subject age, and background music level. Subjects searched a fictitious department store database for 16 specific store items and transcribed 16 information messages which were spoken by a computer speech synthesizer. Subjective ratings of certain features of the system were solicited from the subjects, and performance measures were collected on-line. Performance was evaluated by calculating regression equations relating the dependent measures and the independent variables. A response surface was plotted, and optimal settings for the information system were also calculated. Two seconds was found to be an optimal time for users to enter their selection. The computer synthesized speech rate should be set close to 120-150 words per minute. Background music or noise level should be kept below 50 dB(A); sound levels above 50 dB(A) seriously affected users' ability to understand synthetic speech. Younger subjects (age 14-22) performed better in this study than older subjects (age 36-62). / Master of Science
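The response-surface step described above, fitting a regression to the designed experiment's results and solving for the optimum, can be sketched for a single factor. This is a minimal one-dimensional illustration (the study fit surfaces over four factors), with invented function names:

```python
import numpy as np

def fit_quadratic(x, y):
    """Least-squares quadratic fit y ≈ a*x**2 + b*x + c, one slice of a
    response surface; returns (a, b, c)."""
    return np.polyfit(x, y, 2)

def stationary_point(coeffs):
    """Optimum of the fitted parabola: setting dy/dx = 2a*x + b = 0
    gives x = -b / (2a)."""
    a, b, _ = coeffs
    return -b / (2.0 * a)
```

A central-composite design supplies the axial and center points that make such a quadratic fit estimable in each factor.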
59

Evaluation of several techniques for enhancing speech degraded by additive noise in mobile radio environments

Liberti, Joseph C. 10 October 2009 (has links)
This thesis presents a study of several algorithms for enhancing speech degraded by additive noise in mobile cellular communications. The primary goal of this multi-stage study was to examine adaptive noise cancellation techniques in which one microphone is used to measure the speech plus noise signal and another microphone is used to form an estimate of the interfering background noise. The first stage of this research project involved the design and operation of a measurement system to gather dual channel audio samples in mobile radio environments for use in testing adaptive noise cancellation algorithms developed at Northeastern University. In the second phase of this research, several adaptive algorithms were used to implement noise cancellation systems which were applied to the measured speech signals. In the third phase of this research, several of the adaptive noise cancellation algorithms are compared and additional speech enhancement techniques are investigated. / Master of Science
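The two-microphone arrangement described above is the classic adaptive noise cancellation setup, commonly implemented with an LMS filter. The sketch below is a generic LMS canceller, not necessarily the specific algorithms studied in the thesis, and its parameter values are illustrative:

```python
import numpy as np

def lms_cancel(primary, reference, taps=16, mu=0.01):
    """Two-microphone LMS noise canceller: adapt an FIR filter so the
    filtered reference (noise-only) channel predicts the noise in the
    primary (speech + noise) channel; the prediction error that remains
    is the enhanced speech estimate."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # ref[n], ref[n-1], ...
        noise_est = w @ x
        e = primary[n] - noise_est                # error = speech estimate
        w += 2.0 * mu * e * x                     # LMS weight update
        out[n] = e
    return out
```

Because the speech is uncorrelated with the reference noise, the filter converges toward the noise path and the speech passes through in the error signal.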
60

Methodology to determine performance of a group technology design cell on the basis of performance measures

Tank, Rajul 24 October 2009 (has links)
There are a large number of Group Technology (GT) based cell formation techniques in the literature, but their applications are rare. It is hypothesized that the reason behind the lack of applications of these techniques in practice is "fear of the unknown". There have been a very limited number of attempts to determine the performance of any of the cell formation techniques. This thesis attempts to demonstrate a method to determine the performance of cell formation techniques by measuring the physical performance of the manufacturing cell. The methodology involves a manual evaluative approach to determine the cell performance from the data given for the system. The methodology presents selection of important Performance Measures (PMs), data requirement for the measurement of PMs, and cell formation technique analysis. The performance measures to determine the performance of these techniques were selected according to their importance to the productivity of the manufacturing cell and their significance among GT principles. The cell formation techniques selected to demonstrate the method are the Rank Order Clustering algorithm (ROC) and Production Flow Analysis (PFA). Using ROC and PFA, part families and machine groups were formed, creating cell layouts. From the given data, performance measure values were calculated for a functional layout as well as the ROC and PFA layouts. Performance of the ROC and PFA layouts were compared to each other and to the functional layout. Results from the example show that performance improvement can be achieved by the two cell formation techniques in all the performance measure categories except flexibility. Performance of ROC and PFA are the same in the categories of setup time, machine utilization, and flexibility. The reason is that similar machine groupings and part families were achieved by both techniques for this example.
Material handling performance and flexibility are dependent largely on machine grouping, whereas setup time is dependent on part families. Machine utilization and work-in-process are dependent on machine groups as well as part families. It appears PFA would have better performance in cases of complex problems having a large number of machines and parts due to its comprehensiveness and ability to group machines according to the parts’ processing similarities. The advantage of ROC is mainly in its ease of application and rather elegant way of handling bottleneck machines and exceptional parts. Due to the lack of flexibility in GT layouts, system design and operation planning should be done carefully. / Master of Science
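The Rank Order Clustering algorithm discussed above has a compact formulation: read each row of the binary machine-part incidence matrix as a binary number, sort rows in descending order, do the same for columns, and repeat until the ordering is stable. The sketch below follows that standard description (King's ROC); the helper name is invented:

```python
import numpy as np

def rank_order_clustering(m):
    """Rank Order Clustering on a binary machine-part incidence matrix:
    alternately sort rows and columns by their binary-number values until
    stable; blocks along the diagonal then suggest candidate cells.
    Returns the rearranged matrix plus the row and column permutations."""
    m = np.asarray(m)
    rows = np.arange(m.shape[0])
    cols = np.arange(m.shape[1])
    while True:
        row_keys = m @ (2 ** np.arange(m.shape[1])[::-1])   # rows as binary words
        r = np.argsort(-row_keys, kind="stable")
        m, rows = m[r], rows[r]
        col_keys = (2 ** np.arange(m.shape[0])[::-1]) @ m   # columns as binary words
        c = np.argsort(-col_keys, kind="stable")
        m, cols = m[:, c], cols[c]
        if (r == np.arange(len(r))).all() and (c == np.arange(len(c))).all():
            return m, rows, cols
```

On a clean block-structured incidence matrix the algorithm recovers the cells directly; bottleneck machines and exceptional parts, as noted above, show up as 1s outside the diagonal blocks.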
