91 |
Prediction and quantification of individual differences in susceptibility to simulator sickness in fixed-base simulators
Yoo, Young H. 01 January 1999 (has links)
No description available.
|
92 |
Potential stimulus contributions to counterchange determined motion perception
Unknown Date (has links)
Prior research has explored the counterchange model of motion detection in terms of counterchanging information originating in the stimulus foreground (i.e., the objects). These experiments examine counterchange apparent motion using a new stimulus in which the counterchanging information required for apparent motion is instead provided by altering the luminance of the background. Apparent motion produced by background-counterchange was found to require longer frame durations and lower average stimulus contrast than motion produced by foreground-counterchange. Furthermore, inter-object distance influences apparent motion produced by background-counterchange far less than it influences motion produced by foreground-counterchange. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
|
93 |
Motion detection: a neural network approach.
January 1992 (has links)
by Yip Pak Ching. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. / Includes bibliographical references (leaves 97-100).
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.2 --- The Objective of Machine Vision --- p.3
Chapter 1.3 --- Our Goal --- p.4
Chapter 1.4 --- Previous Works and Current Research --- p.5
Chapter 1.5 --- Organization of the Thesis --- p.8
Chapter 2 --- Human Movement Perception --- p.11
Chapter 2.1 --- Basic Mechanisms of Vision --- p.11
Chapter 2.2 --- Functions of Movement Perception --- p.12
Chapter 2.3 --- Five Ways to Make a Spot of Light Appear to Move --- p.14
Chapter 2.4 --- Real Movement --- p.15
Chapter 2.5 --- Mechanisms for the Perception of Real Movement --- p.16
Chapter 2.6 --- Apparent Motion --- p.18
Chapter 3 --- Machine Movement Perception --- p.21
Chapter 3.1 --- Introduction --- p.21
Chapter 3.2 --- Perspective Transformation --- p.21
Chapter 3.3 --- Motion Detection by Difference Image --- p.22
Chapter 3.4 --- Accumulative Difference --- p.24
Chapter 3.5 --- Establishing a Reference Image --- p.26
Chapter 3.6 --- Optical Flow --- p.27
Chapter 4 --- Neural Networks for Machine Vision --- p.30
Chapter 4.1 --- Introduction --- p.30
Chapter 4.2 --- Perceptron --- p.30
Chapter 4.3 --- The Back-Propagation Training Algorithm --- p.33
Chapter 4.4 --- Object Identification --- p.34
Chapter 4.5 --- Special Technique for Improving the Learning Time and Recognition Rate --- p.36
Chapter 5 --- Neural Networks by Supervised Learning for Motion Detection --- p.39
Chapter 5.1 --- Introduction --- p.39
Chapter 5.2 --- Three-Level Network Architecture --- p.40
Chapter 5.3 --- Four-Level Network Architecture --- p.45
Chapter 6 --- Rough Motion Detection --- p.50
Chapter 6.1 --- Introduction --- p.50
Chapter 6.2 --- The Rough Motion Detection Network --- p.51
Chapter 6.3 --- The Correlation Network --- p.54
Chapter 6.4 --- Modified Rough Motion Detection Network --- p.56
Chapter 7 --- Moving Object Extraction --- p.59
Chapter 7.1 --- Introduction --- p.59
Chapter 7.2 --- Three Types of Images for Moving Object Extraction --- p.59
Chapter 7.3 --- Edge Enhancement Network --- p.62
Chapter 7.4 --- Background Remover --- p.63
Chapter 8 --- Motion Parameter Extraction --- p.66
Chapter 8.1 --- Introduction --- p.66
Chapter 8.2 --- 2-D Motion Detection --- p.66
Chapter 8.3 --- Normalization Network --- p.67
Chapter 8.4 --- 3-D Motion Parameter Extraction --- p.70
Chapter 8.5 --- Object Identification --- p.70
Chapter 9 --- Motion Parameter Extraction from Overlapped Object Images --- p.72
Chapter 9.1 --- Introduction --- p.72
Chapter 9.2 --- Decision Network --- p.72
Chapter 9.3 --- Motion Direction Extraction from Overlapped Object Images by Three-Level Network Model with Supervised Learning --- p.75
Chapter 9.4 --- Readjustment Network for Motion Parameter Extraction from Overlapped Object Images --- p.79
Chapter 9.5 --- Reconstruction of the Overlapped Object Image --- p.82
Chapter 10 --- The Integrated Motion Detection System --- p.87
Chapter 10.1 --- Introduction --- p.87
Chapter 10.2 --- System Architecture --- p.88
Chapter 10.3 --- Results and Concluding Remarks --- p.91
Chapter 11 --- Conclusion --- p.93
References --- p.97
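The contents above list two classical pre-neural baselines, "Motion Detection by Difference Image" (Chapter 3.3) and "Accumulative Difference" (Chapter 3.4). A minimal NumPy sketch of these standard techniques follows; it is illustrative only (function names, array shapes, and the threshold value are assumptions), not the thesis's own network code:

```python
import numpy as np

def motion_mask(frame_prev, frame_curr, threshold=25):
    """Difference-image motion detection: flag pixels whose grayscale
    intensity changes by more than `threshold` between consecutive frames."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold

def accumulative_difference(frames, threshold=25):
    """Accumulative difference image: count, per pixel, how many frames
    differ from the first (reference) frame by more than `threshold`.
    Persistent counts trace the trajectory of a moving object."""
    ref = frames[0].astype(np.int16)
    acc = np.zeros(ref.shape, dtype=np.int32)
    for f in frames[1:]:
        acc += np.abs(f.astype(np.int16) - ref) > threshold
    return acc
```

Thresholded differencing of this kind is sensitive to illumination change and camera motion, which is one motivation for the learned detectors developed in the later chapters.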
|
94 |
Human dynamic orientation model applied to motion simulation
Borah, Joshua January 1976 (has links)
Thesis. 1976. M.S.--Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics. / Bibliography: p.R1-R5. / by Joshua D. Borah. / M.S.
|
95 |
Experimental Study of Rocking Motion of Rigid Bodies on Deformable Medium via Monocular Videogrammetry
Greenbaum, Raphael January 2014 (has links)
The study of rigid body rocking is applicable to a wide variety of structural and non-structural elements. Current applications range from bridge pier and shallow footing design to hospital and industrial equipment, and even art preservation. Despite the increasing number of theoretical and simulation studies of rocking motion, few experimental studies exist. Of those that have been published, most focus on a constrained version of the complete problem, modifying the physical setup to eliminate sliding, uplift, or the three-dimensional response of the body. However, all of these phenomena may affect the response of an unrestrained rocking body. Furthermore, the majority of published experimental studies have used methods that are ill-suited to comprehensive three-dimensional analysis of the problem.
The intent of this work is two-fold. First, it presents a computer vision method that allows experimental measurement of rigid body translation and rotation time histories in three dimensions. Experimental results obtained with this method demonstrate greater than 97% accuracy when compared against National Institute of Standards and Technology traceable displacement sensors, and they highlight important phenomena predicted by some state-of-the-art models of 3D rocking behavior. Second, it presents experimental evidence of the importance of characterizing the support medium as deformable rather than rigid, as is commonly assumed. It is shown that the rigid-support assumption may lead to non-conservative analysis that is unable to predict rocking motion and, in some cases, even failure.
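For context, the classical rigid-base planar rocking model that such experiments probe traces back to Housner (1963). A standard statement of it (given here as background; it is not the deformable-medium formulation developed in this work) is:

```latex
I_0 \, \ddot{\theta} = -\, m g R \, \sin\!\big( \alpha \,\mathrm{sgn}(\theta) - \theta \big)
```

where \(\theta\) is the rocking angle, \(\alpha\) the slenderness angle of the block, \(R\) the distance from the rocking corner to the center of mass, \(m\) the mass, and \(I_0\) the mass moment of inertia about the corner; energy loss at each impact is modeled by a coefficient of restitution. The abstract's point is that replacing the rigid support underlying this model with a deformable medium can change the predicted response qualitatively.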
|
96 |
Using Visual Illusions to Examine Action-Related Perceptual Changes
Vuorre, Matti January 2018 (has links)
Action has many influences on how and what we perceive. One robust example of the relationship between action and subsequent perception, which has recently received considerable attention in the cognitive sciences, is the “intentional binding” effect: When people estimate the timing of their actions and those actions’ effects, they judge the actions and effects as having occurred closer together in time than two events that do not involve voluntary action (Haggard, Clark, & Kalogeras, 2002). This dissertation examines the possible mechanisms and consequences of the intentional binding effect. First, in Chapter 1, I discuss previous literature on the relationships between experiences of time, action, and causality. Impressions of time and causality are psychologically related: The perceived timing of events impacts, and is impacted by, perceived causality. Similarly, one’s experience of causing and controlling events with voluntary action, sometimes called the sense of agency, shapes and is shaped by how those events’ timing is perceived—as shown by the intentional binding effect.
In Chapter 2 I present a series of experiments investigating a hypothesized mechanism underlying the intentional binding effect: Actions may lead to a slowing of subjective time, which would explain the intentional binding effect by postulating a shorter experienced duration between action and effect. This hypothesis predicts that, following action, durations separating any two stimuli would appear subjectively shorter. We tested this hypothesis in the context of visual motion illusions: Two visual stimuli are presented in short succession and if the duration between the stimuli (inter-stimulus interval; ISI) is short, participants tend to perceive motion such that the first stimulus appears to move to the position of the second stimulus. If actions shorten subjective durations, even in visual perception, people should observe motion at longer ISIs when the stimuli follow voluntary action because the two stimuli would be separated by less subjective time. Three experiments confirmed this prediction. An additional experiment showed that verbal estimates of the ISI are also shorter following action. A control experiment suggested that a shift in the ability to prepare for the stimuli, afforded by the participant initiating the stimuli, is an unlikely alternative explanation of the observed results. In Chapter 3 I further investigate whether temporal contiguity of actions and their effects, which is known to impact intentional binding, affects perceptions of visual motion illusions. Two experiments showed that temporal contiguity modulates perceptions of illusory motion in a manner similar to contiguity’s effect on intentional binding.
Together, these results show that actions impact perception of visual motion illusions and suggest that general slowing of subjective time is a plausible mechanism underlying the intentional binding effect.
|
97 |
Human expression and intention via motion analysis: learning, recognition and system implementation. / CUHK electronic theses & dissertations collection / Digital dissertation consortium
January 2004 (has links)
by Ka Keung Caramon Lee. / "March 29, 2004." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (p. 188-210). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
|
98 |
Circularvection and ocular counterrolling in visually induced roll - supine and in weightlessness
Crites, Troy A. January 1980 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1980. / Microfiche copy available in Archives and Barker. / Bibliography: leaves 193-197. / by Troy A. Crites. / M.S.
|
99 |
Segmentation based variational model for accurate optical flow estimation.
January 2009 (has links)
Chen, Jianing. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 47-54). / Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.2 --- Related Work --- p.3
Chapter 1.3 --- Thesis Organization --- p.5
Chapter 2 --- Review on Optical Flow Estimation --- p.6
Chapter 2.1 --- Variational Model --- p.6
Chapter 2.1.1 --- Basic Assumptions and Constraints --- p.6
Chapter 2.1.2 --- More General Energy Functional --- p.9
Chapter 2.2 --- Discontinuity Preserving Techniques --- p.9
Chapter 2.2.1 --- Data Term Robustification --- p.10
Chapter 2.2.2 --- Diffusion Based Regularization --- p.11
Chapter 2.2.3 --- Segmentation --- p.15
Chapter 2.3 --- Chapter Summary --- p.15
Chapter 3 --- Segmentation Based Optical Flow Estimation --- p.17
Chapter 3.1 --- Initial Flow --- p.17
Chapter 3.2 --- Color-Motion Segmentation --- p.19
Chapter 3.3 --- Parametric Flow Estimation Incorporating Segmentation --- p.21
Chapter 3.4 --- Confidence Map Construction --- p.24
Chapter 3.4.1 --- Occlusion Detection --- p.24
Chapter 3.4.2 --- Pixel-wise Motion Coherence --- p.24
Chapter 3.4.3 --- Segment-wise Model Confidence --- p.26
Chapter 3.5 --- Final Combined Variational Model --- p.28
Chapter 3.6 --- Chapter Summary --- p.28
Chapter 4 --- Experiment Results --- p.30
Chapter 4.1 --- Quantitative Evaluation --- p.30
Chapter 4.2 --- Warping Results --- p.34
Chapter 4.3 --- Chapter Summary --- p.35
Chapter 5 --- Application - Single Image Animation --- p.37
Chapter 5.1 --- Introduction --- p.37
Chapter 5.2 --- Approach --- p.38
Chapter 5.2.1 --- Pre-Process Stage --- p.39
Chapter 5.2.2 --- Coordinate Transform --- p.39
Chapter 5.2.3 --- Motion Field Transfer --- p.41
Chapter 5.2.4 --- Motion Editing and Apply --- p.41
Chapter 5.2.5 --- Gradient-domain Composition --- p.42
Chapter 5.3 --- Experiments --- p.43
Chapter 5.3.1 --- Active Motion Transfer --- p.43
Chapter 5.3.2 --- Animate Stationary Temporal Dynamics --- p.44
Chapter 5.4 --- Chapter Summary --- p.45
Chapter 6 --- Conclusion --- p.46
Bibliography --- p.47
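As a reference point for the variational model reviewed in Chapter 2, the classical Horn–Schunck energy functional, the baseline that this line of work extends, can be written as (symbols follow the standard formulation, not necessarily the thesis's own notation):

```latex
E(u, v) = \int_{\Omega} \big( I_x u + I_y v + I_t \big)^2
        + \alpha \left( |\nabla u|^2 + |\nabla v|^2 \right) \, dx \, dy
```

The first (data) term enforces brightness constancy on the flow field \((u, v)\); the second (smoothness) term, weighted by \(\alpha\), regularizes it. The discontinuity-preserving techniques of Chapter 2.2 replace these quadratic penalties with robust functions or anisotropic diffusion so that flow edges at object boundaries are not smoothed away, which is the problem the thesis's segmentation-based model targets.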
|
100 |
IVEE : interesting video event extraction
Paskali, Jeremy C. January 2006 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2006. / Typescript. Includes bibliographical references (leaves 136-138).
|