About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

An analysis of global shape processing using radial frequency contours

Bell, Jason January 2008 (has links)
Encoding the shape of objects within the visual environment is one of the important roles of the visual system. This thesis investigates the proposition that human sensitivity to a broad range of closed-contour shapes is underpinned by multiple shape channels (Loffler, Wilson, & Wilkinson, 2003). Radial frequency (RF) contours are a novel type of stimulus that can be used to represent simple and complex shapes; they are created by sinusoidally modulating the radius of a circle, where the number of cycles of modulation defines the RF number (Wilkinson, Wilson, & Habak, 1998). This thesis uses RF contours to enhance our understanding of the visual processes which support shape perception. The first part of the thesis combines low and high RF components, which Loffler et al. have suggested are detected by separate global and local processes respectively, onto a single contour and shows that, even when combined, the components are detected independently at threshold. The second part of the thesis combines low RF components from across the range where global processing has been demonstrated (up to approximately RF10) onto a single contour in order to test for interactions between them. The resulting data reveal that multiple narrow-band contour shape channels are required to account for performance, and also indicate that these shape channels have inhibitory connections between them. The third part of the thesis examines the local characteristics which are used to represent shape information within these channels. The results show that both the breadth (polar angle subtended) of individual curvature features and their relative angular positions (in relation to object centre) are important for representing RF shapes; however, processing is not tuned for object size or for modulation amplitude. In addition, we show that luminance and contrast cues are effectively combined at the level where these patterns are detected, indicating a single later processing stage is adequate to explain performance for these pattern characteristics. Overall, the findings show that narrow-band shape channels are a useful way to explain sensitivity to a broad range of closed-contour shapes. Modifications to the current RF detection model (Poirier & Wilson, 2006) are required to incorporate inhibitory connections between shape channels and also to accommodate the effective integration of luminance and contrast cues.
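As a point of reference for the construction described in the abstract, a minimal sketch is given below that generates an RF contour from the standard polar-coordinate definition, r(theta) = r0 * [1 + A * sin(omega * theta + phi)], where omega is the RF number; the amplitude, base radius, and phase values used here are illustrative, not parameters taken from the thesis.

```python
import numpy as np
import matplotlib.pyplot as plt

def rf_contour(rf_number, amplitude, base_radius=1.0, phase=0.0, n_points=1024):
    """Radial frequency (RF) contour: a circle whose radius is sinusoidally
    modulated, with rf_number cycles of modulation around 360 degrees."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    r = base_radius * (1.0 + amplitude * np.sin(rf_number * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

# Illustrative examples: a low-frequency RF3 and a higher-frequency RF10 contour.
for rf, amp in [(3, 0.10), (10, 0.05)]:
    x, y = rf_contour(rf, amp)
    plt.plot(x, y, label=f"RF{rf}, amplitude {amp}")
plt.axis("equal")
plt.legend()
plt.show()
```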
42

Perceptual Organization in Vision: Emergent Features in Two-Line Space

January 2011 (has links)
What exactly are the "parts" that make up a whole object, and how and when do they group? The proposed answer hinges on Emergent Features (EFs): features that materialize from the configuration and make the object more discriminable from other objects. EFs are not possessed by any individual part and are processed as quickly as, or more quickly than, the properties of the parts. The present experiments focus on visual discrimination of two-line configurations in an odd-quadrant task. Reaction time (RT) data were obtained and compared with a prediction based on the number of EF differences in the odd quadrant (the more EF differences, the faster the predicted discrimination). The results suggest that the EFs most responsible for the variations in RT might be lateral endpoint offset, intersections, parallelism, connectivity, number of terminators, and pixel count. Future directions include investigating the individual contributions and salience of EFs.
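To make the logic of that prediction concrete, a minimal sketch is given below: it counts EF differences per configuration and checks whether mean RT falls as that count rises. The feature counts and RT values are invented for illustration and are not data from these experiments.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: number of EF differences between the odd quadrant and the
# other three quadrants, and the corresponding mean discrimination RT (ms).
ef_differences = np.array([1, 1, 2, 2, 3, 3, 4, 4])
mean_rt_ms = np.array([930, 905, 860, 872, 810, 798, 760, 775])

# The prediction is a negative relationship: more EF differences, faster RTs.
rho, p_value = spearmanr(ef_differences, mean_rt_ms)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```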
43

Color discrimination of small targets

Highnote, Susan M. January 2003 (has links)
Thesis (Ph. D.)--University of California, San Diego, 2003. Vita. Includes bibliographical references (leaves 371-389).
44

Copy and recall of the Rey Complex figure before and after unilateral frontal- or temporal-lobe excision

Caramanos, Zografos January 1993 (has links)
Copy and recall drawings of the Rey Complex Figure, obtained during the standard clinical testing of patients with well-localized epileptic foci before and after left frontal-, left temporal-, or right temporal-lobe resection, were re-scored, blind as to lesion site, using a standard protocol (18 elements, each scored 0, 1/2, 1, or 2 according to whether it is drawn and placed correctly, for a total out of 36). They were also scored for which, and how many, elements were missing, distorted, displaced, and/or repeated. Contrary to previous findings, no main effects of side or lobe, and no side-by-lobe interactions, were found on copy and recall scores obtained either before or after surgery, and all patients' recall improved equally from pre-operative to follow-up testing. Furthermore, patients' lesion site could not be predicted on the basis of any single measure or across all measures of performance. While group differences had been found on the previously assigned scores, the between-group overlap was almost complete and the original scoring was not done blindly. These results suggest that, despite previous claims, the Rey Complex Figure, a widely used measure of non-verbal memory, is not an effective tool for localizing neural disturbance in temporal- and frontal-lobe epilepsy patients.
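For readers unfamiliar with the scoring scheme mentioned above, a small sketch of the 18-element, 36-point convention is given below; the exact accuracy/placement criteria vary between published protocols, so the rule encoded here and the element ratings are illustrative assumptions, not the re-scoring procedure used in the thesis.

```python
# One common convention for the 18-element Rey Complex Figure score:
# 2   = element drawn accurately and placed correctly
# 1   = drawn accurately but misplaced, or distorted but placed correctly
# 0.5 = distorted and misplaced, but still recognizable
# 0   = absent or unrecognizable
def score_element(present, accurate, placed):
    if not present:
        return 0.0
    if accurate and placed:
        return 2.0
    if accurate or placed:
        return 1.0
    return 0.5

# Hypothetical ratings (present, accurate, placed) for one copy trial.
elements = [(True, True, True)] * 14 + [
    (True, True, False),
    (True, False, True),
    (True, False, False),
    (False, False, False),
]
total = sum(score_element(*e) for e in elements)
print(f"Total copy score: {total} / 36")  # 14*2 + 1 + 1 + 0.5 + 0 = 30.5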
45

Human frontal eye fields and visual search

O'Shea, Jacinta January 2005 (has links)
This thesis tested whether the human frontal eye fields (FEFs) have visuospatial functions that are dissociable from FEF oculomotor functions. Functional magnetic resonance imaging (fMRI) was used to localize the FEFs, and transcranial magnetic stimulation (TMS) was applied in a series of experiments to transiently disrupt information processing in the FEFs. It was shown that TMS applied over the right FEFs degrades subjects' performance on a visual conjunction search task in which eye movements were not required and were not made. A TMS timing protocol subsequently showed that computations in the FEFs that occur between 40 and 80ms after the onset of a visual search array are critical for accurate performance. This suggests that, as in the monkey, the human FEFs may accumulate and use visual evidence from extrastriate cortex, which forms the basis for accurate visuospatial discrimination. A training protocol showed that the right FEFs are no longer critical for accurate visuospatial discrimination performance once a search task has been extensively practised. This study further suggested that the FEFs may have a previously unknown role in the perception of left-right rotated shapes. A study on feature and spatial priming indicated that these two phenomena have distinct causal mechanisms. The left FEFs appear to access a spatial memory signal during the process of saccade programming. When TMS is applied during this period, the spatial priming benefit is abolished. Altogether, this thesis presents evidence that visuospatial and oculomotor functions can be dissociated in the human FEFs. The data on timing and the effects of learning correspond well with results reported in monkeys. The priming experiment offers the first evidence that the left FEFs are crucial for spatial priming, while the learning study suggests the novel hypothesis that the FEFs are crucial for left-right rotated shape perception.
46

Pencils & Erasers: Interactions between motion and spatial coding in human vision

Thomas Wallis Unknown Date (has links)
Visual information about the form of an object and its movement in the world can be processed independently. These processing streams must be combined, since our visual experience is of a unitary stream of information. Studies of interactions between motion and form processing can therefore provide insight into how this combination occurs. The present thesis explored two such interactions between motion and spatial coding in human vision. The title of the thesis, “Pencils and Erasers”, serves as an analogy for the thesis’ principal findings. I investigate one situation in which moving patterns can impair the visibility of stationary forms, and another in which the visibility of form is enhanced by motion. In motion-induced blindness (MIB; Bonneh, Cooperman, & Sagi, 2001), salient stationary objects can seem to disappear intermittently from awareness when surrounded by moving features. Static forms proximate to motion can be “erased” from awareness. The thesis contributes to the answer to a simple question: why does MIB happen? My interpretation of this phenomenon emphasises the possible functional benefit of such an eraser around moving form: to suppress artifacts of visual processing from awareness. Chapter 2 demonstrates that motion per se is not required for MIB (Wallis & Arnold, 2008). MIB depends on the rate of luminance change over time, rather than the velocity (or change in position) of the inducing mask. MIB can therefore be characterised as a temporal inhibition, which does not critically depend on direction selective (motion) mechanisms. A similar mechanism of temporal inhibition that does not depend on motion is that which suppresses motion streaks from awareness. The human visual system integrates information over time. Consequently, moving image features produce smeared signals, or “motion streaks”, much like photographing a moving object using a slow shutter speed. We do not experience motion streaks as much as might be expected as they are suppressed from awareness in most circumstances. Evidence suggests that this suppression is enacted by a process of local temporal inhibition, and does not depend on motion mechanisms – much like MIB. These similarities led us to propose that MIB and motion streak suppression might reflect the same mechanism. In the case of MIB, physically present static targets may not be differentiated from signals arising from within the visual system, such as a motion streak. Chapter 3 of the thesis presents four converging lines of evidence in support of this hypothesis (Wallis & Arnold, 2009). The link between MIB and a mechanism of temporal inhibition that serves to suppress motion streaks is further strengthened by a recent report from our laboratory of a new visual illusion, Spatio-Temporal Rivalry (STR; Arnold, Erskine, Roseboom, & Wallis, in press), that is included in the present thesis as an appendix. Why does MIB occur? I suggest that at its base level, MIB reflects the activity of this simple visual mechanism of temporal inhibition (see Gorea & Caetta, 2009). This mechanism might usually serve a functional role in everyday vision: for example, by suppressing the perception of motion streaks. The second motion and form interaction investigated in the thesis represents a situation in which motion can improve form sensitivity. In some situations, observing a moving pattern can objectively improve sensitivity to that pattern after the offset of motion. The visual system can “pencil in”, or improve the visibility of, subsequent visual input. 
When a form defined by its motion relative to the background ceases to move, it does not seem to instantly disappear. Instead, the form is perceived to remain segregated from the background for a short period, before slowly fading. It is possible that this percept represents a consequence of bias or expectation, not a modulation of static form visibility by motion. Contrary to this possibility, Wallis, Williams and Arnold (2009) demonstrate that alignment sensitivity to spatial forms is improved by pre-exposure to moving forms (Chapter 4). I suggest that the subjective persistence of forms after motion offset and this spatial facilitation may represent two consequences of the same signal. The experiments herein address one situation in which moving patterns can impair the visibility of stationary forms and one in which moving patterns enhance the visibility of stationary forms. Therefore, the present thesis characterises two interactions between form and motion processing in human vision. These mechanisms of “pencil” and “eraser” facilitate the clear perception of objects in our visual world.
47

The recovery of 3-D structure using visual texture patterns

Loh, Angeline M. January 2006 (has links)
[Truncated abstract] One common task in Computer Vision is the estimation of three-dimensional surface shape from two-dimensional images. This task is important as a precursor to higher-level tasks such as object recognition - since the shape of an object gives clues to what the object is - and object modelling for graphics. Many visual cues have been suggested in the literature to provide shape information, including the shading of an object, its occluding contours (the outline of the object that slants away from the viewer) and its appearance from two or more views. If the image exhibits a significant amount of texture, then this too may be used as a shape cue. Here, ‘texture’ is taken to mean the pattern on the surface of the object, such as the dots on a pear, or the tartan pattern on a tablecloth. This problem of estimating the shape of an object based on its texture is referred to as shape-from-texture and it is the subject of this thesis . . . The work in this thesis is likely to have an impact in a number of ways. The second shape-from-texture algorithm provides one of the most general solutions to the problem. On the other hand, if the assumptions of the first shape-from-texture algorithm are met, this algorithm provides an extremely usable method, in that users should be able to input images of textured objects and click on the frontal texture to quickly reconstruct a fairly good estimation of the surface. Lastly, the algorithm for estimating the transformation between textures can be used as a part of many shape-from-texture algorithms, as well as being useful in other areas of Computer Vision. This thesis gives two examples of other applications for the method: re-texturing an object and placing objects in a scene.
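The abstract notes that estimating the transformation between texture patches is a building block shared by many shape-from-texture methods. The sketch below shows one generic way such a transformation might be estimated, using matched ORB keypoints and a RANSAC-fitted affine transform in OpenCV; this is only an illustration of the kind of computation involved, not the algorithm developed in the thesis.

```python
import cv2
import numpy as np

def estimate_affine_between_patches(patch_a, patch_b, min_matches=3):
    """Estimate the 2x3 affine transform mapping patch_a onto patch_b from
    matched ORB keypoints. Returns None if too few matches are found."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(patch_a, None)
    kp_b, des_b = orb.detectAndCompute(patch_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    affine, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return affine
```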
48

An investigation of visuospatial orientation and mental rotation in patients with Alzheimer's disease and patients with Huntington's disease

Lineweaver, Tara T. January 1999 (has links)
Thesis (Ph. D.)--University of California, San Diego and San Diego State University, 1999. Vita. Includes bibliographical references (leaves 105-113).
49

A structural skeleton based shape indexing approach for vector images

Song, Mingkui. January 2009 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2009. Includes bibliographical references.
50

Force plus graphics is not equal to vision plus haptics: towards usable haptic environments

Kirkpatrick, Arthur Edward. January 2000 (has links) (PDF)
Thesis (Ph. D.)--University of Oregon, 2000. Title from title page. Extent of document: xiii, 207 p. : ill. Includes vita and abstract. Includes bibliographical references (p. 197-207).
