1

A comprehensive study of referring expressions in ASL

Czubek, Todd Alan 18 March 2018
Substantial research has examined how linguistic structures are realized in the visual/spatial modality. However, we know less about linguistic pragmatics in signed languages, particularly the functioning of referring expressions (REs). Recent research has explored how REs are deployed in signed languages, but much remains to be learned. Study 1 explores the inventory and workings of REs in American Sign Language by seeking to replicate and build upon Frederiksen & Mayberry (2016). Following Ariel, F&M propose an inventory of REs in ASL ranked according to the typical accessibility of the referents each RE type signals. Study 1 reproduced their results using more complex narratives and including a wider range of REs in various syntactic roles. Using Toole's (1997) accessibility rating protocol, we calculated average accessibility ratings for each RE type, thus making possible statistical analyses that show more precisely which REs differ significantly in average accessibility. Further, several RE types that F&M had collapsed are shown to be distinct. Finally, we find general similarities between allocations of REs in ASL and in spoken English, based on 6 matched narratives produced by native English speakers.

Study 2 explores a previously unexamined set of questions about concurrently occurring REs: collections of REs produced simultaneously. It compares isolated REs that occur in a linear fashion, similar to spoken language grammars, with co-occurring REs, signaling multiple referents simultaneously (termed here constellations). This study asks whether REs in constellations have pragmatic properties different from those of isolated/linear REs. Statistical evidence is presented that some categories of REs do differ significantly in the average accessibility values of their referents, when compared across linear versus concurrent configurations.

Study 3 examines whether the proportions of various RE categories used by native ASL signers vary according to the recipient's familiarity with the narrative. Do ASL narratives designed to be maximally explicit because of low recipient familiarity demonstrate distinct RE allocations? In this sample of 34 narratives, there is no statistically significant difference in RE use attributable to recipient familiarity. These findings have important implications for understanding the impact of modality on accessibility, the use of REs in ASL, and visual processing.
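The abstract does not spell out the analysis, but the comparison Study 2 describes can be illustrated with a short sketch: group per-token accessibility ratings by RE type and configuration, then test whether the group means differ. Everything below is an illustrative assumption — the data layout, the example RE type, and the rating values are hypothetical, and Welch's t-test stands in for whatever tests the dissertation actually used.

```python
# Minimal sketch (not the study's actual analysis): compare mean referent
# accessibility for each RE type across linear vs. constellation
# configurations. Ratings follow a Toole-style integer protocol (hypothetical
# values; higher = more accessible referent).
from collections import defaultdict
from scipy.stats import ttest_ind

# Each record: (RE type, configuration, accessibility rating of its referent)
ratings = [
    ("pronoun", "linear", 4), ("pronoun", "linear", 3),
    ("pronoun", "constellation", 2), ("pronoun", "constellation", 1),
    # ... more coded tokens from the narratives ...
]

by_group = defaultdict(list)
for re_type, config, score in ratings:
    by_group[(re_type, config)].append(score)

for re_type in sorted({rt for rt, _ in by_group}):
    linear = by_group[(re_type, "linear")]
    constel = by_group[(re_type, "constellation")]
    # Welch's t-test: does mean accessibility differ between configurations?
    t, p = ttest_ind(linear, constel, equal_var=False)
    print(f"{re_type}: linear mean={sum(linear)/len(linear):.2f}, "
          f"constellation mean={sum(constel)/len(constel):.2f}, p={p:.3f}")
```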
2

Noun phrase generation for situated dialogs

Stoia, Laura Cristina 10 December 2007
No description available.
3

Advances in Visibility Modelling in Urban Environments to Support Location Based Services

Bartie, Philip James January 2011
People describe and explore space with a strong emphasis on the visual senses, yet modelling the field of view has received little attention within the realm of Location Based Services (LBS), in part due to the lack of useful data. Advances in data capture, such as Light Detection and Ranging (LiDAR), provide new opportunities to build digital city models and expand the range of applications which use visibility analysis. This thesis capitalises on these advances by developing a visibility model to support a number of innovative LBS functions in an urban region. Particular focus is given to the visibility model's supporting role in the formation of referring expressions, the descriptive phrases used to identify objects in a scene, which are relevant when delivering spatial information to the user through a speech-based interface.

Speech interfaces are particularly useful to mobile users with restricted screen viewing opportunities, such as motorists requiring navigational support, and for a wider range of tasks including delivering information to urban pedestrians. As speech recognition accuracies improve, new interaction opportunities will allow users to relate to their surroundings and retrieve information on buildings in view through spoken descriptions. The papers presented in this thesis work towards this goal by translating spatial information into a form which matches the user's perspective and can be delivered over a speech interface.

The foundation is the development of a new visual exposure model for use in urban areas, able to calculate a number of metrics about Features of Interest (FOIs), including the façade area visible and the percentage on the skyline. The impact of urban vegetation as a semi-permeable visual barrier is also considered, along with how visual exposure calculations may be adjusted to accommodate under-canopy and through-canopy views. The model may be used by pedestrian LBSs, or applied to vehicle navigation tasks to determine how much of a route ahead is in view for a car driver, identifying the sections with limited visibility or the best places for an overtaking manoeuvre.

Delivering information via a speech interface requires FOI positions to be defined according to projective space relating to the user's viewpoint, rather than topological or metric space, and this is handled using a new egocentric model. Finally, descriptions of the FOIs are considered, including a method to automatically collect façade colours by excluding foreground objects, and a model to determine the most appropriate description to direct the LBS user's attention to a FOI in view.
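The abstract does not give implementation detail, but the core of any such visual exposure calculation is a line-of-sight test over a digital surface model (DSM): sample the sightline between the observer and a façade point and check whether any intervening surface cell rises above it. The sketch below is a minimal illustration under assumed inputs (a 2D height grid, grid-cell coordinates, an evenly sampled ray), not the thesis model — it omits vegetation permeability, façade-area and skyline metrics, and real LiDAR handling.

```python
# Minimal line-of-sight sketch over a DSM stored as a 2D grid of heights
# (metres). Not the thesis model: vegetation permeability, façade-area and
# skyline-percentage metrics are omitted.
import numpy as np

def line_of_sight(dsm, obs_rc, obs_h, tgt_rc, tgt_h, samples=200):
    """Return True if the sightline from observer to target clears the DSM.

    dsm    -- 2D array of surface heights, one value per grid cell
    obs_rc -- (row, col) of the observer; obs_h = eye height above ground
    tgt_rc -- (row, col) of the target;   tgt_h = target height above ground
    """
    r0, c0 = obs_rc
    r1, c1 = tgt_rc
    z0 = dsm[r0, c0] + obs_h                 # absolute eye elevation
    z1 = dsm[r1, c1] + tgt_h                 # absolute target elevation
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:   # skip both endpoints
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight_z = z0 + t * (z1 - z0)         # sightline elevation here
        if dsm[r, c] > sight_z:              # surface blocks the view
            return False
    return True

# Toy example: a 5 m wall between observer and target blocks the view
dsm = np.zeros((50, 50))
dsm[25, :] = 5.0
print(line_of_sight(dsm, (10, 10), 1.7, (40, 40), 2.0))  # False
```

Repeating such a test for many sample points on a building façade would yield the kind of per-FOI visible-area metric the abstract describes.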