91

Humanizing robots? The influence of appearance and status on social perceptions of robots

Mays, Kate Keener 14 January 2021
Social robots are a lesser-known technology with uncertain but seemingly very powerful potential, one that for decades has been portrayed in cultural artifacts as a threat to human primacy. Research on people's relationships with non-robotic technology, however, indicates that people will treat robots socially and assimilate them into their lives in ways that may disrupt existing norms but still fulfill a fundamental human need. Through the theoretical lenses of media equation and apparatgeist, this dissertation examines facets of robot humanization, defined as how people think of robots as social and human-like entities through perceptions of liking, human-likeness, and rights entitlement. In a 2 (gender) x 2 (physical humanness) x 3 (status) between-subjects online experiment, this dissertation explores the influence of fixed technological traits (the robot's gender, physical humanness, and described status) and participants' individual differences on humanization perceptions. Findings show that the robots' features mattered less than participants' individual traits, which explained the most variance in humanizing perceptions of social robots. Of those, participants' prior robot exposure (both real-life and mediated) and efficacy traits were the strongest predictors of robot liking, perceived human-likeness, and perceived rights entitlement. Specifically, those with more real-life exposure and who perceived themselves as more technologically competent were more likely to humanize robots, while those with higher internal loci of control and negative mediated views of robots were less inclined to do so. Theoretically, these findings suggest that technological affordances may matter less to people's humanizing perceptions than their ontological understanding of social robots as a category. Looking forward, the findings indicate an opportunity to set precedents in social robot design now that are prosocial and reflective of the world people strive for and want to inhabit in the future.
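
The 2 x 2 x 3 between-subjects design described above fully crosses the three robot traits into twelve conditions. As an illustrative sketch only (the dissertation specifies the factor structure, but the level labels below are hypothetical stand-ins), the crossing and random assignment look like this:

```python
import itertools
import random

# Hypothetical level labels; the dissertation specifies a 2 x 2 x 3 structure,
# but these particular labels are illustrative assumptions.
factors = {
    "gender": ["male-presenting", "female-presenting"],
    "physical_humanness": ["machine-like", "human-like"],
    "status": ["subordinate", "peer", "supervisor"],
}

# The full crossing yields the 12 between-subjects cells.
cells = list(itertools.product(*factors.values()))
print(f"{len(cells)} experimental conditions")

def assign(participant_id: int) -> dict:
    """Randomly assign one participant to a single cell (between-subjects)."""
    rng = random.Random(participant_id)  # reproducible per-participant draw
    cell = rng.choice(cells)
    return dict(zip(factors.keys(), cell))

print(assign(42))
```
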
92

Assistive Navigation Technology for Visually Impaired Individuals

Norouzi Kandalan, Roya 08 1900
Sight is essential to our daily tasks. For centuries, visually impaired individuals have used compensatory senses to navigate independently, and technology can minimize some of the remaining challenges. Assistive navigation technologies facilitate pathfinding and path tracing in indoor settings, and additional modules can warn not only about obstacles on the ground but also about hanging objects. In this work, we explore new methods to help visually impaired individuals navigate independently in indoor scenarios. We employed a location estimation algorithm based on the fingerprinting method to estimate the user's initial location and mitigated the estimation error with a particle filter. The shortest path was calculated with the A* algorithm. To provide the user with an accident-free experience, we employed an obstacle avoidance algorithm capable of warning the user about potential hazards. Finally, to provide an effective means of communication with the user, we employed text-to-speech and speech recognition algorithms. The main contribution of this work is to glue these modules together efficiently and affordably.
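
The pipeline combines fingerprint-based localization, particle-filter error mitigation, and A* pathfinding. As a minimal sketch of the pathfinding step alone, assuming a 4-connected occupancy grid with a Manhattan-distance heuristic (the thesis's actual map representation is not specified here):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid; grid[r][c] == 1 is blocked."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:  # walk the parent chain back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall in row 1
```
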
93

Improving Accessibility of Fully Automated Driving Systems for Blind and Low Vision Riders

Bloomquist, Eric Tait 08 August 2023
For people who are blind or have low vision (BLV), physical barriers and negative experiences related to current transportation options can diminish quality of life. The emergence of levels 4 – 5 automated driving system-dedicated vehicles (L4+ ADSs), which will not require human operators to provide any input into the dynamic driving task, could empower the BLV community by providing an independent means of transportation. Yet the BLV community has concerns that its needs are not being adequately considered by those currently developing L4+ ADSs, which would result in this technology being inaccessible to populations it would otherwise greatly benefit. The current study sought to address this gap in the literature by explicitly evaluating the information and interactions that BLV riders will require from L4+ ADSs. Specifically, we collected focus group and empirical data across three studies on BLV riders' information and interaction requirements for L4+ ADSs across expected and unexpected driving scenarios as well as pick-up and drop-off (PUDO) tasks. Through focus groups with sighted (n = 11) and BLV participants (n = 11; Study 1), we identified similarities and differences between sighted and BLV participants in terms of their user needs for L4+ ADSs across five challenging driving scenarios. Next, we examined BLV participants' (n = 13; Study 2) information requests in real-world settings to better understand BLV riders' needs during a simulated L4+ ADS experience. Our findings show that BLV riders want information that helps with (a) orienting to important objects in the environment during PUDO, (b) determining their location while riding in the ADS, and (c) understanding the ADS's actions. Finally, we developed an HMI prototype using BLV riders' feedback from Studies 1 and 2 and had BLV participants engage with it during a simulated L4+ ADS trip (n = 12; Study 3). Our results suggest that BLV riders value information about nearby landmarks in familiar and unfamiliar areas, as well as explanations for the ADS's actions during ordinary and unexpected scenarios. Additionally, BLV riders need information about required walking distances and the presence of tripping hazards in order to select a drop-off location. Taken together, our studies show that BLV riders have specific requirements that an L4+ ADS must meet for it to be an accessible means of transportation. In light of these findings, we generated 28 guidelines and 44 recommendations that designers could use to improve the accessibility of L4+ ADSs for BLV riders. / Doctor of Philosophy / When using current transportation options, individuals who are blind or have low vision (BLV) often encounter physical barriers and negative experiences, which can limit their ability to travel independently and diminish their overall quality of life. However, future vehicles equipped with levels 4 – 5 automated driving systems (L4+ ADSs) will offer transportation that requires no input from human operators and thus could serve as an independent means of transportation for the BLV community. Unfortunately, the BLV community has concerns that its needs are not being adequately considered by those currently developing L4+ ADSs, which would result in this technology being inaccessible to populations it would otherwise greatly benefit. The current work sought to address this gap in the literature by evaluating the information and interactions that BLV riders will require from L4+ ADSs.
We conducted three studies to collect data on BLV riders' information and interaction requirements for L4+ ADSs across a variety of driving scenarios as well as tasks related to being picked up and dropped off by an L4+ ADS. First, through focus groups with sighted and BLV participants, we identified similarities and differences between sighted and BLV participants' user needs for L4+ ADSs across five challenging driving scenarios. Next, to better understand BLV riders' needs, we had BLV participants indicate when they would want information during a simulated L4+ ADS ride-hailing experience in real-world settings. Our findings show that BLV riders want information that helps with (a) orienting to important objects in the environment during pick-up and drop-off, (b) determining their location during the trip, and (c) understanding the reasons for the ADS's actions. Finally, using BLV riders' feedback, we developed an HMI prototype and had BLV participants engage with it during a simulated L4+ ADS trip. Our results suggest that BLV riders value information about nearby landmarks in both familiar and unfamiliar areas, as well as explanations for the ADS's actions during common (e.g., stopping at a stop sign) and unexpected driving scenarios (e.g., a sudden swerve). Additionally, when being dropped off, BLV riders need information about required walking distances and the presence of tripping hazards in order to select a desirable drop-off location. Taken together, our studies show that BLV riders have specific requirements that an L4+ ADS must meet for it to be an accessible means of transportation. In light of these findings, we generated a set of guidelines and recommendations that designers can use to improve the accessibility of L4+ ADSs for BLV riders.
94

An AI-based collaborative Robot System for Technical Education

Schubert, Tobias, Heßlinger, Sebastian, Dwarnicak, Alexander 12 February 2024
In this paper, a cobot system is presented that extends a Universal Robot with artificial intelligence (i.e., machine learning techniques) to allow for safe human-robot collaboration, one of the main technologies in Industry 4.0 and one that is currently significantly changing the shop floor of manufacturing companies. Typically, such cobots are equipped with a camera so they can dynamically adapt to new situations and to actions carried out by the worker sharing the workspace with the robot. Obviously, switching from traditional industrial robots (which act completely isolated from humans) to smart robots also requires a change in the skills and knowledge workers need to control, manage, and interact with such cobot systems. The main goal of this demonstrator is therefore to develop a hardware and software environment that enables a variety of training scenarios to familiarize trainees, employees, and students with the main technical aspects of such human-robot interaction. Besides hardware- and software-related aspects, the paper also briefly addresses the learning content, which covers, on the one hand, the basics of robotics and machine-learning-based image processing and, on the other hand, the interaction of the various components to form a functional overall system.
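
As an illustrative fragment of the camera-based awareness such a cobot needs (not the authors' implementation), the sketch below uses OpenCV's stock HOG person detector to gate robot motion. The camera and robot objects, including the stop()/resume() calls, are hypothetical stand-ins for whatever cobot API is actually used:

```python
import cv2

# Stock OpenCV pedestrian detector; a production cobot would use a far more
# robust model, but the supervisory control pattern is the same.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def person_in_view(frame) -> bool:
    """Return True if the detector finds at least one person in the frame."""
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects) > 0

def supervise(camera, robot):
    """Pause robot motion while a person is visible in the shared workspace."""
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        if person_in_view(frame):
            robot.stop()    # hypothetical cobot API call
        else:
            robot.resume()  # hypothetical cobot API call
```
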
95

The computational face for facial emotion analysis: Computer based emotion analysis from the face

Al-dahoud, Ahmad January 2018
Facial expressions are considered the most revealing way of understanding a person's psychological state during face-to-face communication. It is believed that a more natural interaction between humans and machines can be achieved through a detailed understanding of the different facial expressions, mirroring the way humans communicate with each other. In this research, we study different aspects of facial emotion detection and analysis, and investigate possible hidden identity clues within facial expressions. We examine a deeper aspect of facial expressions by attempting to identify gender and human identity - which can be considered a form of emotional biometric - using only the dynamic characteristics of smile expressions. Further, we present a statistical model for analysing the relationship between facial features and Duchenne (genuine) and non-Duchenne (posed) smiles, and identify that the expressions around the eyes contain features that discriminate between the two. Our results indicate that facial expressions can be identified through facial-movement analysis models, with an accuracy of 86% for classifying the six universal facial expressions and 94% for classifying 18 common facial action units. Further, we successfully identify gender using only the dynamic characteristics of the smile expression, obtaining an 86% classification rate. Likewise, we present a framework for studying the possibility of using the smile as a biometric, showing that the human smile is unique and stable. / Al-Zaytoonah University
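
As a hedged sketch of what "dynamic characteristics of the smile" could mean computationally (the thesis's exact features are not reproduced here), the snippet below derives simple kinematic features from per-frame lip-corner landmark trajectories, which are assumed to come from any off-the-shelf face landmark detector:

```python
import numpy as np

def smile_dynamics(left_corner, right_corner, fps=30.0):
    """Kinematic features from per-frame lip-corner landmark trajectories.

    left_corner, right_corner: arrays of shape (n_frames, 2) holding (x, y)
    landmark positions, assumed to come from a prior landmark-detection step.
    """
    width = np.linalg.norm(right_corner - left_corner, axis=1)  # mouth width per frame
    velocity = np.gradient(width) * fps                         # expansion speed per second
    return {
        "amplitude": float(width.max() - width.min()),
        "mean_speed": float(np.abs(velocity).mean()),
        "peak_speed": float(np.abs(velocity).max()),
        "duration_s": len(width) / fps,
    }

# Synthetic trajectory purely for demonstration
t = np.linspace(0, 2, 60)
left = np.stack([30 - 5 * np.sin(np.pi * t / 2), np.full_like(t, 80)], axis=1)
right = np.stack([70 + 5 * np.sin(np.pi * t / 2), np.full_like(t, 80)], axis=1)
print(smile_dynamics(left, right))
```
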
96

Autonomous Vehicle & Pedestrian Interaction

Uji, Terkuma January 2022
This degree project investigates social and technological aspects of human-vehicle interaction with regard to driverless autonomous utilitarian vehicles in urban contexts and proposes the use of LED lighting as an external Human-Machine Interface for vehicle-to-pedestrian signaling.
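
A minimal sketch of what such an LED-based external HMI mapping could look like, with entirely hypothetical vehicle states and light patterns (the project's actual signal vocabulary is not reproduced here):

```python
from enum import Enum, auto

class VehicleIntent(Enum):
    CRUISING = auto()
    YIELDING = auto()   # slowing to let a pedestrian cross
    WAITING = auto()    # stopped; pedestrian may cross
    STARTING = auto()   # about to move off

# Hypothetical LED patterns: (color, animation)
LED_SIGNALS = {
    VehicleIntent.CRUISING: ("white", "steady"),
    VehicleIntent.YIELDING: ("cyan", "slow_pulse"),
    VehicleIntent.WAITING:  ("cyan", "steady"),
    VehicleIntent.STARTING: ("amber", "fast_blink"),
}

def signal_for(intent: VehicleIntent) -> tuple:
    """Look up the external light pattern for the vehicle's current intent."""
    return LED_SIGNALS[intent]

print(signal_for(VehicleIntent.YIELDING))  # ('cyan', 'slow_pulse')
```
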
97

The development of low level communication interfaces for generic work cell control

Ridgway, Angela Nadine 10 October 2009
As the desire for factory automation has increased, so has the need to integrate machinery within the factory. More specifically, this integration is gaining importance in the area of manufacturing work cells. Many ideas exist about what functions a cell controller should perform and how it should interact with its environment. The functions utilized by the cell controller may vary depending on the type of machinery, but similar tasks are usually performed. The complexity of the cell controller increases due to differences in functional capability caused by machine intelligence or vendors' specifications. The objective of this research was to create a framework to follow when developing low-level, machine-specific cell control communications. The framework assists the user in defining and structuring the information and functions associated with a particular device and operating environment, and acts as a guide in the development of low-level base routines that interface with various classes of factory devices. It is impossible to create a completely generic base that will interact with every device; it is, however, possible to develop such a base following a structured format that facilitates generic work cell control. / Master of Science
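
One way to read the proposed framework in code (an interpretation, not the thesis's own implementation) is as a thin abstract interface that each class of factory device implements, so the cell controller invokes the same base routines regardless of the device behind them:

```python
from abc import ABC, abstractmethod

class DeviceInterface(ABC):
    """Generic low-level communication routines every cell device must expose."""

    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def send_command(self, command: str) -> str:
        """Send one device command and return the raw response."""

    @abstractmethod
    def status(self) -> str: ...

class SerialCNC(DeviceInterface):
    """Hypothetical CNC machine reached over a serial line."""

    def __init__(self, port: str):
        self.port = port
        self._connected = False

    def connect(self) -> None:
        # A real implementation would open the serial port here.
        self._connected = True

    def send_command(self, command: str) -> str:
        assert self._connected, "connect() first"
        return f"ACK {command}"  # placeholder for the device's real reply

    def status(self) -> str:
        return "IDLE" if self._connected else "OFFLINE"

# The cell controller only ever sees DeviceInterface:
cnc = SerialCNC("/dev/ttyS0")
cnc.connect()
print(cnc.send_command("HOME"), cnc.status())
```
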
98

Isometric forces transmitted by the digits: data collection using a standardized protocol

Williams, Vicki Higginbotham January 1988
Data collection on isometric forces exerted by the digits is a virtually untapped research area, yet such data would prove particularly useful in hand-tool and control design as well as in medical evaluation. A standardized protocol is necessary if a sound, useful database is to be built. This study developed such a protocol, and data were collected using it. The study also showed that occupational level (defined by the tools and controls used) and gender both had significant effects on certain strength exertions of the digits; the appropriate data must therefore be collected according to the intended use and user population. Regression equations were produced that predict the strength exertions from commonly available anthropometric measurements. Although some particular exertions were not well predicted, the potential of such prediction was verified. / Master of Science
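
As a sketch of the kind of prediction such regression equations perform, using made-up coefficients and synthetic data since the actual equations are not reproduced here, ordinary least squares can relate digit strength to commonly available anthropometric measures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for commonly available anthropometric measurements:
# hand length (cm), hand breadth (cm), and a gender indicator.
n = 50
X = np.column_stack([
    rng.normal(18.5, 1.2, n),    # hand length
    rng.normal(8.5, 0.6, n),     # hand breadth
    rng.integers(0, 2, n),       # gender (0/1)
    np.ones(n),                  # intercept
])
true_beta = np.array([2.0, 3.5, 8.0, -20.0])  # arbitrary illustrative values
y = X @ true_beta + rng.normal(0, 2.0, n)     # digit strength (N), with noise

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares fit
print("fitted coefficients:", beta.round(2))

new_hand = np.array([19.0, 9.0, 1, 1.0])      # predict for one new person
print("predicted strength:", float(new_hand @ beta))
```
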
99

Interactively Guiding Semi-Supervised Clustering via Attribute-based Explanations

Lad, Shrenik 01 July 2015
Unsupervised image clustering is a challenging and often ill-posed problem. Existing image descriptors fail to capture the clustering criterion well, and more importantly, the criterion itself may depend on (unknown) user preferences. Semi-supervised approaches such as distance metric learning and constrained clustering thus leverage user-provided annotations indicating which pairs of images belong to the same cluster (must-link) and which ones do not (cannot-link). These approaches require many such constraints before achieving good clustering performance because each constraint only provides weak cues about the desired clustering. In this work, we propose to use image attributes as a modality for the user to provide more informative cues. In particular, the clustering algorithm iteratively and actively queries a user with an image pair. Instead of the user simply providing a must-link/cannot-link constraint for the pair, the user also provides an attribute-based reasoning, e.g., "these two images are similar because both are natural and have still water" or "these two people are dissimilar because one is way older than the other". Under the guidance of this explanation, and equipped with attribute predictors, many additional constraints are automatically generated. We demonstrate the effectiveness of our approach by incorporating the proposed attribute-based explanations in three standard semi-supervised clustering algorithms: Constrained K-Means, MPCK-Means, and Spectral Clustering, on three domains: scenes, shoes, and faces, using both binary and relative attributes. / Master of Science
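
A hedged sketch of the constraint-generation idea, assuming precomputed attribute-predictor scores per image (the scores and threshold below are illustrative stand-ins): when the user explains a must-link in terms of two attributes, additional must-links are generated among images that score confidently on both.

```python
import numpy as np

def propagate_must_links(attr_scores, explained_attrs, threshold=0.8):
    """Generate extra must-link pairs from an attribute-based explanation.

    attr_scores: (n_images, n_attributes) array of attribute predictor outputs
    explained_attrs: indices of the attributes named by the user, e.g. for
    "similar because both are natural and have still water"
    """
    # Images whose named attributes are all confidently present
    mask = (attr_scores[:, explained_attrs] > threshold).all(axis=1)
    idx = np.flatnonzero(mask)
    # Every pair among them becomes an automatically generated must-link
    return [(int(i), int(j)) for k, i in enumerate(idx) for j in idx[k + 1:]]

scores = np.array([
    [0.90, 0.85, 0.10],   # image 0: natural, still water
    [0.95, 0.90, 0.20],   # image 1: natural, still water
    [0.30, 0.20, 0.90],   # image 2: neither
])
print(propagate_must_links(scores, explained_attrs=[0, 1]))  # [(0, 1)]
```
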
100

Simulation Studies and Benchmarking of Synthetic Voice Assistant Based Human-Machine Teams (HMT)

Damacharla, Praveen Lakshmi Venkata Naga January 2018
No description available.
