1.
Robust dynamic orientation sensing using accelerometers: model-based methods for head tracking in AR: a thesis presented for the degree of Doctor of Philosophy in Mechanical Engineering at the University of Canterbury, Christchurch, New Zealand / Keir, Matthew Stuart. 2008.
Thesis (Ph. D.)--University of Canterbury, 2008. / Typescript (photocopy). "24 September 2008." Includes bibliographical references (p. [137]-143). Also available via the World Wide Web.
2.
Detection and intention prediction of pedestrians in zebra crossings / Varytimidis, Dimitrios. January 2018.
The behavior of pedestrians who are moving or standing still close to the street can be one of the most significant indicators of a pedestrian's imminent actions. Being able to recognize the activity of a pedestrian can reveal significant information about that pedestrian's crossing intentions. The scope of this thesis is therefore to investigate ways and methods to improve understanding of pedestrian activity, in particular by detecting pedestrians' motion and head orientation in relation to the surrounding traffic. Different features and methods are examined, used, and assessed according to their contribution to distinguishing between different actions. The feature extraction methods considered are Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Convolutional Neural Networks (CNNs). The features are extracted from still images of pedestrians in the Joint Attention for Autonomous Driving (JAAD) dataset; the images are taken from video frames depicting pedestrians walking next to the road or crossing the road. Based on these features, a number of Machine Learning (ML) techniques (CNNs, Artificial Neural Networks, Support Vector Machines, K-Nearest Neighbors, and Decision Trees) are used to predict the head orientation and motion of the pedestrian, as well as the pedestrian's crossing intention. The work is divided into three parts. The first combines feature extraction and ML to predict whether a pedestrian is walking or not. The second identifies the pedestrian's head orientation, in terms of whether he or she is looking at the vehicle or not, again by combining feature extraction and ML. The final task combines these two measures in an ML-based classifier trained to predict the pedestrian's crossing intention and action. In addition to the pedestrian's behavior, features describing the local environment were added as input signals for the intention classifier, for instance the presence of zebra markings in the street, the location of the scene, and the weather conditions.
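As a rough illustration of one stage of the kind of pipeline this abstract describes, the sketch below combines HOG feature extraction with an SVM classifier for the walking/standing decision, in Python using scikit-image and scikit-learn. The crop size, HOG settings, kernel, and function names are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_features(image_gray):
    """Extract a HOG descriptor from a grayscale pedestrian crop."""
    crop = resize(image_gray, (128, 64))  # canonical pedestrian window (assumed size)
    return hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_action_classifier(train_crops, train_labels):
    """train_crops: grayscale pedestrian images; train_labels: 1 = walking, 0 = standing."""
    X = np.array([hog_features(img) for img in train_crops])
    clf = SVC(kernel='rbf', C=1.0)  # kernel and C are illustrative choices
    clf.fit(X, train_labels)
    return clf
```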
3.
Measuring the Differences Between Head and Gaze Orientation in Virtual Reality / Qiu, Yuchen. January 2017.
With the spread of virtual reality, VR headsets with embedded eye trackers are gradually becoming a trend; companies such as Fove have already released eye-tracking VR headsets. However, the relatively low frame rate of the eye tracker in a VR HMD (e.g. 90 fps) makes tracking unstable and computationally expensive. Understanding the relation between gaze direction and head direction would be helpful, for example, to predict and compensate eye tracking using head tracking. In this research, a Unity project consisting of a moving object with variable parameters was created to examine whether a correlation exists between players' head direction and gaze direction during smooth-pursuit eye movement. Furthermore, the object parameters shape, color, distance, speed, and horizontal movement range were tested to explore whether they elicit statistically significant differences in gaze prediction. Results revealed that while smoothly pursuing a moving object with the gaze, the horizontal and vertical components of head direction and gaze direction are each linearly correlated, and formulas expressing these relations were derived via linear regression. As for the object parameters, significant effects were detected for all five parameters, as well as an interaction effect of speed and horizontal movement range, with varying effect sizes (partial eta squared).
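A minimal sketch of the per-axis linear fit the abstract reports, here for the horizontal (yaw) component. The data below are synthetic and the coefficients purely illustrative; the thesis derives its formulas from recorded head and gaze directions.

```python
import numpy as np

# Synthetic stand-in data: head yaw angles and a noisy linear gaze response.
rng = np.random.default_rng(0)
head_yaw = rng.uniform(-30, 30, 200)               # horizontal head angle (deg)
gaze_yaw = 1.6 * head_yaw + rng.normal(0, 2, 200)  # assumed slope, for illustration

# Fit gaze_yaw as a linear function of head_yaw and measure the correlation.
slope, intercept = np.polyfit(head_yaw, gaze_yaw, 1)
r = np.corrcoef(head_yaw, gaze_yaw)[0, 1]
print(f"gaze_yaw = {slope:.2f} * head_yaw + {intercept:.2f}  (r = {r:.2f})")
```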
4.
Evaluation of 2D and 3D Command Sources for Individuals with High Tetraplegia / Williams, Matthew R. 02 April 2009.
No description available.
5.
Locally Tuned Nonlinear Manifold for Person Independent Head Pose Estimation / Foytik, Jacob D. 22 August 2011.
No description available.
6.
Design and Usability of a System for the Study of Head Orientation / Chen, Ji. January 2010.
The ability to control head orientation relative to the body is a multi-sensory process that depends mainly on three sensory pathways: proprioceptive, vestibular, and visual. A system to study the sensory integration of head orientation was developed and tested. A test seat with a five-point harness was assembled to provide passive postural support. A lightweight head-mounted display (HMD) was designed for mounting multi-axis accelerometers and a mini CCD camera that provided the visual input to virtual reality (VR) goggles with a 39° horizontal field of view. A digitally generated sinusoidal signal was delivered to a motor-driven, computer-controlled sled on a 6 m linear railing system, and a data acquisition system was designed to collect acceleration data. A pilot study was conducted to test the system. Four young healthy subjects were seated with their trunks fixed to the seat and received sinusoidal anterior-posterior translations with peak accelerations of 0.06 g at 0.1 Hz and 0.12 g at 0.2 Hz, 0.5 Hz, and 1.1 Hz. Four visual conditions were randomly presented along with the translation: eyes open looking forward, backward, and sideways, and eyes closed. Linear acceleration data were collected from accelerometers placed on the head, trunk, and seat, and were processed in Matlab. The head motion was analyzed using the Fast Fourier Transform (FFT) to derive the gain and phase of head pitch acceleration relative to seat linear acceleration. A randomization test for two independent variables was used to test the significance of visual and inertial effects on response gain and phase shift. Results show that the gain was close to one, with no significant difference among visual conditions across frequencies; the phase depended on the head strategy each subject used. / Mechanical Engineering
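For illustration, the FFT-based gain/phase computation described above can be sketched as follows. The signal names, sampling rate, and stimulus frequency in the example are assumptions; the thesis performed its analysis in Matlab.

```python
import numpy as np

def gain_and_phase(head_acc, seat_acc, fs, f_stim):
    """Gain and phase (deg) of head_acc relative to seat_acc at f_stim Hz."""
    n = len(head_acc)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_stim))  # FFT bin nearest the stimulus frequency
    H = np.fft.rfft(head_acc)[k]
    S = np.fft.rfft(seat_acc)[k]
    gain = np.abs(H) / np.abs(S)
    phase = np.degrees(np.angle(H / S))    # positive = head leads seat
    return gain, phase

# Example call for an assumed 60 s trial sampled at 100 Hz with a 0.5 Hz stimulus:
# gain, phase = gain_and_phase(head_acc, seat_acc, fs=100.0, f_stim=0.5)
```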
7.
Analysis of non-linear manifold learning methods applied on image collections provided by webcam / Petrauskas, Ignas. 04 July 2014.
This thesis analyzes non-linear manifold learning methods and multidimensional data projection methods, and proposes using them to solve the problem of detecting the orientation of an object moving in a few degrees of freedom. The methods described are MDS, triangulation, Sammon mapping, RPM, mRPM, CCA, PCA, LLE, LE, HLLE, LTSA, SMACOF, and Isomap. Some of them are used to analyze head images acquired by webcam. An application implementing the Isomap method is built and then used to analyze head orientation.
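As an illustration of the Isomap step, the sketch below (Python with scikit-learn, not the thesis's actual implementation) embeds a stack of flattened webcam head images into a low-dimensional manifold whose coordinates track head orientation. The array shapes, neighborhood size, and function name are assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap

def embed_head_images(images, n_neighbors=10, n_components=2):
    """images: array of shape (n_samples, height, width), e.g. webcam frames.

    Returns an (n_samples, n_components) array of manifold coordinates.
    """
    X = images.reshape(len(images), -1).astype(float)  # flatten each frame to a vector
    iso = Isomap(n_neighbors=n_neighbors, n_components=n_components)
    return iso.fit_transform(X)
```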
8.
Designing and combining mid-air interaction techniques in large display environments / Nancel, Mathieu. 05 December 2012.
Large display environments (LDEs) are interactive physical workspaces featuring one or more static large displays as well as rich interaction capabilities, and are meant to visualize and manipulate very large datasets. Research on mid-air interaction in such environments has emerged over the past decade, and a number of interaction techniques are now available for most elementary tasks such as pointing, navigating, and command selection. However, these techniques are often designed and evaluated separately, on specific platforms and for specific use-cases or operationalizations, which makes them hard to choose, compare, and combine. In this dissertation I propose a framework and a set of guidelines for analyzing and combining the input and output channels available in LDEs. I analyze the characteristics of LDEs in terms of (1) visual output, and how it affects usability and collaboration, and (2) input channels, and how to combine them in rich sets of mid-air interaction techniques. These analyses lead to four design requirements intended to ensure that a set of interaction techniques can be used (i) at a distance, (ii) together with other interaction techniques, and (iii) when collaborating with other users. In accordance with these requirements, I designed and evaluated a set of mid-air interaction techniques for panning and zooming, for invoking commands while pointing, and for performing difficult pointing tasks with limited input requirements. For the latter I also developed two methods: one for calibrating high-precision techniques with two levels of precision, and one for tuning velocity-based transfer functions. Finally, I introduce two higher-level design considerations for combining interaction techniques in input-constrained environments. Designers should take into account (1) the trade-off between minimizing limb usage and performing actions in parallel, which affects overall performance, and (2) the decision and adaptation costs incurred by changing the resolution function of a pointing technique during a pointing task.
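As a hedged illustration of what a velocity-based transfer function looks like in general, the sketch below maps device velocity to a control-display gain through a sigmoid: slow hand motion stays precise while fast motion covers the large display. All constants and function names are assumptions for illustration; the dissertation tunes its own functions empirically.

```python
import math

def pointer_gain(v, g_min=1.0, g_max=12.0, v_mid=0.2, slope=25.0):
    """CD gain as a sigmoid of device velocity v (m/s); all constants assumed."""
    return g_min + (g_max - g_min) / (1.0 + math.exp(-slope * (v - v_mid)))

def displacement(v, dt):
    """Cursor displacement for one input sample of velocity v over dt seconds."""
    return pointer_gain(v) * v * dt

# Slow motion yields near-1:1 control; fast motion is strongly amplified:
# displacement(0.05, 0.01) is a few tenths of a millimeter,
# displacement(0.60, 0.01) is several centimeters.
```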
9.
User experience guidelines for design of virtual reality graphical user interfaces controlled by head orientation input / Fröjdman, Sofia. January 2016.
With the recent release of head-mounted displays for consumers, virtual reality experiences are more accessible than ever. However, there is still a shortage of research on how to design virtual reality user interfaces for good experiences. This thesis focuses on what aspects should be considered when designing a graphical user interface in virtual reality, controlled by head orientation input, for a qualitative user experience. The research included a heuristic evaluation, interviews, usability tests, and a survey, with a virtual reality prototype of a video-on-demand service serving as the application under study. The analysis identified application-specific pragmatic and hedonic user goals relevant to the subjective user experience, as well as current user experience problems with the prototype tested. In combination with previous recommendations, these results led to the development of seven guidelines. These guidelines should, however, be regarded only as a foundation for future research, since they still need to be validated. New head-mounted displays and virtual reality applications are released every day, and with the increasing number of users there will be a continuous need for more research.
10.
Designing and combining mid-air interaction techniques in large display environments / Nancel, Mathieu. 05 December 2012.
Large display environments (LDEs) are interactive physical workspaces featuring one or more static large displays as well as rich interaction capabilities, and are meant to visualize and manipulate very large datasets. Research on mid-air interaction in such environments has emerged over the past decade, and a number of interaction techniques are now available for most elementary tasks such as pointing, navigating, and command selection. However, these techniques are often designed and evaluated separately, on specific platforms and for specific use-cases or operationalizations, which makes them hard to choose, compare, and combine. In this dissertation I propose a framework and a set of guidelines for analyzing and combining the input and output channels available in LDEs. I analyze the characteristics of LDEs in terms of (1) visual output, and how it affects usability and collaboration, and (2) input channels, and how to combine them in rich sets of mid-air interaction techniques. These analyses lead to four design requirements intended to ensure that a set of interaction techniques can be used (i) at a distance, (ii) together with other interaction techniques, and (iii) when collaborating with other users. In accordance with these requirements, I designed and evaluated a set of mid-air interaction techniques for panning and zooming, for invoking commands while pointing, and for performing difficult pointing tasks with limited input requirements. For the latter I also developed two methods: one for calibrating high-precision techniques with two levels of precision, and one for tuning velocity-based transfer functions. Finally, I introduce two higher-level design considerations for combining interaction techniques in input-constrained environments. Designers should take into account (1) the trade-off between minimizing limb usage and performing actions in parallel, which affects overall performance, and (2) the decision and adaptation costs incurred by changing the resolution function of a pointing technique during a pointing task.