191

Orientation volontaire de l’attention visuelle chez l’homme et le macaque Rhésus / Voluntary orientation of visual attention in human and macaque monkeys

Ibos, Guilhem 01 July 2009 (has links)
Visual attention is essential to the visual perception of our environment. It refers to the set of mechanisms that select visual information so that it can be processed preferentially. Voluntary orienting of attention has generally been considered slow and sustained, in contrast to involuntary orienting, which is fast and reflexive. In a human psychophysics study, we show that voluntary shifts of attention are in fact rapid, and that this has so far been masked by other cognitive processes of general task engagement. In the brain, the orienting of visual attention is subserved by a network of areas including the Frontal Eye Field (FEF) and the Lateral Intraparietal area (LIP). By recording single-unit activity in these two areas while two rhesus macaques performed a target-detection task requiring the voluntary orienting of visual attention, we show that the two areas play different roles. FEF appears to be involved in orienting attentional resources and also represents the selection of the behaviorally relevant object. LIP, in contrast, is not involved in orienting visual attention, but its neurons show a cognitive response specific to target detection. Our results suggest that FEF controls the voluntary orienting of visual attention, whereas LIP serves target detection. In addition, we report a new class of FEF cells involved in the executive control of cognitive functions, notably attention.
192

Redes com dinâmica espaço-temporal e aplicações computacionais / Networks with spatio-temporal dynamics in computer sciences

Marcos Gonçalves Quiles 24 March 2009 (has links)
In recent decades there has been growing interest in the study of complex systems. Such systems have at least two fundamental components: individual dynamical elements and an organizational structure that defines how those elements interact. Owing to the dynamics of each element and the complexity of the coupling, a wide variety of spatio-temporal phenomena can be observed. The main objective of this thesis is to explore spatio-temporal dynamics in networks for solving computational problems. Regarding the dynamical mechanisms, synchronization among coupled oscillators, deterministic-random walks, and competition among network elements are considered. Regarding the organizational structure, both regular lattice-based networks and more general networks, known as complex networks, are studied. The study is realized through the development of computational models applied to two specific domains. The first concerns the use of networks of coupled oscillators to build models of visual attention. The main features of these models are: object-based visual selection, perceptual organization realized through synchronization and desynchronization among neural oscillators, and competition among objects for attention. Moreover, compared with other oscillator-network models of object-based selection, a larger number of visual attributes is used to define object salience. The second domain concerns the development of models for community detection in complex networks. The two models developed, one based on particle competition and the other on the synchronization of integrate-and-fire oscillators, achieve high detection accuracy together with low computational complexity. In addition, the particle-competition model not only offers a new community detection technique but also provides an alternative way to perform competitive learning. The studies carried out in this thesis show that the unified treatment of dynamics and structure is a promising tool for solving a variety of computational problems.
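The particle-competition idea summarized above can be illustrated with a minimal sketch. This is our own illustrative assumption, not the thesis code: each community is represented by a particle that performs a partly random, partly preferential walk and strengthens its ownership of the nodes it visits while weakening that of competing particles.

```python
# Minimal, hypothetical sketch of particle-competition community detection.
# Parameter names and the update rule are assumptions for illustration only.
import numpy as np
import networkx as nx

def particle_competition(G, n_particles=2, steps=5000, delta=0.1, p_pref=0.6, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    # dominance[i, k]: how strongly particle k "owns" node i
    dominance = np.full((len(nodes), n_particles), 1.0 / n_particles)
    positions = rng.choice(len(nodes), size=n_particles, replace=False)

    for _ in range(steps):
        for k in range(n_particles):
            nbrs = [idx[u] for u in G.neighbors(nodes[positions[k]])]
            if not nbrs:
                continue
            if rng.random() < p_pref:
                # preferential move: favor neighbors this particle already dominates
                w = dominance[nbrs, k]
                positions[k] = rng.choice(nbrs, p=w / w.sum())
            else:
                positions[k] = rng.choice(nbrs)  # random move
            i = positions[k]
            dominance[i, k] += delta             # reinforce own dominance
            dominance[i] /= dominance[i].sum()   # normalize -> competitors weaken

    return {nodes[i]: int(np.argmax(dominance[i])) for i in range(len(nodes))}

# Usage: two loosely connected cliques should end up in different communities.
G = nx.connected_caveman_graph(2, 8)
print(particle_competition(G, n_particles=2))
```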
193

Changes in visual attention towards food cues after obesity surgery: An eye-tracking study

Schäfer, Lisa, Schmidt, Ricarda, Müller, Silke M., Dietrich, Arne, Hilbert, Anja 11 August 2021 (has links)
Research documented the effectiveness of obesity surgery (OS) for long-term weight loss and improvements in medical and psychosocial sequelae, and general cognitive functioning. However, there is only preliminary evidence for changes in attentional processing of food cues after OS. This study longitudinally investigated visual attention towards food cues from pre- to 1-year post-surgery. Using eye tracking (ET) and a Visual Search Task (VST), attentional processing of food versus non-food cues was assessed in n = 32 patients with OS and n = 31 matched controls without weight-loss treatment at baseline and 1-year follow-up. Associations with experimentally assessed impulsivity and eating disorder psychopathology and the predictive value of changes in visual attention towards food cues for weight loss and eating behaviors were determined. During ET, both groups showed significant gaze duration biases to non-food cues without differences and changes over time. No attentional biases over group and time were found by the VST. Correlations between attentional data and clinical variables were sparse and not robust over time. Changes in visual attention did not predict weight loss and eating disorder psychopathology after OS. The present study provides support for a top-down regulation of visual attention to non-food cues in individuals with severe obesity. No changes in attentional processing of food cues were detected 1-year post-surgery. Further studies are needed with comparable methodology and longer follow-ups to clarify the role of biased visual attention towards food cues for long-term weight outcomes and eating behaviors after OS.
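As an illustration only (our assumption, not the study's analysis code), a gaze-duration bias of the kind reported above can be computed from fixation data as the relative dwell time on food versus non-food areas of interest:

```python
# Hypothetical sketch: gaze-duration bias towards food vs. non-food cues.
# Column names and the bias definition are assumptions for illustration.
import pandas as pd

def gaze_duration_bias(fixations: pd.DataFrame) -> float:
    """Positive values = longer dwell on food AOIs, negative = bias to non-food."""
    dwell = fixations.groupby("aoi")["duration_ms"].sum()
    food = dwell.get("food", 0.0)
    nonfood = dwell.get("nonfood", 0.0)
    total = food + nonfood
    return 0.0 if total == 0 else (food - nonfood) / total

fixations = pd.DataFrame({
    "aoi": ["food", "nonfood", "nonfood", "food", "nonfood"],
    "duration_ms": [180, 420, 310, 150, 260],
})
print(gaze_duration_bias(fixations))  # negative: bias towards non-food cues
```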
194

Blickbewegungsparameter als kognitive Leistungsindikatoren im eignungsdiagnostischen Kontext der Auswahl von Fluglotsen / Eye movement parameters as cognitive performance indicators in the aptitude-assessment context of air traffic controller selection

Gayraud, Katja 25 November 2019 (has links)
The psychological aptitude of applicants for air traffic control is decided in a multi-stage selection process using various scientifically developed selection procedures. In these aptitude assessments, performance in computer-based cognitive tests is currently captured through the number of correct and incorrect answers and through reaction times; the path that, in the best case, leads to the solution of a task has so far remained largely hidden. Gaining deeper insight into the perceptual and cognitive processes, and making the path from the beginning to the end of a visual task transparent, requires other methods, such as eye tracking, in which participants' eye movements are recorded while they work on such a task and are subsequently analyzed. Given the small number of studies on the relationship between inter-individual gaze behavior and differences in cognitive performance, there is a clear need for further research on this topic. The aim of the present work is to gain insight into the usability of contact-free eye tracking in the context of aviation aptitude assessment, specifically the selection of trainee air traffic controllers at the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e. V., DLR). To investigate this question, two extensive work packages were defined and implemented: first, the development of a suitable test procedure, the Eye Movement Conflict Detection Test (CON); second, the investigation of eye movements in the context of aptitude assessment using the CON. Three preliminary studies and one expert study were conducted to develop this new test. Taken together, the results of these studies and of the main study suggest that the CON is an objective, reliable, and valid measurement instrument for use in eye-tracking studies. Based on the current state of knowledge on the relationship between eye movement parameters and differences in cognitive performance, hypotheses were derived and tested in the main study (N = 113) using correlation and regression analyses. In addition to the hypothesis-driven analyses, exploratory analyses were carried out that can serve as a basis for generating hypotheses for future studies. Four eye movement parameters emerged as indicators of cognitive performance in the CON: the number of fixations, the relative number of altitude-guided transitions (a newly introduced parameter characterizing the strategy used in the test), entropy, and, with some reservation, the mean fixation duration. Combinations of these parameters predicted performance in the CON in the statistical sense: eye movement parameters explained 54% of the variance in overall CON performance. Moreover, the results of a hierarchical regression model suggested that a combination of eye movement parameters explained an additional 26% of overall CON performance beyond the general cognitive abilities assessed in the DLR air traffic controller selection procedure.
In summary, the present work provides promising results regarding the relationship between eye movement parameters and inter-individual differences in cognitive performance, and demonstrates the considerable potential of eye tracking for future use in the selection of trainee air traffic controllers. Further technical improvements to eye-tracking devices, as well as additional research findings, in particular on predictive validity, are recommended before independent decision criteria can be derived from eye movement analyses.
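The two-step hierarchical regression described above (general cognitive ability first, eye movement parameters added in a second step) can be sketched as follows; the data and variable names are purely illustrative assumptions, not the study's material:

```python
# Illustrative sketch of a two-step hierarchical regression: incremental R²
# of eye movement parameters over general cognitive ability. All names and
# data are assumptions, not the study's actual variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 113
cognitive = rng.normal(size=n)                    # step 1 predictor
fix_count = rng.normal(size=n)                    # eye movement parameters (step 2)
entropy = rng.normal(size=n)
performance = 0.5 * cognitive + 0.4 * fix_count + 0.3 * entropy + rng.normal(size=n)

step1 = sm.OLS(performance, sm.add_constant(np.column_stack([cognitive]))).fit()
step2 = sm.OLS(performance, sm.add_constant(
    np.column_stack([cognitive, fix_count, entropy]))).fit()

print(f"R² step 1 (cognitive ability only): {step1.rsquared:.2f}")
print(f"R² step 2 (plus eye movement parameters): {step2.rsquared:.2f}")
print(f"incremental R²: {step2.rsquared - step1.rsquared:.2f}")
```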
195

Using Secondary Notation to Influence the Model User's Attention

Stark, Jeannette 17 May 2017 (has links)
Cognitive principles have recently been discussed for Conceptual Modeling with the aim of increasing domain understanding, model comprehension, and modeling efficiency. In particular, the principle of Perceptual Discriminability, which concerns the visual differences between modeling constructs, offers potential for model comprehension if human attention can be directed so that important modeling constructs are detected more easily and therefore processed faster. So far, however, no conditions under which the human gaze can be influenced have been defined and evaluated for Conceptual Modeling. This dissertation extends Perceptual Discriminability with conditions for attracting human attention to those constructs that are important for model comprehension. These conditions are applied to constructs of two different modeling grammars in general, as well as to elements of the process flow of Business Process Models. To evaluate the results, a laboratory experiment on extended Perceptual Discriminability is described in which significant differences in process-flow comprehension were identified. To demonstrate the potential of extended Perceptual Discriminability, the BPMN secondary notation is improved by emphasizing the constructs that are most important for model comprehension: these constructs were identified in a content analysis and were then adapted, according to the conditions of extended Perceptual Discriminability, using the visual variables that remain free for use in secondary notation.
196

Veränderungsblindheit: Drei explorative Untersuchungen in statischer und dynamischer verkehrsbezogener Umgebung / Change blindness: Three exploratory studies in static and dynamic traffic-related environments

Dornhöfer, Sascha M. 19 April 2005 (has links)
Change blindness occurs when the motion signal of a change is masked or when the observer is distracted from the change. In both cases, visual attention cannot be guided to the location of the change because the cue is missing. After a discussion of eye movements and their relationship to change blindness, three exploratory studies on change blindness in the context of road traffic are presented. Study 1 directly compares three types of masking (blinks, saccades, blanks) with static stimulus material (photographs). Overall, the results suggest that change blindness, regardless of the type of masking, could be a reason why hazards in road traffic are recognized too late or not at all, although it is smallest for the most dangerous situations (relevant additions), and that artificial blanks, at least under static conditions, are well suited for simulating blinks and saccades. In addition, clear evidence of implicit change detection was found. Study 2 tests parts of Study 1 in a dynamic environment (driving simulator) and, surprisingly, finds a reversed change-blindness effect. The genuineness of this effect is questioned and attributed to the use of counting strategies. Independently of this, evidence of implicit detection appeared again. Study 3 finally presents a direct comparison between a static environment (photographs) and a dynamic one (films) and shows that the extent of change blindness, regardless of masking duration and type of change, is greater in the dynamic environment than in the static one (85% vs. 64%) and therefore constitutes a hazard in road traffic. Again, evidence of implicit detection was found. The thesis concludes with an outlook oriented towards both basic research and application.
197

Depictions of Female Body Types in Advertising: How Regional Visual Attention, Body Region Satisfaction, Media Influence, and Drive for Thinness Relate

Adams, Dallin Russell 02 March 2020 (has links)
As technology continues to advance, media exposure increases because consumers can access content more easily. Various media formats, including video, are a means by which consumers gather information about the world around them and continually compare that information with themselves. Among the information obtained from media channels is how bodies are portrayed. Comparisons between media images of the body and self-perceptions of the body are particularly prevalent in women. The current study employs eye-tracking to examine how women view other women's body types and areas of the body in video-based advertising. The study also employs self-report measures to further understand how individual body region satisfaction, drive for thinness, and media influence relate. Findings indicate that women, regardless of personal satisfaction, tend to look longer at thin women than at plus-sized or average women. Furthermore, media pressures and internalization were found to play a strong role in women's drive for thinness and personal satisfaction, while media as a source of information played no such role.
198

Deep Learning-Based Vehicle Recognition Schemes for Intelligent Transportation Systems

Ma, Xiren 02 June 2021 (has links)
With increasingly prominent security concerns in Intelligent Transportation Systems (ITS), Vision-based Automated Vehicle Recognition (VAVR) has attracted considerable attention recently. A comprehensive VAVR system contains three components: Vehicle Detection (VD), Vehicle Make and Model Recognition (VMMR), and Vehicle Re-identification (VReID). These components perform coarse-to-fine recognition tasks in three steps. A VAVR system can be widely used in suspicious-vehicle recognition, urban traffic monitoring, and automated driving systems. Vehicle recognition is complicated by the subtle visual differences between vehicle models, so building a VAVR system that recognizes vehicle information quickly and accurately has gained tremendous attention. In this work, taking advantage of emerging deep learning methods, which have powerful feature extraction and pattern learning abilities, we propose several models for vehicle recognition. First, we propose a novel Recurrent Attention Unit (RAU) that extends the standard Convolutional Neural Network (CNN) architecture for VMMR. The RAU learns to recognize the discriminative parts of a vehicle at multiple scales and builds up a connection with the prominent information in a recurrent way. The proposed ResNet101-RAU achieves excellent recognition accuracy of 93.81% on the Stanford Cars dataset and 97.84% on the CompCars dataset. Second, to construct efficient vehicle recognition models, we simplify the structure of the RAU and propose a Lightweight Recurrent Attention Unit (LRAU). The LRAU extracts discriminative part features by generating attention masks that locate the keypoints of a vehicle (e.g., logo, headlights). Each attention mask is generated from the feature maps received by the LRAU and the attention state produced by the preceding LRAU. By adding LRAUs to standard CNN architectures, we construct three efficient VMMR models. Our models achieve state-of-the-art results, with 93.94% accuracy on the Stanford Cars dataset, 98.31% on the CompCars dataset, and 99.41% on the NTOU-MMR dataset. In addition, we construct a one-stage Vehicle Detection and Fine-grained Recognition (VDFG) model by combining the LRAU with a general object detection model; results show that the VDFG model achieves excellent performance at real-time processing speed. Third, to address the VReID task, we design a Compact Attention Unit (CAU). The CAU has a compact structure and relies on a single attention map to extract the discriminative local features of a vehicle. We add two CAUs to a truncated ResNet to construct a small but efficient VReID model, ResNetT-CAU. Compared with the original ResNet, the model size of ResNetT-CAU is reduced by 60%. Extensive experiments on the VeRi and VehicleID datasets indicate that ResNetT-CAU achieves the best re-identification results on both datasets. In summary, the experimental results on the challenging benchmark VMMR and VReID datasets indicate that our models achieve the best VMMR and VReID performance while maintaining a small model size and fast image processing speed.
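A minimal sketch of the attention-mask idea described above, written as our own PyTorch illustration: module names, tensor sizes, and the exact gating are assumptions, not the thesis architecture. A lightweight unit produces a spatial mask from the incoming feature map and the previous attention state, and reweights the features before passing them on.

```python
# Hypothetical sketch of a lightweight recurrent attention unit.
# The real RAU/LRAU design may differ; this only illustrates generating a
# spatial mask from features plus the preceding attention state.
import torch
import torch.nn as nn

class LightweightAttentionUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv maps [features, previous attention state] to a mask logit
        self.mask_conv = nn.Conv2d(channels + 1, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, prev_state: torch.Tensor):
        # feats: (B, C, H, W); prev_state: (B, 1, H, W) attention map from the
        # preceding unit (start with ones if there is none).
        mask = torch.sigmoid(self.mask_conv(torch.cat([feats, prev_state], dim=1)))
        attended = feats * mask   # emphasize discriminative regions
        return attended, mask     # mask is passed on to the next unit

# Usage with dummy data
unit = LightweightAttentionUnit(channels=64)
feats = torch.randn(2, 64, 14, 14)
state = torch.ones(2, 1, 14, 14)
out, new_state = unit(feats, state)
print(out.shape, new_state.shape)  # (2, 64, 14, 14) and (2, 1, 14, 14)
```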
199

L'attention sélective et les traits visuels dans la correspondance transsaccadique / The role of visual attention and features in the transsaccadic correspondence

Eymond, Cécile 30 November 2016 (has links)
With each saccade, the image projected on the retina shifts abruptly, yet our perception of the world remains stable and uniform because the visual system matches pre- and post-saccadic visual information. Attentional mechanisms are thought to play a fundamental role in establishing this correspondence. So far, this transsaccadic link has been demonstrated mainly in studies of spatial processing, that is, how the retinal position of an object is corrected at each saccade to maintain a stable perception of the world. How visual features such as color or shape are handled across saccades is still poorly understood, and their role in perceptual stability remains to be established. Do features, and feature-based attention, which by definition are independent of location and assumed to enhance feature-specific representations throughout the visual field, also contribute to transsaccadic correspondence? To analyze the relationship between feature and spatial processing across eye movements, this thesis followed two approaches. The first considered the perception of visual attributes, which is uniform despite the heterogeneity of the visual system. The results showed that although the uniform perception of visual attributes relies on learning, the underlying mechanisms are not specific to eye movements; perceptual uniformity appears instead to arise from a general associative learning mechanism.
The second approach sought to better understand the nature of transsaccadic selective attention. The results showed that attention allocated to the saccade target does not contribute to the feature-guided selective mechanisms engaged just after an eye movement, suggesting that attention to the saccade target and feature-based attention are independent. Finally, the last study showed that when feature-based selective attention is engaged during saccade preparation away from the saccade target, the selected features are maintained across the saccade and affect the selective processes engaged just after it. Transsaccadic attention is therefore not purely spatial in nature. Taken together, these results suggest that visual features and feature-based attention play a role in transsaccadic correspondence.
200

The Contribution of Eye Tracking to Quality of Experience Assessment of 360-degree video

van Kasteren, Anouk January 2019 (has links)
The research domain on the Quality of Experience (QoE) of 2D video streaming is well established. However, a new video format is emerging and gaining popularity and availability: VR 360-degree video. The processing and transmission of 360-degree videos bring new challenges, such as large bandwidth requirements and the occurrence of different distortions. The viewing experience is also substantially different from 2D video: it offers more interactive freedom in the viewing angle, but can also be more demanding and cause cybersickness. Further research on the QoE of 360-degree videos specifically is thus required. The first goal of this thesis is to complement earlier research by Tran, Ngoc, Pham, Jung, and Thank (2017) by testing the effects of quality degradation, freezing, and content on the QoE of 360-degree videos. The second goal is to test the contribution of visual attention as an influence factor in QoE assessment. Data were gathered through subjective tests in which participants watched degraded versions of 360-degree videos through an HMD with integrated eye-tracking sensors. After each video they answered questions regarding their quality perception, experience, perceptual load, and cybersickness. Results of the first part show rather low overall QoE ratings, which decrease further as quality is degraded and freezing events are added. Cybersickness was found not to be an issue. The effects of the manipulations on visual attention were minimal: attention was mainly directed by content, but also by surprising elements. The addition of eye-tracking metrics did not further explain individual differences in subjective ratings. Nevertheless, it was found that looking at moving objects increased the negative effect of freezing events and made participants less sensitive to quality distortions. The results of this thesis alone are not enough to regard visual attention as an influence factor in 360-degree video QoE.
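For illustration only (our assumption, not the thesis analysis), the kind of subjective data described above is typically summarized as a Mean Opinion Score per test condition and then related to gaze metrics; column names and values below are invented:

```python
# Hypothetical sketch: per-condition Mean Opinion Score (MOS) and a simple
# check of whether a gaze metric co-varies with ratings. Data are assumptions.
import pandas as pd

ratings = pd.DataFrame({
    "condition": ["hq", "hq", "degraded", "degraded", "freezing", "freezing"],
    "rating": [4.5, 4.0, 3.0, 2.5, 2.0, 2.5],                 # 5-point ACR scale
    "dwell_on_moving_objects": [0.4, 0.6, 0.5, 0.7, 0.3, 0.8],  # share of dwell time
})

mos = ratings.groupby("condition")["rating"].mean()  # MOS per test condition
print(mos)
print(ratings["rating"].corr(ratings["dwell_on_moving_objects"]))
```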
