31 |
Enhancing Software Quality of Multimodal Interactive Systems / Verbesserung der Softwarequalität multimodaler interaktiver Systeme. Fischbach, Martin Walter. January 2017 (PDF)
Multimodal interfaces (MMIs) are a promising human-computer interaction paradigm.
They are feasible for a wide range of environments, yet they are especially suited to interactions that are spatially and temporally grounded in an environment in which the user is (physically) situated.
Real-time interactive systems (RISs) are technical realizations for situated interaction environments, originating from application areas like virtual reality, mixed reality, human-robot interaction, and computer games.
RISs include various dedicated processing, simulation, and rendering subsystems that collectively maintain a real-time simulation of a coherent application state.
They thus fulfil the complex functional requirements of their application areas. Two conflicting principles determine the architecture of RISs: coupling and cohesion.
On the one hand, RIS subsystems commonly use specific data structures for multiple purposes to guarantee performance and rely on close semantic and temporal coupling between each other to maintain consistency.
This coupling is exacerbated if the integration of artificial intelligence (AI) methods is necessary, such as for realizing MMIs.
On the other hand, software qualities like reusability and modifiability call for a decoupling of subsystems and architectural elements with single well-defined purposes, i.e., high cohesion.
To handle this contradiction, systems predominantly favour performance and consistency over reusability and modifiability.
They thus accept low maintainability in general and hindered scientific progress in the long term.
This thesis presents six semantics-based techniques that extend the established entity-component system (ECS) pattern and pose a solution to this contradiction without sacrificing maintainability: semantic grounding, a semantic entity-component state, grounded actions, semantic queries, code from semantics, and decoupling by semantics.
The extension solves the ECS pattern's runtime type deficit, improves component granularity, facilitates access to entity properties outside a subsystem's component association, incorporates a concept to semantically describe behavior as a complement to the state representation, and even enables compatibility between different RISs.
The presented reference implementation Simulator X validates the feasibility of the six techniques and may be (re)used by other researchers due to its availability under an open-source licence.
It includes a repertoire of common multimodal input processing steps that showcase the particular adequacy of the six techniques for such processing.
Together, this repertoire forms the integrated multimodal processing framework miPro, making Simulator X a RIS platform with explicit MMI support.
The six semantics-based techniques as well as the reference implementation are validated by four expert reviews, multiple proof-of-concept prototypes, and two explorative studies.
Informal insights gathered throughout design and development supplement this assessment in the form of lessons learned, meant to aid future development in the area.
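To make the idea of a semantic entity-component state and semantic queries more tangible, here is a minimal sketch in Python; it is purely illustrative, assumes a hypothetical GROUNDING table and semantic_query helper, and does not reflect the actual Simulator X API.

```python
# Illustrative sketch only: a toy entity-component state with semantic
# grounding and a semantic query. Names and structure are hypothetical
# and do not reflect the actual Simulator X / miPro API.
from dataclasses import dataclass, field
from typing import Any, Dict, List

# A tiny "ontology": local component property names grounded to shared concepts.
GROUNDING = {
    "pos": "concept:Position3D",
    "dir": "concept:Orientation",
    "utterance": "concept:SpeechToken",
}

@dataclass
class Entity:
    """An entity is just an identifier plus a bag of typed components."""
    name: str
    components: Dict[str, Any] = field(default_factory=dict)

    def set(self, prop: str, value: Any) -> "Entity":
        self.components[prop] = value
        return self

def semantic_query(entities: List[Entity], concept: str) -> List[tuple]:
    """Return (entity, property, value) triples whose grounded meaning matches
    the requested concept, regardless of the local property name."""
    hits = []
    for e in entities:
        for prop, value in e.components.items():
            if GROUNDING.get(prop) == concept:
                hits.append((e.name, prop, value))
    return hits

if __name__ == "__main__":
    world = [
        Entity("user_hand").set("pos", (0.2, 1.1, 0.4)).set("dir", (0, 0, 1)),
        Entity("speech_in").set("utterance", "put that there"),
    ]
    # A fusion step can ask for "everything that denotes a position"
    # without knowing which subsystem owns the component.
    print(semantic_query(world, "concept:Position3D"))
```

Read in the abstract's terms, a grounded lookup of this kind is what would allow a multimodal input-fusion step to access entity properties owned by other subsystems without hard-wiring their concrete component types.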
32 |
Software-Infrastruktur und Entwicklungsumgebung für selbstorganisierende multimediale Ensembles in Ambient-Intelligence-Umgebungen. Hellenschmidt, Michael. Unknown date.
Darmstadt, Technische Universität, Diss., 2007. / Files in PDF format.
33 |
Multimodal interaction with mobile devices: fusing a broad spectrum of modality combinations. Wasinger, Rainer. January 2006.
Also published as: Saarbrücken, Univ., Diss., 2006.
34 |
Multimodal interaction with mobile devices: fusing a broad spectrum of modality combinations. Wasinger, Rainer. January 1900.
Thesis (doctoral) - Univ., Saarbrücken, 2006. / Includes bibliographical references and index.
35 |
Large scale mining and retrieval of visual data in a multimodal context. Quack, Till. January 2009.
Also published as: Zürich, Techn. Hochsch., Diss.
36 |
Cross-modal mechanisms: perceptual multistability in audition and vision. Grenzebach, Jan. 25 May 2021.
Perceptual multistability is a phenomenon that has mostly been studied within each modality separately. It reveals fundamental principles of the perceptual system in the formation of an emerging cognitive representation in consciousness. During stimulation with an ambiguous stimulus, the momentary perceptual organization switches between several interpretations, or percepts: the auditory streaming stimulus in audition and the moving-plaids stimulus in vision each elicit at least two distinct percepts that dominate awareness exclusively for a random phase, or dominance duration, before an inevitable switch to another percept occurs. The similarity of the perceptual experience across modalities has motivated proposals of a global mechanism contributing to perceptual multistability crossmodally; conversely, the differences in perceptual experience have motivated proposals of a distributed, modality-specific mechanism. Hybrid models combine both approaches. We accumulate empirical evidence for the contribution of a global mechanism, although distributed mechanisms play an indispensable role in this cross-modal interplay.

The overt report of the perceptual experience in our experiments is accompanied by the recording of objective, cognitive markers of consciousness: reflexive eye movements, namely pupil dilation and the optokinetic nystagmus, correlate with the unobservable perceptual switches and perceptual states, respectively, and are neuronally rooted in the brainstem. We complement earlier findings on the sensitivity of the pupil to visual multistability: two independent experiments showed that the pupil dilates at the time of reported perceptual switches in auditory multistability. A control condition constrains confounding effects of the reporting process. Endogenous switches, evoked internally by the unchanged stimulus ambiguity, and exogenous switches, evoked externally by changes in the physical properties of the stimulus, could be discriminated on the basis of the maximal amplitude of the dilation. The effects of exogenous switches on the pupil were captured in a report and a no-report task to detect confounding perceptual effects.

In two additional studies, moment-by-moment coupling of percepts was found crossmodally between concurrent multistable processes in audition (evoked by auditory streaming) and in vision (evoked by moving plaids). In the last study, the externally induced percept in the visual multistable process was not relayed to the simultaneous auditory multistable process; still, the observed general coupling, while fragile, exists. Investigating moment-by-moment coupling of the multistable perceptual processes required a no-report paradigm in vision: the visual stimulus evokes an optokinetic nystagmus whose properties differ, in a machine-learnable way, depending on which of the two percepts is currently followed. In combination with the manually reported auditory percept, attentional bottlenecks due to parallel reporting were circumvented.

The two main findings, the dilation of the pupil at reported auditory perceptual switches and the crossmodal coupling of percepts in bimodal audiovisual multistability, speak in favor of a partly global mechanism being involved in the control of perceptual multistability; this global mechanism is constrained by the partly independent, distributed competition of percepts at the modality level.
Potentially, supramodal attention-related modulations consolidate the outcome of locally distributed perceptual competition in all modalities.

Contents:
CHAPTER 1: Introduction
C1.1 Stability and uncertainty in perception
C1.2 Auditory, visual and audio-visual multistability
C1.3 Capturing the subjective perceptual experience
C1.4 Limitations of preceding studies, objectives, and outline of the Thesis
CHAPTER 2: Study 1 "Pupillometry in auditory multistability"
C2.1.1 Experiment 1: Introduction
C2.1.2 Experiment 1: Material and Methods
C2.1.3 Experiment 1: Data analysis
C2.1.4 Experiment 1: Results
C2.1.5 Experiment 1: Discussion
C2.2.1 Experiment 2: Introduction
C2.2.2 Experiment 2: Material and Methods
C2.2.3 Experiment 2: Data analysis
C2.2.4 Experiment 2: Results
C2.3 Experiments 1 & 2: Discussion
C2.4 Supplement Study 1
CHAPTER 3: Study 2 "Multimodal moment-by-moment coupling in perceptual bistability"
C3.1.1 Experiment 1: Introduction
C3.1.2 Experiment 1: Results
C3.1.3 Experiment 1: Discussion
C3.1.4 Experiment 1: Material and Methods
C3.1.5 Experiment 1: Data analysis
C3.2 Supplement Study 2
CHAPTER 4: Study 3 "Boundaries of bimodal coupling in perceptual bistability"
C4.1.1 Experiment 1: Introduction
C4.1.2 Experiment 1: Material and Methods
C4.1.3 Experiment 1: Data analysis
C4.1.4 Experiment 1: Results
C4.1.5 Experiment 1: Discussion
C4.2.1 Experiment 2: Introduction
C4.2.2 Experiment 2: Material and Methods
C4.2.3 Experiment 2: Data analysis
C4.2.4 Experiment 2: Results
C4.3 Experiments 1 & 2: Discussion
C4.4 Supplement Study 3
CHAPTER 5: General Discussion
C5.1 Significance for models of multistability and implications for the perceptual architecture
C5.2 Recommendations for future research
C5.3 Conclusion
REFERENCES
APPENDIX
A1: List of Figures
A2: List of Tables
A3: List of Abbreviations and Symbols
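To illustrate what a moment-by-moment coupling measure between two concurrently tracked bistable percepts could look like, here is a small, hypothetical Python sketch on simulated percept traces; it is a generic toy analysis, not the statistical pipeline used in the studies above.

```python
# Illustrative sketch only: quantifying moment-by-moment coupling between
# two simultaneously tracked bistable percept traces (e.g. auditory
# streaming vs. visual moving plaids). Toy analysis, not the thesis's method.
import numpy as np

rng = np.random.default_rng(0)

def simulate_percepts(n_samples: int, p_switch: float) -> np.ndarray:
    """Simulate a binary percept trace (0 = 'integrated', 1 = 'segregated')
    that switches with a fixed per-sample probability."""
    switches = rng.random(n_samples) < p_switch
    return np.cumsum(switches) % 2

def coupling_index(a: np.ndarray, b: np.ndarray) -> float:
    """Observed co-occurrence of matching percepts minus the co-occurrence
    expected if the two traces were independent (0 = no coupling)."""
    observed = np.mean(a == b)
    expected = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)
    return float(observed - expected)

if __name__ == "__main__":
    audio = simulate_percepts(10_000, p_switch=0.01)
    vision = simulate_percepts(10_000, p_switch=0.01)  # independent here
    print("coupling of independent traces:", round(coupling_index(audio, vision), 3))
    print("coupling of a trace with itself:", round(coupling_index(audio, audio), 3))
```

With real recordings, the auditory trace would come from the manual report and the visual trace from an OKN-based no-report classification of the kind described in the abstract; the same co-occurrence logic could then be applied.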
37 |
From geometrical modelling to simulation of touch of textile products - open modelling issues. Kyosev, Yordan. 24 May 2023.
The touch of textile products is a complex process that depends on the interaction between the human finger and the textile product. The evaluation of touch, or of the so-called handle properties, is a complex process requiring samples, human testers, or special testing devices. A numerical evaluation of the surface has not been reported so far because of the complexity of textile products. This work presents the current state of 3D modelling of textile products at yarn and fibre level and the additional steps required to make these models applicable to the numerical simulation of fabric touch. It covers only the aspects related to the textile representation and does not include the modelling of the human finger as a mechanical and receptor system during the interaction.
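As a concrete, hypothetical illustration of geometric modelling at yarn level, the following Python sketch generates yarn centre lines for a plain weave with a simple sinusoidal crimp; all parameters and the crimp model are simplifications chosen for illustration and are not taken from this work or any particular software.

```python
# Illustrative sketch only: a yarn-level geometric model of a plain-weave
# fabric, with each yarn centre line described as a sinusoidal 3D path.
# Parameter names and values are hypothetical.
import math

def warp_yarn_path(yarn_index: int, spacing: float, amplitude: float,
                   wavelength: float, n_points: int = 200):
    """Centre line of one warp yarn: it runs along y and undulates in z as it
    interlaces with the weft (crimp), offset in x by its index."""
    x0 = yarn_index * spacing
    pts = []
    for i in range(n_points):
        y = i * wavelength / (n_points - 1)
        # Alternate the crimp phase of neighbouring yarns (plain weave).
        phase = math.pi * yarn_index
        z = amplitude * math.sin(2 * math.pi * y / wavelength + phase)
        pts.append((x0, y, z))
    return pts

if __name__ == "__main__":
    # Two neighbouring warp yarns, 0.5 mm apart, 0.1 mm crimp amplitude.
    yarn_a = warp_yarn_path(0, spacing=0.5, amplitude=0.1, wavelength=1.0)
    yarn_b = warp_yarn_path(1, spacing=0.5, amplitude=0.1, wavelength=1.0)
    print(yarn_a[0], yarn_a[len(yarn_a) // 2])  # sample points on the path
```

Such centre-line geometries are the kind of representation that would still need cross-section, contact, and friction information before a simulation of fabric touch becomes feasible, which is the gap the abstract points out.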