41 |
Mixed Reality Assistenzsystem zur visuellen Qualitätsprüfung mit Hilfe digitaler Produktfertigungsinformationen Adwernat, Stefan, Neges, Matthias 06 January 2020 (has links)
In industrial manufacturing, product properties and parameters are subject to a certain degree of variation, regardless of the manufacturing process used. Quality inspection therefore determines the extent to which the specified quality requirements for the product or workpiece are met despite this manufacturing variation (Brunner et al. 2011) [...] In the case of visual inspection by humans in particular, however, the result depends heavily on the individual inspector. The main factors affecting detection performance are the inspector's experience, qualification and fatigue; environmental conditions such as lighting, dirt or acoustic interference; and also the number and weighting of the characteristics to be evaluated (Keferstein et al. 2018). As a consequence, the reliability and reproducibility of the inspection results can be impaired. The same applies to the complete and consistent documentation of the visual inspection [...] Against this background, a mixed-reality-based assistance system is being developed to support the inspector in performing and documenting the visual inspection. The requirements of this approach are derived from a cooperation project in the automotive industry. The assistance system presented is therefore part of broader activities relating to 3D-Master and drawing-free product documentation. [... from the introduction]
|
42 |
Virtual Prototyping als agile Feedback-Methode für frühe Produktentwicklungsphasen Dudczig, Manuel 06 January 2020 (has links)
The contribution gives an overview of the possibilities of virtual product representations through Virtual Reality (VR), Augmented Reality (AR) and 360° media, and compares them with regard to suitable criteria for achieving targeted communication. [... from the introduction]
|
43 |
Digitalisering av instruktioner för maskinoperatörer / Digitalisation of instructions for machine operators Patel, Sharmila, Sritharan, Aron January 2020 (has links)
This project was conducted at a manufacturing company outside of Stockholm. Production is partially operated by machines that are manufactured by the company; however, appropriate instructions for maintenance, machine settings and troubleshooting are currently unavailable for these machines. The most critical department within the workshop is called “cut and roll”; the department has only one operator who has full knowledge of the machines and their functions. This type of knowledge possessed by the operator is called tacit knowledge. The aim of this project is to document and transfer that tacit knowledge through digitalisation of instructions. After defining the problem, methods and aims of the project, primary and secondary data were collected. Furthermore, discussions were held with the chief executive of the company and the operators working in the cut and roll department. This led to a deeper understanding of the main problem and created a foundation for the methods used to digitalise instructions. The project also contains a literature review covering the scientific aspects of the problem. Moreover, the emerging concept of digitalisation within manufacturing companies and its effect on small and medium-sized enterprises (SMEs) was taken into consideration. The concept is characterised by smart technology. Some of these technologies, e.g. virtual reality (VR) and augmented reality (AR), are not yet adequate to implement in the company; even so, AR could become a complementary tool for maintenance and similar tasks in the future. The method used to digitalise instructions was video recording, which served as the most adequate digital tool while executing the project.
However, compatible hardware and software are necessary to create video instructions. The recommendations proposed to the company are that instructions should be stored on an intranet or in a cloud service for easy access, and that, in addition to video instructions, practical supervision should be given by an experienced operator, providing a combination of both practical and theoretical learning. Instructions on how to edit and store the videos were created to maintain standardised working methods.
|
44 |
Virtual Reality based Study to Analyse Pedestrian Attitude towards Autonomous Vehicles Pillai, Anantha Krishna January 2017 (has links)
What are pedestrian attitudes towards driverless vehicles that have no human driver? In this paper, we use virtual reality to simulate a virtual scene in which pedestrians interact with driverless vehicles. This was an exploratory study in which 15 users encounter a driverless vehicle at a crosswalk in the virtual scene. Data were collected in the form of video and audio recordings, semi-structured interviews and participant sketches explaining the crosswalk scenes they experienced. An interaction design framework for vehicle-pedestrian interaction with an autonomous vehicle is suggested, which can be used to design and model driverless vehicle behaviour before autonomous vehicle technology is deployed widely.
|
45 |
The effect of CMS with AR on driving performance Zhang, Miao, Bin, Gao January 2022 (has links)
This Master Thesis was conducted in the Industrial Design Engineering program at the Chalmers University of Technology in collaboration with RISE and Volvo Cars. The aim was to investigate the difference in driving performance between a traditional mirror, a Camera Monitoring System (CMS), and a CMS with augmented reality information (AR). It was furthermore to develop guidelines for applying this knowledge when designing CMS for increased user performance in cars. Literature studies, expert interviews, workshops, and user tests were used to discover this knowledge.

The user test was conducted in a virtual environment, with four driving scenarios defined for testing. The scenes and animations for the test were built in Unity, and the test was conducted in a simulated driving environment with a VR rig. Four categories of data were collected in the test. Twenty-one participants from Volvo Cars completed the test and provided relevant feedback on the design of CMS & AR.

The user test results revealed that the participants' driving performance using CMS (without augmented information) did not improve over traditional mirrors. Most participants indicated that they would only upgrade from a traditional mirror to a CMS car with AR, rather than just a CMS car, as CMS did not provide enough benefit over traditional mirrors. The paper also discussed possible reasons behind this finding.

The feedback and suggestions from the participants on the design of CMS & AR, obtained through questionnaires and interviews, are organized into a guideline on the design of CMS & AR. In addition, this paper gave recommendations for future study. Finally, this paper also discussed the challenges and experiences encountered in this study. Among them, the limitations of doing tests in a VR environment are highlighted to help future CMS research and testing. / Safe Car Driving with Head Up Displays and Camera Monitor Systems (SCREENS) (Vinnova Dnr: 2020-05129)
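A comparison like the one described above — the same drivers tested under different mirror conditions — is a within-subject design, where each participant serves as their own control. A minimal sketch of such an analysis (the data and metric here are illustrative assumptions, not the thesis's actual measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-driver reaction times (seconds) in two conditions,
# measured for the same 21 drivers (within-subject design).
mirror = rng.normal(loc=1.50, scale=0.20, size=21)           # traditional mirror
cms = mirror + rng.normal(loc=0.02, scale=0.10, size=21)     # plain CMS

# Paired t-test: compares the per-driver differences, not the raw groups.
t_stat, p_value = stats.ttest_rel(mirror, cms)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference between CMS and the traditional mirror.")
```

A paired test is the natural choice here because between-driver variability (some people simply react faster) cancels out in the per-driver differences.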
|
46 |
Using Virtual Reality to Produce 3-D Graphical Simulation of the Construction and Use of Dougong in Chinese Architecture Emphasizing the Song and Qing Dynasties Hao, Shilun 18 September 2018 (has links)
No description available.
|
47 |
Virtual reality design principles: A case study on VRChat Eckmann, Peter January 2024 (has links)
Virtual Reality is rapidly growing in popularity. This new medium offers many possibilities for those who wish to explore what the technology provides in the 3D environment. However, the design principles that have been used for decades revolve around 2D surfaces, and applying them in 3D space can cause several incompatibility issues that diminish the user experience.

This study aimed to highlight which aspects of virtual reality need to be improved, compared to non-virtual-reality platforms, to enhance the user experience. To do this, VRChat, a virtual reality platform, was chosen, as it can be used by both VR and non-VR headset users alike. Comparing these two user bases could highlight the pros and cons of the current system and help give guidelines on how to create future VR platforms. During the test period, 58 people participated; they performed specific tasks on the platform and filled out a quantitative data-gathering survey based on the User Experience Questionnaire (UEQ).

After comparing the two user bases, the results show that there is no significant difference between using VRChat either way, allowing both user bases to enjoy the platform equally. However, because of these results, the work failed to highlight which aspects of a VR platform need to be changed to fit the needs of VR headset users. This implies the need for further research and experimentation with the medium. In the future, further research, testing and experimentation are needed to improve current design models and make the systems more pleasant for VR headset users.
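The comparison described above — two independent user groups (VR headset vs. non-VR) rated on UEQ scales — is typically tested with a two-sample test. A minimal hedged sketch (synthetic UEQ-style scores on the usual −3…+3 scale, Welch's t-test; not the study's actual data or analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant UEQ scale means (-3..+3) for two groups.
vr_scores = rng.normal(loc=1.2, scale=0.8, size=30)        # VR headset users
desktop_scores = rng.normal(loc=1.1, scale=0.8, size=28)   # non-VR users

# Welch's t-test: does not assume equal variances or equal group sizes.
t_stat, p_value = stats.ttest_ind(vr_scores, desktop_scores, equal_var=False)

if p_value < 0.05:
    print(f"Significant difference (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    print(f"No significant difference (t = {t_stat:.2f}, p = {p_value:.3f})")
```

Welch's variant is preferred over the classic t-test when the two groups differ in size, as they do here.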
|
48 |
Rapid Design and Prototyping Methods for Mobile Head-Worn Mixed Reality (MR) Interface and Interaction Systems Redfearn, Brady Edwin 09 February 2018 (has links)
As Mixed Reality (MR) technologies become more prevalent, it is important for researchers to design and prototype the kinds of user interface and user interactions that are most effective for end-user consumers. Creating these standards now will aid in technology development and adoption in MR overall. In the current climate of this domain, however, the interface elements and user interaction styles are unique to each hardware and software vendor and are generally proprietary in nature. This results in confusion for consumers.
To explore the MR interface and interaction space, this research employed a series of standard user-centered design (UCD) methods to rapidly prototype 3D head-worn display (HWD) systems in the first responder domain. These methods were performed across a series of 13 experiments, resulting in an in-depth analysis of the most effective methods experienced herein and providing suggested paths forward for future researchers in 3D MR HWD systems.
Lessons learned from each individual method and across all of the experiments are shared. Several characteristics are defined and described as they relate to each experiment, including interface, interaction, and cost. / Ph. D. / Trends in technology development have shown that the inclusion of virtualized objects and worlds will become more popular in both professional workflows and personal entertainment. As these synthetic objects become easier to build and deploy in consumer devices, it will become increasingly important for a set of standard information elements (e.g., the “save” operation disk icon in desktop software) and user interaction motifs (e.g., “pinch and zoom” on touch screen interfaces) to be deployed in these types of futuristic technologies.
This research effort explores a series of rapid design and prototype methods that inform how a selection of common interface elements in the first responder domain should be communicated to the user. It also explores how users in this domain prefer to interact with futuristic technology systems. The results from this study are analyzed across a series of characteristics and suggestions are made on the most effective methods and experiments that should be used by future researchers in this domain.
|
49 |
A Multi-Modal, Modified-Feedback and Self-Paced Brain-Computer Interface (BCI) to Control an Embodied Avatar's Gait Alchalabi, Bilal 12 1900 (has links)
Brain-computer interfaces (BCI) have been used to control the gait of a virtual self-avatar with the
aim of being used in gait rehabilitation. A BCI decodes the brain signals representing a desire to
do something and transforms them into a control command for controlling external devices.
The feelings described by the participants when they control a self-avatar in an immersive virtual
environment (VE) demonstrate that humans can be embodied in the surrogate body of an avatar
(ownership illusion). It has recently been shown that inducing the ownership illusion and then
manipulating the movements of one’s self-avatar can lead to compensatory motor control
strategies.
In order to maximize this effect, there is a need for a method that measures and monitors
embodiment levels of participants immersed in virtual reality (VR) to induce and maintain a strong
ownership illusion. This is particularly true given that high levels of BCI
performance and embodiment are inter-connected. To reach one of them, the second must be
reached as well. Some limitations of many existing systems hinder their adoption for
neurorehabilitation: 1- some use motor imagery (MI) of movements other than gait; 2- most
systems allow the user to take single steps or to walk but do not allow both, which prevents users
from progressing from steps to gait; 3- most of them function in a single BCI mode (cue-paced or
self-paced), which prevents users from progressing from machine-dependent to machine-independent
walking. Overcoming the aforementioned limitations can be done by combining
different control modes and options in one single system. However, this would have a negative
impact on BCI performance, therefore diminishing its usefulness as a potential rehabilitation tool.
In this case, there will be a need to enhance BCI performance. For such purpose, many techniques
have been used in the literature, such as providing modified feedback (whereby the presented
feedback is not consistent with the user’s MI) and sequential training (recalibrating the classifier as
more data becomes available).
This thesis was developed over 3 studies. The objective in study 1 was to investigate the possibility
of measuring the level of embodiment of an immersive self-avatar, during the performing,
observing and imagining of gait, using electroencephalogram (EEG) techniques, by presenting
visual feedback that conflicts with the desired movement of embodied participants.
The objective of study 2 was to develop and validate a BCI to control single steps and forward
walking of an immersive virtual reality (VR) self-avatar, using mental imagery of these actions, in
cue-paced and self-paced modes. Different performance enhancement strategies were
implemented to increase BCI performance.
The data of these two studies were then used in study 3 to construct a generic classifier that could
eliminate offline calibration for future users and shorten training time.
Twenty different healthy participants took part in studies 1 and 2. In study 1, participants wore an
EEG cap and motion capture markers, with an avatar displayed in a head-mounted display (HMD)
from a first-person perspective (1PP). They were cued to either perform, watch or imagine a single
step forward or to initiate walking on a treadmill. For some of the trials, the avatar took a step with
the contralateral limb or stopped walking before the participant stopped (modified feedback).
In study 2, participants completed a 4-day sequential training to control the gait of an avatar in
both BCI modes. In cue-paced mode, they were cued to imagine a single step forward, using their
right or left foot, or to walk forward. In the self-paced mode, they were instructed to reach a target
using the MI of multiple steps (switch control mode) or maintaining the MI of forward walking
(continuous control mode). The avatar moved as a response to two calibrated regularized linear
discriminant analysis (RLDA) classifiers that used the μ power spectral density (PSD) over the
foot area of the motor cortex as features. The classifiers were retrained after every session. During
the training, and for some of the trials, positive modified feedback was presented to half of the
participants, where the avatar moved correctly regardless of the participant’s real performance.
In both studies, the participants’ subjective experience was analyzed using a questionnaire. Results
of study 1 show that subjective levels of embodiment correlate strongly with the power differences
of the event-related synchronization (ERS) within the μ frequency band, and over the motor and
pre-motor cortices between the modified and regular feedback trials.
Results of study 2 show that all participants were able to operate the cue-paced BCI and the self-paced
BCI in both modes. For the cue-paced BCI, the average offline performance (classification
rate) on day 1 was 67±6.1% and 86±6.1% on day 3, showing that the recalibration of the classifiers
enhanced the offline performance of the BCI (p < 0.01). The average online performance was
85.9±8.4% for the modified feedback group (77-97%) versus 75% for the non-modified feedback
group. For self-paced BCI, the average performance was 83% at switch control and 92% at
continuous control mode, with a maximum of 12 seconds of control. Modified feedback enhanced
BCI performances (p =0.001). Finally, results of study 3 show that the constructed generic models
performed as well as models obtained from participant-specific offline data. The results show that
it is possible to design a participant-independent zero-training BCI.
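The classification pipeline described above — μ-band (8–12 Hz) power spectral density over motor-cortex channels fed to a regularized LDA — can be sketched as follows. This is an illustrative stand-in, not the thesis's implementation: scikit-learn's shrinkage LDA substitutes for the RLDA described, the sampling rate and channel count are assumptions, and the EEG epochs are synthetic (a 10 Hz rhythm is injected into the "rest" class to mimic μ suppression during motor imagery):

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def mu_band_power(epochs, fs=FS):
    """Mean power in the mu band (8-12 Hz) per epoch and channel."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[..., band].mean(axis=-1)  # shape: (n_epochs, n_channels)

# Synthetic data: 40 epochs per class, 3 channels, 2 s each.
rng = np.random.default_rng(7)
t = np.arange(2 * FS) / FS
rest = rng.normal(0.0, 1.0, size=(40, 3, 2 * FS))
rest += 0.8 * np.sin(2 * np.pi * 10 * t)               # strong mu rhythm at rest
mi = rng.normal(0.0, 1.0, size=(40, 3, 2 * FS))        # motor imagery: mu suppressed

X = np.vstack([mu_band_power(rest), mu_band_power(mi)])
y = np.array([0] * 40 + [1] * 40)                      # 0 = rest, 1 = MI

# Shrinkage-regularized LDA; retraining it on pooled data after each
# session mirrors the sequential-recalibration idea described above.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Shrinkage regularization is what makes LDA usable here: with few calibration trials relative to the number of features, the plain covariance estimate is unstable, and shrinking it toward a diagonal target stabilizes the classifier.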
|
50 |
Mitigating VR Cybersickness Caused by Continuous Joystick Movement Aditya Ajay Oka (16529664) 13 July 2023 (has links)
When users experience virtual reality (VR) for the first time, they can be met with some degree of motion sickness and nausea, especially if continuous joystick locomotion is used. The symptoms induced during these VR experiences fall under the umbrella term cybersickness, and because of these uncomfortable experiences, users can get a bad first impression and abandon the technology, never able to fully appreciate the convenience and fascinating adventures VR has to offer. As such, this project compares the effects of two cybersickness mitigation methods (dynamic field of view (FOV) and virtual reference frame), both against each other and combined, on user-reported cybersickness symptoms, to determine the best combination to implement in commercial applications and help create more user-friendly VR experiences. The hypotheses are that combining the FOV reduction and resting frame methods mitigates VR cybersickness more effectively without hindering the user's experience, and that the virtual nose method is more potent at mitigating cybersickness than dynamic FOV. To test these hypotheses, an experimental game was developed for the Meta Quest 2 with five levels: a tutorial level and four maze levels (one for each scenario). Participants were asked to complete the tutorial level until they got used to the virtual reality controls, and were then instructed to complete the maze level with one of the following conditions for each run: no method, dynamic field of view only, virtual nose only, and dynamic field of view and virtual nose combined. After completing each maze trial, participants filled out a simulator sickness questionnaire to report how much sickness they felt during the test.

Upon concluding the testing phase with 36 participants and compiling the data, the results showed that the subjects preferred the dynamic FOV method even though they completed the trials significantly faster with the virtual nose method; it remains inconclusive which method is truly more effective. Furthermore, the results were also inconclusive as to whether the scenario with both methods enabled is significantly better or worse than either method used separately.
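The dynamic-FOV idea above — narrowing the visible field as virtual motion increases — can be sketched as a pure function mapping locomotion speed to a vignetted field of view. The thresholds and angles here are illustrative assumptions, not values from the study:

```python
def restricted_fov(speed, full_fov=110.0, min_fov=60.0, max_speed=4.0):
    """Map joystick locomotion speed (m/s) to a horizontal FOV in degrees.

    At rest the full FOV is shown; as speed approaches max_speed the view
    is narrowed linearly toward min_fov (all parameter values assumed).
    """
    fraction = max(0.0, min(speed / max_speed, 1.0))
    return full_fov - fraction * (full_fov - min_fov)

# Standing still keeps the full field of view ...
print(restricted_fov(0.0))   # 110.0
# ... while fast joystick movement narrows it.
print(restricted_fov(2.0))   # 85.0
print(restricted_fov(4.0))   # 60.0
```

In a real engine this value would drive a vignette shader each frame; the rationale is that peripheral optic flow is a major driver of vection-induced cybersickness, so trimming the periphery only during motion reduces discomfort without permanently shrinking the view.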
|