  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
651

La réalité augmentée : fusion de vision et navigation / Augmented reality : the fusion of vision and navigation

Zarrouati-Vissière, Nadège 20 December 2013 (has links)
The purpose of this thesis is to study algorithms for visual augmented reality. Several requirements of such applications are addressed, under the constraint that the use of a monocular system makes depth and linear motion indistinguishable. The real-time, realistic insertion of virtual objects into images of an arbitrary, unknown real environment requires both a dense three-dimensional (3D) perception of this environment at every instant and a precise localization of the camera within it. The first requirement is studied under the assumption of known camera dynamics, the second under the assumption of known depth; both assumptions are realizable in practice. Both problems are posed in the context of a spherical camera model, which yields rotation-invariant (SO(3)-invariant) dynamical equations for light intensity and depth. The theoretical observability of these problems is studied with tools of differential geometry on the Riemannian unit sphere. A practical implementation is presented: experimental results demonstrate the ability to localize a camera in an unknown environment while precisely mapping that environment.
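The depth/linear-motion indistinguishability that motivates this thesis can be illustrated with a toy pinhole model (a hypothetical sketch for illustration, not code from the thesis): scaling all scene depths and the camera translation by the same factor leaves the projected images unchanged, so a monocular camera cannot tell the two scenes apart.

```python
import numpy as np

def project(points, translation):
    """Pinhole projection of 3D points after applying a camera translation."""
    moved = points - translation          # scene points in the moved camera frame
    return moved[:, :2] / moved[:, 2:3]   # perspective divide (unit focal length)

# A toy scene and a camera translation (illustrative values).
points = np.array([[0.5, 0.2, 2.0], [-0.3, 0.1, 3.0], [0.1, -0.4, 2.5]])
t = np.array([0.1, 0.0, 0.05])

# Scaling all depths AND the translation by the same factor produces
# pixel-identical images: depth and linear motion cannot be separated
# from monocular images alone.
lam = 3.7
assert np.allclose(project(points, t), project(lam * points, lam * t))
```

This is why the thesis assumes known dynamics for the mapping problem and known depth for the localization problem: either assumption fixes the scale that the images alone cannot provide.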
652

Augmented Reality for Product Packaging : An Android Augmented Reality App

Nikobonyadrad, Sam January 2012 (has links)
Augmented Reality for smartphones, while still in its early stages, has great potential for the future of mobile marketing and has already established a significant market presence. Although Augmented Reality is a relatively new concept, its underlying techniques have been in use for years. By generating enthusiasm in the retail market, Augmented Reality presents many opportunities: simulating real-time virtual interaction with an unfamiliar product encourages customers to engage with an advertisement. The sensory enhancement that Augmented Reality provides over a real-world environment can derive either from the device's location or from images of the environment surrounding the device; the latter is called vision-based Augmented Reality. This study aims to develop a vision-based Augmented Reality application for the Android platform. The idea is based on a proposal from a product-packaging company that would like a smartphone application giving shoppers an idea of what is inside a package. This is only one of the numerous advantages that AR brings, and the benefits of this technology for customers appear almost limitless. Once this goal has been achieved, the application can be used to provide relevant information about a product, such as its physical specifications, ingredients, an animated instruction manual, a repair wizard, and so on. The main focus of the implementation is on integrating an existing AR SDK with a Java rendering library so that they cooperate. In addition, the fundamentals of the image-registration process, which is the basis of Augmented Reality, are addressed. Both the advantages and drawbacks of the implementation model are discussed in this paper, as are the problematic issues surrounding the execution steps.
653

Ger interaktion genom rörelse högre engagemang? : En studie av två olika zoom-tekniker inom mobil AR / Does movement in Interaction give a higher Engagement? : A study of two different Zoom-Techniques in Mobile Augmented Reality

Holm, Anna January 2012 (has links)
This thesis presents a study of device-movement-based zoom and pinch zoom. Device-movement-based zoom means that the user zooms in by walking closer to an object and zooms out by moving further away from it. Pinch zoom means that the user zooms through two-finger pinch gestures on the device's touch screen. The aim was to examine the similarities and differences between the two systems from a user perspective, and whether engagement increases when the movements become larger and the user is forced to be more physically active when interacting with the system. The 24 participants in the study tested two systems, one with each type of zoom, and both qualitative and quantitative data were collected through questionnaires. The results showed that the two systems were almost equally popular (on the question: which of the two systems would you prefer?). However, the answers were influenced by which phone the participants used in everyday life. Those accustomed to a phone with buttons for navigation tended to prefer device-movement-based zoom, while those accustomed to a touch screen tended to prefer pinch zoom; the same tendency appeared consistently throughout the collected data. There was a tendency for device-movement-based zoom to produce higher user engagement, but no significant differences were found for the group as a whole; the tendency was strongest for the subcategories perceived engagement and stability. When the participants were divided by the mobile phone they normally used, those already accustomed to pinch zoom rated the two systems as more equal in self-reported engagement, whereas those not accustomed to pinch zoom (the participants who used button phones in everyday life) noticed a larger difference between the systems. For the latter group, significant differences were found for total engagement as well as for the subcategories stability and perceived engagement.
It is worth noting that the group sizes were very uneven and the number of participants accustomed to a button phone was very low. Device-movement-based zoom was experienced as natural and freer, whereas pinch zoom is convenient in situations where there is no room to walk around. Another advantage of pinch zoom is that the 3D models do not disappear, as they can with device-movement-based zoom, since the entire marker must be visible to the device's camera for the 3D model to be displayed. On the other hand, pinch zoom appears relatively difficult to use for those unaccustomed to it, which was also evident in the qualitative data. To establish how well these results generalize, for example with a focus on bodily interaction and whether one of the zoom techniques is preferable overall, further similar studies are encouraged.
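The two zoom techniques compared in the study can be sketched as mappings from the user's input to a scale factor (hypothetical formulas for illustration; the thesis's actual implementations may differ):

```python
def pinch_zoom(d0, d1, scale0=1.0):
    """Scale factor from a two-finger pinch: spreading the fingers
    (d1 > d0, distances in pixels) zooms in."""
    return scale0 * d1 / d0

def movement_zoom(ref_dist, cam_dist, scale0=1.0):
    """Scale factor from device movement: walking closer to the marker
    (cam_dist < ref_dist, distances in metres) zooms in."""
    return scale0 * ref_dist / cam_dist

# Spreading the fingers from 100 px to 200 px doubles the zoom;
# halving the distance to the marker does the same.
assert pinch_zoom(100, 200) == 2.0
assert movement_zoom(1.0, 0.5) == 2.0
```

Both map naturally to a multiplicative scale, which is what makes them directly comparable in a study like this; the difference lies entirely in which physical action drives the input.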
654

Automatic localization of endoscope in intraoperative CT image : a simple approach to augmented reality guidance in laparoscopic surgery / Localisation automatique de l'endoscope dans une image CT intraopératoire : une approche simple du guidage par réalité augmentée en chirurgie laparoscopique

Bernhardt, Sylvain 25 February 2016 (has links)
Over the past decades, minimally invasive surgery has progressively become more popular than open surgery thanks to greater clinical benefits. However, this kind of intervention deprives the surgeon of direct vision of the scene. Introducing augmented reality into minimally invasive surgery appears to be a viable solution to this drawback and has thus been actively investigated by the research community. Yet correctly augmenting a laparoscopic scene remains challenging because of the non-rigidity of abdominal tissues and organs. As a consequence, the literature does not report a satisfactory approach to laparoscopic augmented reality: existing methods either lack accuracy or require expensive and impractical additional equipment. In light of this, we present a novel paradigm for augmented reality in laparoscopic surgery. Relying only on the standard equipment of a hybrid operating room, our approach can provide the static relationship between the endoscope and an intraoperative 3D scan. Extensive experiments on a radio-opaque pattern show quantitatively that our augmentations are accurate to within one millimeter. Tests on in vivo data further demonstrate the clinical potential of our approach in several realistic surgical cases.
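A static rigid relationship between two coordinate frames, such as the endoscope and the intraoperative scan, is commonly estimated from point correspondences (for instance, fiducials on a radio-opaque pattern) with a least-squares method such as the Kabsch algorithm. This is a generic sketch of that standard technique, not the thesis's actual method:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)             # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
B = A @ R_true.T + t_true
R, t = rigid_transform(A, B)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With real measurements the residual of this fit gives the registration error, which is the kind of quantity the millimeter-accuracy claim above would be evaluated against.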
655

Augmented reality based user interface for mobile applications and services

Antoniac, P. (Peter) 07 June 2005 (has links)
Abstract: Traditional user-interface design for mobile phones is limited to a small set of interactions that provide only the means to place phone calls or write short messages. Such narrow activities, supported via current terminals, prevent users from moving towards the mobile and ubiquitous computing environments of the future. Unfortunately, the next generation of user interfaces for mobile terminals seems to apply the same design patterns commonly used for desktop computers. Whereas the desktop environment has enough resources to implement such designs, the capabilities of mobile terminals fall under constraints dictated by mobility, such as size and weight. Additionally, to make mobile terminals available to everyone, users should be able to operate them with minimal or no preparation, whereas users of desktop computers require a certain degree of training. This research looks into how to improve the user interface of future mobile devices through a more human-centred design. One possible solution is to combine Augmented Reality techniques with image recognition in such a way as to give the user access to a "virtualized interface". Such an interface is feasible because the user of an Augmented Reality system can see synthetic objects overlaying the real world. With overlays on the user's sight and an image-recognition process, the user interacts with the system through a combination of virtual buttons and hand gestures. The major contribution of this work is the definition of the user gestures that make human-computer interaction with such Augmented Reality based user interfaces possible. Another important contribution is the evaluation of how mobile applications and services work with this kind of user interface, and of whether the technology to support it is available.
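Interaction through virtual buttons, as described above, reduces at its core to a hit test between a recognized fingertip position and button regions overlaid on the camera image. A minimal hypothetical sketch (the button names and layout are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class VirtualButton:
    x: float      # top-left corner in image coordinates
    y: float
    w: float      # width and height of the overlay region
    h: float
    label: str

    def hit(self, fx, fy):
        """True if the tracked fingertip (fx, fy) lies inside the button."""
        return self.x <= fx <= self.x + self.w and self.y <= fy <= self.y + self.h

# Buttons overlaid on the camera image; a tracked fingertip selects one.
buttons = [VirtualButton(10, 10, 80, 40, "call"),
           VirtualButton(10, 60, 80, 40, "sms")]
pressed = [b.label for b in buttons if b.hit(30, 75)]
assert pressed == ["sms"]
```

The hard part of such a system is not the hit test but the gesture recognition that produces (fx, fy) reliably, which is precisely where the thesis's gesture definitions come in.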
656

Grenzgänger / Border crossers

Tallig, Anke 05 November 2012 (has links) (PDF)
Within the DFG Research Training Group "CrossWorlds" (http://www.crossworlds.info), an autonomous mobile robot is being built in the communication subproject. The application scenario is the Industriemuseum Chemnitz. Using technical means, the exhibits of the industrial museum are augmented with information and representations from the virtual world. The robot CLUES (Cross worLds autonomoUs mobilE robot hoSt) acts as a mediator between the real museum world and the virtual information world. It offers visitors the opportunity to interact with the provided content and, as the museum's host, simultaneously perceives the visitors. This first report covers the basic idea of the project and its hardware equipment. It describes the selection and coordination of the technical devices used. With regard to the application scenario and the technical possibilities, the problems encountered and their solutions are discussed.
657

Techniques d’interaction, affichage personnalisé et reconstruction de surfaces pour la réalité augmentée spatiale / Interaction techniques, personalized experience and surface reconstruction for spatial augmented reality

Ridel, Brett 17 October 2016 (has links)
This thesis is situated in the field of spatial augmented reality (SAR). SAR makes it possible to improve or modify the perception of reality through virtual information displayed directly in the real environment using a video projector. Many fields can benefit from it, such as tourism, entertainment, education, medicine, industry, and cultural heritage. Current computing techniques allow the surface geometry of real objects, such as archaeological artefacts, to be acquired, analysed, and visualised. We propose a SAR interaction and visualisation technique that combines the advantages of studying real archaeological artefacts with those of studying their 3D models. To this end, we superimpose on the object an expressive SAR visualisation based on curvatures, making it possible, for example, to reveal the detail of engravings. We then simulate the use of a flashlight by means of a 6-degree-of-freedom controller: the user can thus specify the area of the object to augment and adjust the many parameters required by the expressive rendering. One of the main characteristics of spatial augmented reality is that it allows several users to participate in the same experience simultaneously; depending on the target application, however, this can be a drawback. We propose a new display device for creating SAR experiences that are both multi-user and personalised, taking the user's point of view into account. For this we use a semi-transparent, retro-reflective projection screen positioned in front of the object to be augmented. We propose two different implementations of this new device, as well as two application scenarios.
When augmenting deformable objects, most current tracking techniques are no longer sufficient, even with prior knowledge of the object's geometry. With a view to later use for augmenting such objects, we propose an MLS-based technique for reconstructing developable surfaces by approximation with parabolic cylinders. This kind of surface can represent, for example, clothing or fabric. We propose a solution that removes approximation problems in highly ambiguous areas.
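Curvature-based expressive rendering highlights regions of high curvature, such as the edges of engravings. As a simple 1D analogue of the idea (a sketch, not the thesis's surface-curvature method), a discrete curvature proxy can be computed from the turning angle at each interior vertex of a polyline:

```python
import numpy as np

def turning_angles(poly):
    """Discrete curvature proxy: turning angle at each interior vertex
    of an (n, 2) polyline. 0 means straight, larger means sharper."""
    a = poly[:-2] - poly[1:-1]   # edge back to the previous vertex
    b = poly[2:] - poly[1:-1]    # edge forward to the next vertex
    cosang = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.pi - np.arccos(np.clip(cosang, -1.0, 1.0))

# A straight polyline has zero turning angle; a right-angle corner has pi/2.
straight = np.array([[0, 0], [1, 0], [2, 0]], float)
corner = np.array([[0, 0], [1, 0], [1, 1]], float)
assert np.allclose(turning_angles(straight), [0.0])
assert np.allclose(turning_angles(corner), [np.pi / 2])
```

On a surface the analogous quantities are the principal or mean curvatures estimated from the acquired geometry; thresholding or tone-mapping them yields the kind of engraving-revealing rendering described above.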
658

Možnosti využití mobilních technologií a rozšířené reality v destinačním marketingu / Possibilities of application of mobile technology and augmented reality in destination marketing

Štěpánová, Ludmila January 2014 (has links)
ŠTĚPÁNOVÁ, LUDMILA: Possibilities of application of mobile technology and augmented reality in destination marketing. Master's thesis. University of Economics, Prague. Department of Tourism. Thesis supervisor: Ing. Martin Vaško. Degree of qualification: Master's degree. Prague 2016. 80 pages. This Master's thesis focuses on new trends in the development of mobile technologies and their application in tourism. Its objective is to identify possibilities for applying mobile technologies, advergaming, and augmented and virtual reality in a destination's marketing activities. The thesis also examines the activities of the CzechTourism agency in this field.
659

SelfMakeup: um sistema de realidade aumentada para autoaplicação de maquiagem virtual / SelfMakeup: an augmented reality system for virtual self-makeup

Aline de Fátima Soares Borges 23 November 2017 (has links)
For centuries, cosmetics have been used in the most diverse societies. When it comes to facial makeup, however, the process of choosing a product is still a challenge: it is manual work that takes time and consumes the makeup itself, along with other materials for application and cleaning. This manual process also makes it difficult to try out several different products, because the skin must be cleaned to remove a previously applied product. A makeup-simulation system using augmented reality can therefore ease this process, allowing experimentation with combinations of products and comparison of the results, as well as trying products virtually and remotely, for example over the internet. Existing works on this theme let the user apply makeup to a photo, or even a video, of the user himself by means of a mouse or a finger touch on a touch-sensitive monitor, as if the user were applying makeup to a third person. In this dissertation we propose the development of SelfMakeup, an augmented reality system that allows the self-application of virtual makeup through touches made directly on the face rather than on the monitor. Our hypothesis is that this form of interaction is more natural and gives the user a better experience when testing virtual makeup products. The first step in enabling SelfMakeup was the development of a method to estimate the position of applicator touches on the face using an RGBD camera. We performed tests to evaluate the performance of this method and verified that its accuracy and precision were adequate for the purpose of this research. Next, we designed the system's graphical interface for applying virtual makeup. The interface supports highlighting and shading effects that simulate the effects of real makeup products. Results from a pilot experiment of our prototype with 32 volunteers suggest that SelfMakeup, by using touches directly on the face, provides a better user experience for trying on virtual makeup products.
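Estimating a touch position on the face with an RGBD camera relies on back-projecting a depth pixel into 3D camera coordinates using the pinhole intrinsics. This is a generic sketch of that standard step; the intrinsic values below are typical for consumer depth cameras and are assumptions, not the thesis's calibration:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """3D point in camera coordinates for pixel (u, v) with measured depth.
    fx, fy are the focal lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point maps straight onto the optical axis.
assert backproject(320, 240, 0.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0) == (0.0, 0.0, 0.5)
```

Comparing the back-projected fingertip point against the reconstructed face surface is then what allows the system to decide where, and whether, a touch occurred.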
660

Immersives Design von Architektur durch Kombination interaktiver Displays mit Augmented Reality / Immersive design of architecture by combining interactive displays with augmented reality

Engert, Severin 01 February 2021 (has links)
Architects design buildings according to a client's wishes. Nowadays this produces not only detailed floor plans but also complex three-dimensional models, from which photorealistic views of the designed building are computed. These views help people without domain knowledge, such as clients or the public, to get a better impression of the design and foster an understanding of spatial relationships. Immersive presentations using augmented and virtual reality have also found their way into architecture. This work begins by analysing related research in the fields of architecture, visualisation, and interaction. The insights gained feed into the development of concepts for an application that combines AR glasses with a touch-sensitive display. Selected concepts are implemented in a prototype application. This implementation serves to evaluate head-mounted augmented reality, combined with an additional display, for the presentation of architectural designs.
Contents: 1 Einleitung (1.1 Ziel, 1.2 Struktur dieser Arbeit); 2 Verwandte Forschung (2.1 Architektur, 2.2 Visualisierung in VR und AR, 2.2.1 Virtual Reality, 2.2.2 Augmented Reality, 2.3 Interaktion mit dreidimensionalen Inhalten, 2.4 Sonstige verwandte Forschung, 2.5 Zusammenfassung); 3 Konzepte (3.1 Modell-Präsentation, 3.2 Modell-Interaktion, 3.3 Modell-Manipulation, 3.3.1 Materialien, 3.3.2 Inneneinrichtung, 3.4 Innenansicht, 3.5 Mehrbenutzer-Betrieb, 3.6 Alternative Darstellungsarten, 3.7 Alternative Interaktionsmöglichkeiten, 3.8 Zusammenfassung); 4 Umsetzung (4.1 Hardware, 4.2 Importe, 4.3 Umgesetzte Konzepte, 4.4 Prototyp, 4.5 Performanz, 4.6 Zusammenfassung); 5 Evaluation und Ausblick (5.1 Evaluation, 5.2 Ausblick, 5.3 Zusammenfassung); Literatur
