1

Software architectures for photorealistic rendering

Plataniotis, Antonis C. January 1998 (has links)
No description available.
2

Warehouse3D : A graphical data visualization tool

Bengtsson, Christoffer, Hemström, Roger January 2011 (has links)
Automated warehouses are frequently used within the industry. SQL databases are often used for storing various kinds of information about stored items, including their physical positions in the warehouse along the X, Y and Z axes. Benefits of this include savings in working time, optimization of storage capability and – most of all – increased employee safety. IT services company Sogeti’s office in Karlstad has been looking into a project on behalf of one of their customers to implement this kind of automated warehouse. In the pilot study of this project, ideas of a three-dimensional graphic visualization of the warehouse and its stored contents have come up. This kind of tool would give a warehouse operator a clear overview of what is currently in store, as well as quick access to various pieces of information about each and every item in store. Also, in a wider perspective, other types of warehouses and storage areas could benefit from this kind of tool. During the course of this project, a graphical visualization tool for this purpose was developed, resulting in a product that met a significant part of the initial requirements.
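As a rough illustration of the idea described in this abstract (not code from the thesis), the sketch below maps a record holding an item's X, Y and Z position to a box placed in a 3D scene; the record type and field names are hypothetical.

```cpp
#include <string>
#include <vector>

// Hypothetical record as it might be read from the warehouse SQL database.
struct ItemRecord {
    std::string articleId;   // identifier of the stored item
    double x, y, z;          // physical position in the warehouse (metres)
};

// Minimal scene-side representation: one axis-aligned box per stored item.
struct Box {
    double cx, cy, cz;       // centre of the box in scene coordinates
    double size;             // edge length used for drawing
};

// Map database records to drawable boxes; a real tool would also keep the
// item metadata attached so the operator can query it by picking the box.
std::vector<Box> buildScene(const std::vector<ItemRecord>& records,
                            double boxSize = 1.0) {
    std::vector<Box> boxes;
    boxes.reserve(records.size());
    for (const auto& r : records) {
        boxes.push_back({r.x, r.y, r.z, boxSize});
    }
    return boxes;
}
```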
3

Facing experience : a painter's canvas in virtual reality

Dolinsky, Margaret January 2014 (has links)
This research investigates how shifts in perception might be brought about through the development of visual imagery created by the use of virtual environment technology. Through a discussion of historical uses of immersion in art, this thesis will explore how immersion functions and why immersion has been a goal for artists throughout history. It begins with a discussion of ancient cave drawings and the relevance of Plato’s Allegory of the Cave. Next it examines the biological origins of “making special.” The research will discuss how this concept, combined with the ideas of “action” and “reaction,” has reinforced the view that art is fundamentally experiential rather than static. The research emphasizes how present-day virtual environment art, in providing a space that engages visitors in computer graphics, expands on previous immersive artistic practices. The thesis examines the technical context in which the research occurs by briefly describing the use of computer science technologies, the fundamentals of visual arts practices, and the importance of aesthetics in new media, and provides a description of my artistic practice. The aim is to investigate how combining these approaches can enhance virtual environments as artworks. The computer science of virtual environments includes both hardware and software programming. The resultant virtual environment experiences are technologically dependent on the types of visual displays being used, including screens and monitors, and their subsequent viewing affordances. Virtual environments fill the field of view and can be experienced with a head mounted display (HMD) or a large screen display. The sense of immersion gained through the experience depends on how tracking devices and related peripheral devices are used to facilitate interaction. The thesis discusses visual arts practices with a focus on how illusions shift our cognition and perception in the visual modalities. This discussion includes how perceptual thinking is the foundation of art experiences, how analogies are the foundation of cognitive experiences and how the two intertwine in art experiences for virtual environments. An examination of the aesthetic strategies used by artists and new media critics is presented to discuss new media art. This thesis investigates the visual elements used in virtual environments and prescribes strategies for creating art for virtual environments. Methods constituting a unique virtual environment practice that focuses on visual analogies are discussed. The artistic practice that is discussed as the basis for this research also concentrates on experiential moments and shifts in perception and cognition and references Douglas Hofstadter, Rudolf Arnheim and John Dewey. Virtual environments provide for experiences in which the imagery generated updates in real time. Following an analysis of existing artwork and critical writing relative to the field, the process of inquiry has required the creation of artworks that involve tracking systems, projection displays, sound work, and an understanding of the importance of the visitor. In practice, the research has shown that the visitor should be seen as an interlocutor, interacting from a first-person perspective with virtual environment events, where avatars or other instrumental intermediaries, such as guns, vehicles, or menu systems, do not occlude the view.
The aesthetic outcomes of this research are the result of combining visual analogies, real time interactive animation, and operatic performance in immersive space. The environments designed in this research were informed initially by paintings created with imagery generated in a hypnopompic state or during the moments of transitioning from sleeping to waking. The drawings often emphasize emotional moments as caricatures and/or elements of the face as seen from a number of perspectives simultaneously, in the way of some cartoons, primitive artwork or Cubist imagery. In the imagery, the faces indicate situations, emotions and confrontations which can offer moments of humour and reflective exploration. At times, the faces usurp the space and stand in representation as both face and figure. The power of the placement of the caricatures in the paintings becomes apparent as the imagery stages the expressive moment. The placement of faces sets the scene, establishes relationships and promotes the honesty and emotions that develop over time as the paintings are scrutinized. The development process of creating virtual environment imagery starts with hand drawn sketches of characters, develops further as paintings on a “digital canvas”, and continues as the paintings are built into animated, three-dimensional models that are finally incorporated into a virtual environment. The imagery is generated while drawing, typically with paper and pencil, in a stream of consciousness during the hypnopompic state. This method became an aesthetic strategy for producing a snappy straightforward sketch. The sketches are explored further as they are worked up as paintings. During the painting process, the figures become fleshed out and their placement on the page, in essence, brings them to life. These characters inhabit a world that I explore even further by building them into three dimensional models and placing them in computer generated virtual environments. The methodology of developing and placing the faces/figures became an operational strategy for building virtual environments. In order to open up the range of art virtual environments, and develop operational strategies for visitors’ experience, the characters and their facial features are used as navigational strategies, signposts and methods of wayfinding in order to sustain a stream of consciousness type of navigation. Faces and characters were designed to represent those intimate moments of self-reflection and confrontation that occur daily within ourselves and with others. They sought to reflect moments of wonderment, hurt, curiosity and humour that could subsequently be relinquished for more practical or purposeful endeavours. They were intended to create conditions in which visitors might reflect upon their emotional state, enabling their understanding and trust of their personal space, in which decisions are made and the nature of the world is determined. In order to extend the split-second, frozen moment of recognition that a painting affords, the caricatures and their scenes are given new dimensions as they become characters in a performative virtual reality. Emotables, distinct from avatars, are characters confronting visitors in the virtual environment to engage them in an interactive, stream of consciousness, non-linear dialogue. Visitors are also situated with a role in a virtual world, where they are required to adapt to the language of the environment in order to progress through the dynamics of a drama.
The research showed that imagery created in a context of whimsy and fantasy could bring ontological meaning and aesthetic experience into the interactive environment, such that emotables or facially expressive computer graphic characters could be seen as another brushstroke in painting a world of virtual reality.
4

Advanced Multi-modal User Interfaces in 3D Computer Graphics and Virtual Reality

Chen, Yenan January 2012 (has links)
Computers are continuously developed to satisfy human demands and are typical tools used everywhere, ranging from daily life to all kinds of research. Virtual Reality (VR), a simulated environment that presents physical presence in the real world and in imaginary worlds, has been widely applied to simulate virtual environments. When only a computer is used for the simulation, the user’s experience is limited to visual perception, since a computer alone can only display visualizations of data, while human senses include sight, smell, hearing, taste, touch and so on. Other devices can be applied, such as haptics, devices for the sense of touch, to enhance human perception in a virtual environment. A good way to deploy VR applications is to place them in a virtual display system, a system with multiple tools that displays a virtual environment and engages different human senses, to enhance the feeling of being immersed in a virtual environment. Such virtual display systems include the VR dome, the CAVE (a recursive acronym), the VR workbench, the VR workstation and so on. Menus, with their many advantages for manipulating applications, are common in conventional systems, operating systems and other computer systems; normally a system will not be usable without them. Although VR applications are more natural and intuitive, they are much less usable, or not usable at all, without menus. Yet very few studies have focused on user interfaces in VR. This situation motivates us to work further in this area. We want to create two models with different purposes: one is inspired by menus in conventional systems and the sense of touch, and the other is designed around the spatial presence of VR. The first model is a two-dimensional pie menu in pop-up style with spring force feedback. The model has a pie shape with eight options on the root menu, and a pop-up hierarchical menu belongs to each option on the root menu. When the haptic device is near an option on the root menu, the spring force pulls the haptic device towards the center of the option, that option is selected, and then a sub menu with nine options pops up. The pie shape together with the spring force effect is expected to both increase the speed of selection and decrease the error rate of selection. The other model is a semiautomatic three-dimensional cube menu. This cube menu is designed with the aim of providing a simple, elegant, efficient and accurate user interface approach. The model has four active faces, the front, back, left and right faces of the cube; each face represents a category and has nine widgets, and users can make selections in different categories. An efficient way to change between categories is to rotate the cube automatically. Thus, a navigable rotation animation system is built that rotates the cube horizontally by ninety degrees each time, so one of the faces always faces the user. These two models are built with H3DAPI, an open source haptics software development platform, together with UI toolkit, a user interface toolkit. After the implementation, we conducted a pilot study, a formative study, to evaluate the feasibility of both menus. This pilot study includes a list of tasks for each menu, a questionnaire regarding the menu performance for each subject, and a discussion with each subject. Six students participated as test subjects.
In the pie menu, most of the subjects felt that the spring force guided them to the target option and that they could control the haptic device comfortably under such force. In the cube menu, the navigation rotation system works well and the cube rotates accurately and efficiently. The results of the pilot study show that the models work as we initially expected. The recorded task completion time for each menu shows that, with the same number of tasks and similar difficulty, subjects spent more time on the cube menu than on the pie menu. This may indicate that the pie menu is a faster approach compared to the cube menu. We further consider that both the pie shape and the force feedback may help reduce the selection time. The result of the option selection error rate test on the cube menu may indicate that option selection without any force feedback can also achieve a considerably good effect. The answers to the questionnaire suggest that both menus are comfortable to use and easy to control.
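As a rough sketch of the spring effect described in this abstract (not the thesis's H3DAPI code; the capture radius, stiffness value and option layout are assumptions made here for illustration), the snippet below pulls the haptic cursor toward the centre of the nearest of eight pie options whenever it is within a capture radius:

```cpp
#include <array>
#include <cmath>

struct Vec2 { double x, y; };

const double kPi = 3.14159265358979323846;

// Centres of the eight root-menu options, laid out on a circle of radius r.
std::array<Vec2, 8> optionCentres(double r) {
    std::array<Vec2, 8> c{};
    for (int i = 0; i < 8; ++i) {
        double a = 2.0 * kPi * i / 8.0;                // angle of option i
        c[i] = { r * std::cos(a), r * std::sin(a) };
    }
    return c;
}

// Spring force applied to the haptic cursor: zero outside the capture
// radius, otherwise a Hooke-style pull of stiffness k toward the centre
// of the nearest option, which snaps the cursor onto that option.
Vec2 springForce(const Vec2& cursor, const std::array<Vec2, 8>& centres,
                 double captureRadius, double k) {
    const Vec2* nearest = nullptr;
    double best = captureRadius;
    for (const auto& c : centres) {
        double d = std::hypot(c.x - cursor.x, c.y - cursor.y);
        if (d < best) { best = d; nearest = &c; }
    }
    if (nearest == nullptr) return {0.0, 0.0};         // no option close enough
    return { k * (nearest->x - cursor.x),
             k * (nearest->y - cursor.y) };
}
```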
5

Implementation of Floating Point CORDIC and its Application in 3D Computer Graphics

Wang, Po-Li 02 July 2002 (has links)
Computer graphics has become one of the important methods of displaying information and has been applied in many areas such as CAD, medical image processing, computer animation, multimedia and virtual reality. These popular applications rely on low-cost, real-time processing of 3D graphics, which has become available due to breakthroughs in the hardware design of 3D graphics engines. In this thesis, we implement a CORDIC-based floating-point processor that can compute a wide variety of arithmetic operations and show how it can be applied to the design of a 3D engine.
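For readers unfamiliar with CORDIC, the sketch below shows the core idea in its simplest rotation mode, computing sine and cosine with a fixed number of micro-rotations; it is a generic textbook illustration, not the floating-point processor design from the thesis.

```cpp
#include <cmath>
#include <cstdio>

// Classic CORDIC in rotation mode: rotate the vector (1/K, 0) by `angle`
// (|angle| <= ~1.74 rad) using N micro-rotations of +/- atan(2^-i).
// After N steps, c ~= cos(angle) and s ~= sin(angle).
void cordicSinCos(double angle, double& s, double& c, int N = 24) {
    // Pre-scale by 1/K, where K = prod(sqrt(1 + 2^-2i)) ~= 1.6468.
    double x = 0.6072529350088813;   // 1/K for a large number of iterations
    double y = 0.0;
    double z = angle;                // residual angle still to rotate away
    double pow2 = 1.0;               // 2^-i (a shift in fixed-point hardware)
    for (int i = 0; i < N; ++i) {
        double d = (z >= 0.0) ? 1.0 : -1.0;   // rotate toward z = 0
        double xNew = x - d * y * pow2;
        double yNew = y + d * x * pow2;
        z -= d * std::atan(pow2);             // table value atan(2^-i)
        x = xNew;
        y = yNew;
        pow2 *= 0.5;
    }
    c = x;
    s = y;
}

int main() {
    double s, c;
    cordicSinCos(0.5, s, c);
    std::printf("sin(0.5) ~= %.6f, cos(0.5) ~= %.6f\n", s, c);
    // Reference values: sin(0.5) = 0.479426, cos(0.5) = 0.877583
}
```

In hardware, the multiplications by 2^-i become arithmetic shifts and the atan values come from a small lookup table, which is what makes CORDIC attractive for a compact 3D-engine arithmetic unit.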
6

Design Tools for Sketching of Dome Productions in Virtual Reality

Kihlström, Andreas January 2018 (has links)
This report presents the problem of designers working on new productions for fulldomes. The back and forth process of moving between a work station and the fulldome is time consuming, a faster alternative would be useful. This thesis presents an option, a virtual reality application where a user can sketch the new environment directly on a virtual representation of a fulldome. The result would then be exported directly to the real fulldome to be displayed. The application is developed using Unreal Engine 4. The virtual dome is constructed using a procedurally generated mesh, with a paintable material assigned to it. All painting functionality is implemented manually, as is all other tools. The final product is fully useable, but requires additional work if it is to be used commercially. Additional features can be added, including certain features discussed that were cut due to time constraints, as well as improvements to existing features. Application stability is currently a concern that needs to be addressed, as well as optimizations to the software.
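To make the "procedurally generated mesh" step concrete, here is a minimal, engine-agnostic sketch of how a hemispherical dome's vertex grid can be generated from spherical coordinates; the ring and segment counts are arbitrary illustration values, and the thesis's Unreal Engine 4 implementation is not reproduced here.

```cpp
#include <cmath>
#include <vector>

struct Vertex {
    float x, y, z;   // position on the dome surface
    float u, v;      // texture coordinates for the paintable material
};

// Generate vertices for a hemispherical dome of the given radius.
// `rings` subdivides elevation (0 = zenith, pi/2 = horizon) and
// `segments` subdivides azimuth; a real mesh would also need an index
// buffer connecting neighbouring vertices into triangles.
std::vector<Vertex> buildDomeVertices(float radius, int rings, int segments) {
    const float kPi = 3.14159265f;
    std::vector<Vertex> verts;
    verts.reserve((rings + 1) * (segments + 1));
    for (int r = 0; r <= rings; ++r) {
        float elev = (kPi * 0.5f) * r / rings;          // 0..pi/2
        for (int s = 0; s <= segments; ++s) {
            float azim = 2.0f * kPi * s / segments;     // 0..2*pi
            Vertex vtx;
            vtx.x = radius * std::sin(elev) * std::cos(azim);
            vtx.y = radius * std::sin(elev) * std::sin(azim);
            vtx.z = radius * std::cos(elev);            // up axis
            vtx.u = static_cast<float>(s) / segments;   // simple lat/long UVs
            vtx.v = static_cast<float>(r) / rings;
            verts.push_back(vtx);
        }
    }
    return verts;
}
```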
7

Holoscopic 3D imaging and display technology : camera/processing/display

Swash, Mohammad Rafiq January 2013 (has links)
Holoscopic 3D imaging, or “integral imaging”, was first proposed by Lippmann in 1908. It has become an attractive technique for creating full colour 3D scenes that exist in space. It employs a single camera aperture for recording the spatial information of a real scene and uses a regularly spaced microlens array to simulate the principle of the fly’s eye technique, which creates a physical duplicate of the light field (a “true 3D-imaging technique”). While stereoscopic and multiview 3D imaging systems, which simulate the human eye technique, are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, which includes the holoscopic 3D camera, processing and display. A smart microlens array architecture is proposed that doubles the spatial resolution of the holoscopic 3D camera horizontally by trading horizontal and vertical resolutions. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that simplify the rendering complexity and facilitate holoscopic 3D content generation. A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that offers interactive holoscopic 3D video content search-ability and browse-ability. Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM enhances the 3D pixel density from 44 3D-PPI to 176 3D-PPI horizontally and achieves a spatial resolution of 1365 × 384 3D-pixels, whereas the traditional spatial resolution is 341 × 1536 3D-pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB-colour-channel elemental images.
8

Implementace algoritmu Seamless Patches for GPU-Based Terrain Rendering / Seamless Patches for GPU-Based Terrain Rendering Algorithm Implementation

Jozefov, David January 2011 (has links)
This master's thesis deals with terrain rendering using a modern algorithm for adaptive level of detail. It describes the two currently most used graphics application programming interfaces and the high-level libraries that use them, and summarizes the principles and features of several level-of-detail algorithms for terrain rendering. It then describes in more detail the implementation of the Seamless Patches for GPU-Based Terrain Rendering algorithm.
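As background for the level-of-detail idea mentioned in this abstract (and not as a reproduction of the seamless-patches algorithm itself), a common starting point is to pick each terrain patch's detail level from its distance to the camera, dropping one level with every doubling of distance; the thresholds below are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Choose a discrete level of detail for a terrain patch from its distance
// to the camera. Level 0 is the finest mesh; each successive level is
// intended to be rendered with roughly half the triangle density.
int patchLodLevel(float distanceToCamera, float baseDistance, int maxLevel) {
    if (distanceToCamera <= baseDistance) return 0;
    // Every doubling of distance beyond baseDistance drops one level.
    int level = static_cast<int>(std::log2(distanceToCamera / baseDistance)) + 1;
    return std::clamp(level, 0, maxLevel);
}

// Example: with baseDistance = 100 units and maxLevel = 5, a patch 150
// units away gets level 1 and a patch 900 units away gets level 4.
```

What a seamless-patches scheme adds on top of such a per-patch rule is the stitching of neighbouring patches that end up at different levels, so that no cracks appear along their shared borders.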
9

Synthèse géométrique temps réel / Real-time geometry synthesis

Holländer, Matthias 07 March 2013 (has links)
Real-time digital geometry is an emerging research field in computer graphics. To generate high-definition, photorealistic images, many applications require methods that are often financially prohibitive and relatively slow. Among these applications are architectural pre-visualization, the production of animated films, and the creation of advertisements or special effects for so-called realistic films. In such cases it is often necessary to use many computers together, each equipped with several graphics processing units (GPUs). However, certain so-called real-time applications cannot accommodate such techniques, because they need to generate more than 30 images per second to offer comfortable use of, and interaction with, rich and realistic 3D virtual worlds. The main idea of this thesis is to use geometry synthesis, digital geometry and geometric analysis to address classical problems in computer graphics, such as the generation of subdivision surfaces, global illumination and anti-aliasing, in real-time interactive contexts. We present new algorithms adapted to current hardware architectures to achieve this goal. / Real-time geometry synthesis is an emerging topic in computer graphics. Today's interactive 3D applications have to face a variety of challenges to fulfill the consumer's request for more realism and high quality images. Often, visual effects and quality known from offline-rendered feature films or special effects in movie productions are the ultimate goal but hard to achieve in real time. This thesis offers real-time solutions by exploiting the Graphics Processing Unit (GPU) and efficient geometry processing. In particular, a variety of topics related to classical fields in computer graphics such as subdivision surfaces, global illumination and anti-aliasing are discussed and new approaches and techniques are presented.
10

Enhancing Autodesk Maya's rendering capabilities: Development and integration of a real-time render plug-in incorporating the extended feature of Toon-Shading

Karlsson, Zannie, Yan, Liye January 2023 (has links)
Background- Autodesk Maya is, by virtue of its long existence, one of the most established 3D-modeling software packages. It enables users to create meshes and can handle a majority of the processes associated with graphic models, animation, and rendering. There are arguably various third-party plug-ins that can be used to enhance the efficiency of Maya; Maya’s own built-in rendering functions, especially its real-time rendering engine, feel less efficient than other available real-time rendering options, which additionally often provide different rendering techniques that can be used to give a desired style to the modeled scene.  Objectives- Maya’s built-in rendering engines themselves do not offer much in terms of non-realistic rendering techniques; therefore, rendering with, for example, Toon-shading requires more work and effort. The objective is to implement a prototype plug-in that can do real-time rendering of a realistic as well as a non-photorealistic rendering technique inside Autodesk Maya 2023. Its future aim is to address the inefficient and time-consuming task of viewing the results of light adjustments and setting the scene up for stylized renders in Maya.  Methods- Through the method of implementation, a basic plug-in for Autodesk Maya was constructed in Visual Studio using C++ and the DirectX 11 library. It employs a Qt window to render the Maya scene in real time and, additionally, has the function of Toon-shading. The prototype plug-in is then put through a simple test using manual assessment. The prototype’s visual rendered output, rendering times, processing usage, and memory usage are presented and compared to the results from Maya 2023’s built-in rendering options when rendering a constructed test-scene, to find out where the plug-in requires further adjustments to its implementation. Results- The results show that a real-time plug-in with the additional function of Toon-shading was implemented using the defined method of implementation. From the later test, the prototype’s rendered results are presented and compared to the results of Autodesk Maya 2023’s built-in rendering options when rendering the constructed test-scene. Conclusion- The prototype, by collecting information from the Maya scene and running the same data through the DirectX pipeline, allows different rendering styles to be developed and displayed through the user-friendly graphical user interface developed with the Qt library. With the press of a button, different implemented rendering styles, such as Toon-shading, can be applied to the prototype’s window display of the Maya scene. Its real-time rendering allows the user to see the graphical attributes applied to the scene without time delay, which makes the job of finding the right angle for the intended render more efficient. The intended rendered scene can then easily be saved by the press of another button. The time and workflow no longer require the 3D model to be imported into another rendering software or different materials to be applied to all parts of the different Maya 3D models when trying to achieve a non-photorealistic rendering style. The implemented prototype is very basic, and more implementation is required before the prototype can be used as an efficient rendering alternative for stylized rendering in Maya.
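As a generic illustration of the Toon-shading technique named in this abstract (not the plug-in's actual C++ or shader code), the sketch below quantizes the usual Lambertian N·L term into a small number of flat bands, which is what gives toon-shaded surfaces their stepped, cartoon-like look; the band count and colours are arbitrary.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Toon (cel) shading: compute the Lambertian term N.L, clamp it to [0, 1],
// then snap it to one of `bands` discrete intensity levels instead of the
// smooth gradient produced by ordinary diffuse shading.
Vec3 toonShade(Vec3 normal, Vec3 lightDir, Vec3 baseColour, int bands = 4) {
    Vec3 n = normalize(normal);
    Vec3 l = normalize(lightDir);
    float diffuse = std::max(0.0f, dot(n, l));
    // With 4 bands the lit intensity snaps to 0.25, 0.5, 0.75 or 1
    // (and stays 0 on fully unlit surfaces).
    float stepped = std::ceil(diffuse * bands) / static_cast<float>(bands);
    return { baseColour.x * stepped,
             baseColour.y * stepped,
             baseColour.z * stepped };
}
```

In a DirectX 11 renderer this quantization would typically live in a pixel shader rather than in CPU code, but the stepping logic is the same.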
