1 |
Implementation of SceneServer : a 3D software assisting developers of computer vision algorithms. Bennet, Fredrik; Fenelius, Stefan. January 2003.
The purpose of this thesis is to develop a piece of software (SceneServer) that can generate data such as images and vertex lists from computer models. These models are placed in a virtual environment, where they can be controlled either from a graphical user interface (GUI) or from a MATLAB client. Data can be retrieved and processed in MATLAB. By creating a connection between MATLAB and a 3D environment, computer vision algorithms can be designed and tested swiftly, giving the developer a powerful platform. SceneServer allows the user to manipulate, in detail, the models and scenes to be rendered.

MATLAB communicates with the SceneServer application through a Java library, which is connected to an interface in SceneServer. The graphics are visualised using Open Scene Graph (OSG), which in turn uses OpenGL. OSG is an open-source, cross-platform scene graph library for visualisation of real-time graphics. OpenGL is a software interface for creating advanced computer graphics in 2D and 3D.
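To make the client-side idea concrete, the Python sketch below mimics the round trip the abstract describes: a client places and moves models, then retrieves rendered images and vertex lists for a vision algorithm. The MockSceneServer class and its method names are invented for illustration; they are not the actual SceneServer or Java-bridge API.

# A minimal, self-contained sketch of the client-facing idea. All names are
# hypothetical; a real client would talk to SceneServer via the Java library.
import numpy as np

class MockSceneServer:
    """Stands in for the real SceneServer reached through the Java bridge."""
    def __init__(self, width=640, height=480):
        self.width, self.height = width, height
        self.objects = {}                      # name -> (vertices, position)

    def add_model(self, name, vertices):
        self.objects[name] = (np.asarray(vertices, dtype=float), np.zeros(3))

    def move_model(self, name, position):
        verts, _ = self.objects[name]
        self.objects[name] = (verts, np.asarray(position, dtype=float))

    def get_vertex_list(self, name):
        verts, pos = self.objects[name]
        return verts + pos                     # vertices in world coordinates

    def render(self):
        # A real server would rasterise the scene with OSG/OpenGL; here we
        # only return an empty image buffer of the right shape.
        return np.zeros((self.height, self.width, 3), dtype=np.uint8)

server = MockSceneServer()
server.add_model("cube", [[0, 0, 0], [1, 0, 0], [0, 1, 0]])
server.move_model("cube", [2.0, 0.0, 0.0])
image = server.render()                        # input for a vision algorithm
verts = server.get_vertex_list("cube")         # ground truth for validation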
|
2 |
Haptic interaction with rigid body objects in a simulated environment. Engström, Per. January 2006.
The purpose of this report is to describe the creation and use of a tool kit that allows the haptic Application Programming Interface (API) H3D from SenseGraphics to be used in conjunction with an advanced physics simulator from Meqon. Both haptic applications and physics engines have developed rapidly over the last couple of years, but they are rarely used together. With such a connection it becomes possible to interact with complex environments in a new way, and a variety of haptic applications can be produced.

The physics engine from Meqon has gained recognition for its ability to produce realistic results, due among other things to its efficient implementation of collision detection, friction models and collision handling. H3D is a completely open-source API based on standards such as OpenGL and X3D. H3D consists of a database containing nodes, an XML parser to extract a scene graph from the database, and functionality to produce a graphic and haptic interface.

The tool kit produced in this thesis is an extension to H3D. A fundamental function of the tool kit is to communicate with the Meqon system while remaining part of the H3D structure. The Meqon system has a modular structure where each module has its own capabilities. Only the rigid body module is utilised by the tool kit, which is, however, the most important module. It is possible to define global settings of the engine and the rigid body module, add rigid bodies with several elements, and insert constraints on the motion of the rigid bodies into the engine. All of these operations are done from the X3D file format that H3D uses, thus keeping all functionality of the H3D system available.
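The sketch below illustrates the bridging idea in miniature: rigid bodies and a constraint are declared in an X3D-style scene fragment and loaded into a stub physics engine. The node names, fields and StubPhysicsEngine class are hypothetical and do not reflect the tool kit's actual X3D vocabulary or the Meqon API.

# Rigid bodies and constraints declared in the scene file are handed to the
# physics engine at load time. All element and attribute names are invented.
import xml.etree.ElementTree as ET

scene_x3d = """
<Scene>
  <RigidBody name="crate" mass="2.5" position="0 1 0"/>
  <RigidBody name="floor" mass="0"   position="0 0 0"/>
  <HingeConstraint bodyA="crate" bodyB="floor" axis="0 0 1"/>
</Scene>
"""

class StubPhysicsEngine:
    """Stands in for the Meqon rigid body module."""
    def __init__(self):
        self.bodies, self.constraints = {}, []
    def add_body(self, name, mass, position):
        self.bodies[name] = {"mass": mass, "position": position}
    def add_constraint(self, kind, body_a, body_b, axis):
        self.constraints.append((kind, body_a, body_b, axis))

engine = StubPhysicsEngine()
for node in ET.fromstring(scene_x3d):
    if node.tag == "RigidBody":
        engine.add_body(node.get("name"),
                        float(node.get("mass")),
                        [float(v) for v in node.get("position").split()])
    elif node.tag.endswith("Constraint"):
        engine.add_constraint(node.tag, node.get("bodyA"), node.get("bodyB"),
                              [float(v) for v in node.get("axis").split()])

print(engine.bodies, engine.constraints)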
|
3 |
COLLADA Audio : A Formal Representation of Sound in Virtual Cities by a Scene Description Language / Le son dans COLLADA : une représentation formelle du son dans les villes virtuelles avec des langages de description de scène. Chan, Shih-Han. 20 December 2012.
Standardized file formats have been devised over many years to write, read, and exchange 3D scene descriptions. These descriptions are mainly aimed at visual content, whereas the options offered for the audio composition of virtual scenes are either poor or missing altogether. We therefore propose to include a rich sound description in COLLADA, a standard format for exchanging digital assets. Most scene description languages that include a sound description factor out the elements common to the graphical and auditory information; both aspects are, for example, described in the same coordinate system. However, as soon as a dynamic description or external data are required, all the glue must be written programmatically. In this thesis, we address this problem and propose to put more creative power in the hands of sound designers, even when the scene is dynamic or based on procedural synthesizers. The solution is based on the COLLADA schema, to which we add sound support, scripting capabilities and external extensions. The use of the augmented COLLADA language is illustrated through the creation of a dynamic urban soundscape.
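As a rough illustration of how sound data could ride alongside the visual scene, the sketch below parses a COLLADA-like fragment whose extra/technique block carries a sound source. COLLADA does provide extra elements for tool-specific data, but the sound_source element and its attributes are invented here and are not the schema proposed in the thesis.

# Parse a hypothetical sound annotation embedded in a COLLADA-style node.
import xml.etree.ElementTree as ET

fragment = """
<node id="fountain">
  <translate>10 0 3</translate>
  <extra>
    <technique profile="AUDIO">
      <sound_source clip="water_loop.ogg" gain="0.8" loop="true"/>
    </technique>
  </extra>
</node>
"""

node = ET.fromstring(fragment)
position = [float(v) for v in node.findtext("translate").split()]
for src in node.iter("sound_source"):
    # In a real renderer, the audio engine would spatialise the clip at the
    # node's position using these parameters.
    print("play", src.get("clip"), "at", position,
          "gain", float(src.get("gain")), "loop", src.get("loop") == "true")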
|
4 |
Uma contribuição à automatização da atividade de teste para sistemas de realidade virtual / A contribution to the automation of testing activity for virtual reality systems. Souza, Alinne Cristinne Corrêa. 6 June 2017.
Software testing is considered an important activity for revealing faults. Despite this, it has been little explored within the scope of Virtual Reality (VR) applications. Among the existing gaps, the definition and automation of software testing criteria for this domain were identified, since these systems have characteristics of their own that require the definition or adaptation of testing techniques, making applications in this domain highly complex systems. Therefore, the Virtual Reality-Requirements Specification and Testing (VR-ReST) approach is presented to perform functional testing of VR applications using Scene Graph (SG) concepts, together with a support tool called Virtual Requirements Specification and Testing (ViReST) that automates it. The approach is composed of three modules: (i) specification of the requirements by means of a model called Virtual Requirements Specification (ViReS); (ii) mapping of the requirements through a semi-formal language called Behavior Language Requirement Specification (BeLaRS) to ensure a standardized specification; and (iii) automatic generation of test requirements and test data. A case study was conducted to evaluate the compliance and usability of BeLaRS in assisting the requirements specification of a VR application. In addition, an experiment was carried out to evaluate the effectiveness of the VR-ReST approach using the ViReST tool. Using mutation testing in this experiment, the VR-ReST approach achieved a mean mutation score 15.49% higher than random testing. The results showed that the approach, as well as its tooling support, can assist the designer during the requirements specification activity and the tester in generating tests for VR applications.
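A minimal sketch of the test-generation step is given below: a requirement expressed as constraints over scene-interaction parameters is expanded into concrete test cases, including boundary values. The requirement format and generator are illustrative only and do not follow the BeLaRS notation or the ViReST implementation.

# Turn a parameterised requirement into concrete test data by combining
# boundary and random interior values of each parameter's domain.
import random

requirement = {            # e.g. "the user grabs an object within reach"
    "object": ["cube", "sphere"],
    "distance_m": (0.2, 1.5),          # allowed interaction range
    "hand": ["left", "right"],
}

def generate_tests(req, n_random=2, seed=0):
    rng = random.Random(seed)
    lo, hi = req["distance_m"]
    distances = [lo, hi] + [rng.uniform(lo, hi) for _ in range(n_random)]
    return [{"object": obj, "distance_m": round(d, 3), "hand": hand}
            for obj in req["object"]
            for hand in req["hand"]
            for d in distances]

for case in generate_tests(requirement):
    print(case)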
|
5 |
Extension of the Rule-Based Programming Language XL by Concepts for Multi-Scaled Modelling and Level-of-Detail Visualization. Ong, Yongzhi. 27 April 2015.
No description available.
|
6 |
Applicability of Detection Transformers in Resource-Constrained Environments : Investigating Detection Transformer Performance Under Computational Limitations and Scarcity of Annotated Data. Senel, Altan. January 2023.
Object detection is a fundamental task in computer vision, with significant applications in various domains. However, the reliance on large-scale annotated data and the demand for computational resources pose challenges to practical implementation. This thesis aims to address these complexities by exploring self-supervised training approaches for the detection transformer (DETR) family of object detectors. The project investigates the necessity of training the backbone under a semi-supervised setting and explores the benefits of initializing scene graph generation architectures with pretrained DETReg and DETR models for faster training convergence and reduced computational resource requirements. The significance of this research lies in the potential to mitigate the dependence on annotated data and make deep learning techniques more accessible to researchers and practitioners. By overcoming the limitations of data and computational resources, this thesis contributes to the accessibility of DETR and encourages a more sustainable and inclusive approach to deep learning research.
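The sketch below shows one plausible form of the initialization strategy mentioned above: a pretrained DETR model (loaded via the Hugging Face transformers package, which needs the timm backbone and network access for the checkpoint) is frozen and used as the object-level encoder of a scene graph generation model, with only a small relation head trained on top. The SceneGraphHead module, its dimensions and the choice of checkpoint are assumptions for illustration; a DETReg checkpoint could be substituted, and this is not the exact setup evaluated in the thesis.

# Reuse a pretrained DETR as a frozen backbone and train only a relation head.
import torch
import torch.nn as nn
from transformers import DetrModel   # downloads weights on first use

class SceneGraphHead(nn.Module):
    def __init__(self, hidden_dim=256, num_predicates=51):
        super().__init__()
        self.detector = DetrModel.from_pretrained("facebook/detr-resnet-50")
        for p in self.detector.parameters():    # keep pretrained weights fixed
            p.requires_grad = False
        # score a predicate for every ordered pair of object queries
        self.relation_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_predicates),
        )

    def forward(self, pixel_values):
        queries = self.detector(pixel_values=pixel_values).last_hidden_state
        b, n, d = queries.shape                 # (batch, 100, 256) for DETR
        subj = queries.unsqueeze(2).expand(b, n, n, d)
        obj = queries.unsqueeze(1).expand(b, n, n, d)
        return self.relation_head(torch.cat([subj, obj], dim=-1))

model = SceneGraphHead()
scores = model(torch.randn(1, 3, 640, 640))     # (1, 100, 100, num_predicates)
print(scores.shape)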
|
7 |
Entwurf und Implementierung eines computergraphischen Systems zur Integration komplexer, echtzeitfähiger 3D-Renderingverfahren / Design and implementation of a graphics system to integrate complex, real-time capable 3D rendering algorithms. Kirsch, Florian. January 2005.
This thesis is about real-time rendering algorithms that can render 3D geometry with quality and design features beyond standard display. Examples include algorithms for rendering shadows, reflections, or transparency. Integrating these algorithms into 3D applications using today's rendering libraries for real-time computer graphics is exceedingly difficult: on the one hand, the rendering algorithms are technically and algorithmically complicated in their own right; on the other hand, combining several algorithms causes resource conflicts and side effects that are very difficult to handle. Scene graph libraries, which are intended to provide a software layer that abstracts from the graphics hardware, currently offer no mechanisms for using these rendering algorithms either.

The objective of this thesis is to design and implement a software architecture for a scene graph library that models real-time rendering algorithms as software components, allowing these algorithms to be used effectively for 3D application development within the scene graph library. An application developer using the scene graph library controls these components with elements in a scene description that describe the effect of a rendering algorithm on some geometry in the scene graph, but that contain no hints about the actual implementation of the rendering algorithm. This allows rendering algorithms to be deployed in 3D applications by application developers who do not have detailed knowledge about them, so that the development effort is drastically reduced.

In particular, the thesis focuses on the feasibility of combining several rendering algorithms within a scene at the same time. This requires classifying rendering algorithms into different categories, each of which is evaluated using a different approach. In this way, components for different rendering algorithms can collaborate and adjust their usage of common graphics resources.

The possibility of combining different rendering algorithms can be limited in several ways: the graphical result of the combination can be undefined, or fundamental technical restrictions can make it impossible to use two rendering algorithms at the same time. The software architecture described in this work cannot remove these limitations, but it allows many rendering algorithms to be combined that, until now, could not be combined due to the high complexity of the required implementation. The ability to collaborate, however, depends on the kind of rendering algorithm: for instance, algorithms for rendering transparent geometry can be combined with other algorithms only after a complete redesign of the algorithm. Therefore, components in the scene graph library for displaying transparency can be combined with components for other rendering algorithms only in a limited way.

The system developed in this work integrates and combines algorithms for bump mapping, several variants of shadow and reflection algorithms, and image-based CSG rendering. Hence, major rendering algorithms are available for the first time in a scene graph library as components at a high level of abstraction. Despite the additional indirections and abstraction layers required, the system, in principle, allows the rendering algorithms to be used, individually and in combination, in real time.
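A conceptual sketch of the component idea follows: effect nodes in the scene description state what should be rendered for which geometry, while technique components declare the resources they need so the system can flag conflicts when several algorithms are combined. All class, technique and resource names are invented for illustration and are not the thesis's actual architecture.

# Effect nodes describe the desired visual result; technique components decide
# how to render it and declare the GPU resources they consume.
class EffectNode:
    def __init__(self, effect, geometry):
        self.effect, self.geometry = effect, geometry

class Technique:
    def __init__(self, name, handles, needs):
        self.name, self.handles, self.needs = name, handles, set(needs)
    def render(self, node):
        print(f"{self.name}: rendering {node.geometry} with {node.effect}")

TECHNIQUES = [
    Technique("ShadowMapPass", "shadow", {"depth_texture", "extra_pass"}),
    Technique("PlanarReflectionPass", "reflection", {"stencil", "extra_pass"}),
]

def compose(scene):
    used = set()
    for node in scene:
        tech = next(t for t in TECHNIQUES if t.handles == node.effect)
        # extra render passes can be shared; other resources may conflict
        overlap = used & (tech.needs - {"extra_pass"})
        if overlap:
            print(f"warning: {tech.name} competes for {overlap}")
        used |= tech.needs
        tech.render(node)

compose([EffectNode("shadow", "teapot"), EffectNode("reflection", "floor")])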
|