51

Virtual Reality Simulation of Ships and Ship-Mounted Cranes

Daqaq, Mohammed F. 27 May 2003 (has links)
We present a virtual simulation of ships and ship-mounted cranes. The simulation is carried out in a Cave Automatic Virtual Environment (CAVE). It serves both as a platform to study the dynamics of ships and ship-mounted cranes in dynamic sea environments and as a training platform for ship-mounted crane operators. A model of the auxiliary crane ship T-ACS 4-6 was built, converted into an OpenGL C++ API, and then ported into the CAVE using DiverseGL (DGL). A six-degrees-of-freedom motion base was used to simulate the actual motion of the ship. The equations of motion of the ship are solved using the Large Amplitude Motion Program (LAMP), while the equations of motion of the crane payload are numerically integrated; the interaction between the payload and the ship is taken into account. A nonlinear delayed-position feedback control system is applied to the crane, and the resulting simulation is used to compare the controlled and uncontrolled pendulations of the cargo. Our simulator showed a great deal of realism and was used to simulate different ship-motion and cargo-transfer scenarios. This work received support from the Office of Naval Research under Grant No. N00014-99-1-0562. / Master of Science
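The pendulation control mentioned above relies on nonlinear delayed-position feedback. As an illustration only, the C++ sketch below integrates a toy planar pendulum driven by delayed-position feedback; the gain, delay, cable length, and explicit Euler integrator are assumptions for the example, not the LAMP solver or the controller parameters used in the thesis.

#include <cmath>
#include <cstdio>
#include <deque>

// Toy planar pendulum with delayed-position feedback:
//   x_dd = -(g/L) * sin(x) - k * x(t - tau)
// All parameters are illustrative only.
int main() {
    const double g = 9.81, L = 10.0;     // cable length [m] (assumed)
    const double k = 0.08, tau = 1.0;    // feedback gain and delay [s] (assumed)
    const double dt = 0.001;
    const std::size_t delaySteps = static_cast<std::size_t>(tau / dt);

    double x = 0.3, v = 0.0;                    // initial swing angle [rad], rate
    std::deque<double> history(delaySteps, x);  // buffer of past positions

    for (double t = 0.0; t < 60.0; t += dt) {
        const double xDelayed = history.front();
        const double a = -(g / L) * std::sin(x) - k * xDelayed;  // angular accel.
        v += a * dt;                            // explicit Euler step
        x += v * dt;
        history.pop_front();
        history.push_back(x);
        if (std::fmod(t, 5.0) < dt)             // print roughly every 5 s
            std::printf("t=%5.1f  swing angle=%+.4f rad\n", t, x);
    }
    return 0;
}

With the feedback term removed (k = 0) the same loop reproduces the uncontrolled pendulation, which is the kind of controlled/uncontrolled comparison the abstract describes.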
52

The Simulation System for Propagation of Fire and Smoke

Shulga, Dmitry N 10 May 2003 (has links)
This work presents a solution for a real-time fire suppression control system. It also serves as a support tool for creating virtual ship models and testing them against a range of representative fire scenarios. Model testing includes generating predictions faster than real time using the simulation network model developed by Hughes Associates, Inc., visualizing those predictions, and interactively modifying the model settings through the user interface. In the example, the ship geometry represents the ex-USS Shadwell, test area 688, imitating a submarine. Applying the designed visualization techniques to the example model showed that the system can process, store, and render data much faster than real time (on average, 40 times faster).
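The faster-than-real-time figure is simply simulated time divided by wall-clock time. A minimal C++ sketch of such a measurement, where stepScenario is a hypothetical stand-in for the actual network model, could look like this:

#include <chrono>
#include <cstdio>

// Hypothetical placeholder for advancing the fire/smoke network model by dt seconds.
void stepScenario(double dt) { (void)dt; /* model update would go here */ }

int main() {
    const double dt = 0.5;               // simulated step [s] (assumed)
    const double horizon = 600.0;        // predict 10 minutes ahead (assumed)
    const auto start = std::chrono::steady_clock::now();
    for (double t = 0.0; t < horizon; t += dt)
        stepScenario(dt);
    const std::chrono::duration<double> wall =
        std::chrono::steady_clock::now() - start;
    // Speed-up factor: simulated seconds per wall-clock second.
    std::printf("speed-up: %.1fx\n", horizon / wall.count());
    return 0;
}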
53

A system to demonstrate applications of OpenGL using Visual C++

Jiao, Juming 01 July 2000 (has links)
No description available.
54

Real-time segmentation and gesture recognition with cameras and graphics acceleration

Dantas, Daniel Oliveira 15 March 2010 (has links)
Our aim in this work is to recognize gestures in real time using only cameras, without markers, special clothes, or any other kind of sensor. The capture environment is simple to set up, requiring just two cameras and a computer. The background must be static and must contrast with the user. The absence of markers or special clothes makes locating the user's limbs harder. The motivation of this thesis is to create a virtual reality environment for goalkeeper training that makes it possible to correct errors in movement, positioning, and choice of defense method, but the technique can be applied to any activity that involves gestures or body movements. Gesture recognition starts with detecting the image region where the user is located. Within that region, we locate the most salient regions as candidates for body extremities, that is, hands, feet, and head. Each detected extremity receives a label indicating the body part it is likely to represent, and a vector with the coordinates of the extremities is built. To classify the user's pose, this vector is compared against stored key poses and the best match is selected. The final step is temporal classification, that is, recognition of the gesture itself. The developed technique is robust, working well even when the system is trained with one user and applied to data from another.
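The pose-classification step amounts to nearest-neighbour matching of the extremity-coordinate vector against stored key poses. The C++ sketch below illustrates that idea under assumed conventions (five extremities in 2-D, made-up key poses); it is not the classifier or the data from the thesis. The gesture would then be recognized from the resulting sequence of pose labels over time.

#include <array>
#include <cstdio>
#include <vector>

// An extremity vector: 2-D coordinates of head, left/right hand, left/right foot.
using PoseVector = std::array<double, 10>;

// Squared Euclidean distance between two extremity vectors.
double distance2(const PoseVector& a, const PoseVector& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
    return d;
}

// Return the index of the key pose closest to the observed vector.
std::size_t classifyPose(const PoseVector& observed,
                         const std::vector<PoseVector>& keyPoses) {
    std::size_t best = 0;
    double bestDist = distance2(observed, keyPoses[0]);
    for (std::size_t i = 1; i < keyPoses.size(); ++i) {
        const double d = distance2(observed, keyPoses[i]);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

int main() {
    // Illustrative key poses in normalized image coordinates (made up for the example).
    std::vector<PoseVector> keyPoses = {
        {0.5, 0.1, 0.2, 0.5, 0.8, 0.5, 0.4, 0.9, 0.6, 0.9},  // arms out
        {0.5, 0.1, 0.3, 0.3, 0.7, 0.3, 0.4, 0.9, 0.6, 0.9},  // arms up
    };
    PoseVector observed = {0.5, 0.12, 0.28, 0.32, 0.72, 0.31, 0.41, 0.9, 0.6, 0.9};
    std::printf("best matching key pose: %zu\n", classifyPose(observed, keyPoses));
    return 0;
}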
56

Graphics Application Development on the iPhone and iPad Platform

Fiala, Petr January 2012 (has links)
The project deals with the creation of graphics applications for the iOS system and describes the basics of OpenGL ES 2.0, the Xcode development environment, the Cocoa Touch framework, and the Objective-C language. It focuses on describing the creation of an OpenGL game in the "line drawing" genre.
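A core step in a "line drawing" game is turning accumulated touch points into vertices that OpenGL ES 2.0 can draw as a line strip. The C++ sketch below shows only that CPU-side conversion under assumed screen dimensions; shader and context setup, and the actual glBufferData/glDrawArrays calls, are omitted.

#include <cstdio>
#include <vector>

// A touch sample in screen (pixel) coordinates.
struct TouchPoint { float x, y; };

// Convert the touch points of one stroke into clip-space vertices (x, y in [-1, 1])
// as expected by an OpenGL ES 2.0 vertex shader.  The resulting array would then
// be uploaded with glBufferData and drawn as GL_LINE_STRIP.
std::vector<float> strokeToClipSpace(const std::vector<TouchPoint>& stroke,
                                     float screenW, float screenH) {
    std::vector<float> vertices;
    vertices.reserve(stroke.size() * 2);
    for (const TouchPoint& p : stroke) {
        vertices.push_back(2.0f * p.x / screenW - 1.0f);   // x: pixels -> clip space
        vertices.push_back(1.0f - 2.0f * p.y / screenH);   // y: flip and convert
    }
    return vertices;
}

int main() {
    // Illustrative touch positions on a 640x960 screen (values are made up).
    std::vector<TouchPoint> stroke = {{100, 200}, {150, 240}, {220, 260}};
    std::vector<float> v = strokeToClipSpace(stroke, 640.0f, 960.0f);
    for (std::size_t i = 0; i < v.size(); i += 2)
        std::printf("vertex %zu: (%.3f, %.3f)\n", i / 2, v[i], v[i + 1]);
    return 0;
}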
57

Informational AR-Overlay Development for Remote Air-Traffic Control

Söderlund, Jonathan January 2023 (has links)
Air and ground traffic control at any airport or airfield is a vital task, necessary to ensure the safety and reliability of the services the field provides and the efficiency of its infrastructure. It has traditionally been done from manned towers with a full view of the field, from which operators guide traffic into safe takeoffs and landings. Such towers are, however, costly to build, maintain, and staff. Remotely operated towers have been in development for some years, meant to reduce costs and to provide controllers to any field using the technology whenever the need arises, even at remote locations with seasonal traffic. Such technology requires cameras, geographic positioning, and information interfaces to work together and present all relevant information in an effective, intuitive way. This report covers the design choices and technologies used to develop an augmented reality interface that anchors rendered elements to objects in the video based on their GPS coordinates. Its objective is to showcase a product concept that could realistically be developed further into a commercially viable system. The application uses a set of timestamped GPS positions of vehicles moving on an airfield together with recorded video of the same area. Camera calibration is used to determine the camera's position and orientation, after which GPS positions are transformed into screen positions so that graphical elements can be attached to the GPS coordinates, in this case the vehicles in operation. The project uses OpenGL and freely available libraries for video playback, image decoding, and coordinate projection within the application.
The result showcases a viable concept of what future products and their user interfaces might look like, since the project relies on well-known calibration methods and a graphics library that is widely used within the industry and in multiple applications and game engines.
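The core of such an overlay is recovering the camera pose from known calibration points and then projecting GPS-derived positions into the video frame. The following C++ sketch illustrates that pipeline with OpenCV's solvePnP and projectPoints; OpenCV, the flat-earth GPS conversion, and all numeric values are assumptions for the example, not the libraries or data used in the project.

#include <cmath>
#include <cstdio>
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Convert GPS (degrees) to a local flat-earth east/north frame around a reference
// point, using an equirectangular approximation; adequate over an airfield.
cv::Point3f gpsToLocal(double lat, double lon, double refLat, double refLon) {
    const double R = 6378137.0;  // WGS-84 semi-major axis [m]
    const double dLat = (lat - refLat) * CV_PI / 180.0;
    const double dLon = (lon - refLon) * CV_PI / 180.0;
    const double east = R * dLon * std::cos(refLat * CV_PI / 180.0);
    const double north = R * dLat;
    return cv::Point3f(static_cast<float>(east), static_cast<float>(north), 0.0f);
}

int main() {
    // Illustrative intrinsics and calibration data -- not values from the thesis.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1200, 0, 960, 0, 1200, 540, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    const double refLat = 59.65, refLon = 17.92;  // hypothetical reference point
    // Known ground calibration points (local metres) and where they appear in the image.
    std::vector<cv::Point3f> worldPts = {{0, 0, 0}, {50, 0, 0}, {50, 80, 0}, {0, 80, 0}};
    std::vector<cv::Point2f> imagePts = {{812, 690}, {1180, 705}, {1154, 430}, {830, 420}};

    // Recover the camera pose from the correspondences.
    cv::Mat rvec, tvec;
    cv::solvePnP(worldPts, imagePts, K, dist, rvec, tvec);

    // Project a vehicle's GPS fix into the video frame to anchor its AR label.
    std::vector<cv::Point3f> vehicle = {gpsToLocal(59.6504, 17.9206, refLat, refLon)};
    std::vector<cv::Point2f> projected;
    cv::projectPoints(vehicle, rvec, tvec, K, dist, projected);
    std::printf("label anchor at pixel (%.1f, %.1f)\n", projected[0].x, projected[0].y);
    return 0;
}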
58

CET Viewer: Visualization and interaction with 3D environments on a multi-touch screen

Niskala, Victor January 2023 (has links)
This work has explored the possibility of exporting a drawing made in the desktop software CET Designer and importing it into an iPad application, where the user can interact with the model in 3D through the device's multi-touch interface. The export was made to the obj and mtl formats using CET Designer's own programming language, CM. The iPad application was built by further developing an existing application for obj import and visualization. The application uses OpenGL ES to visualize the 3D model, and a set of finger gestures is used to manipulate the scene. The result is a prototype that received a very positive outcome in a usability test. To further improve the result, the application could use direct manipulation for navigation to give a better user experience, and the export and import could be optimized to achieve lower loading times and lower resource usage on the iPad.
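Importing the exported drawing starts with reading the obj geometry. The C++ sketch below is a minimal reader for an assumed subset of the format (vertex positions and triangulated faces); it is not the application's import code, and materials from the accompanying mtl file are ignored here.

#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Minimal reader for a subset of Wavefront OBJ: vertex positions ("v") and
// triangulated faces ("f") with 1-based indices.  Normals, texture coordinates
// and materials are skipped.
bool loadObj(const std::string& path,
             std::vector<Vec3>& vertices, std::vector<unsigned>& indices) {
    std::ifstream in(path);
    if (!in) return false;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            Vec3 v{};
            ss >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        } else if (tag == "f") {
            std::string corner;
            for (int i = 0; i < 3 && (ss >> corner); ++i) {
                // "f 1/2/3" style: the number before the first '/' is the position index.
                indices.push_back(static_cast<unsigned>(std::stoul(corner)) - 1);
            }
        }
    }
    return true;
}

int main(int argc, char** argv) {
    std::vector<Vec3> vertices;
    std::vector<unsigned> indices;
    if (argc > 1 && loadObj(argv[1], vertices, indices))
        std::printf("%zu vertices, %zu triangles\n", vertices.size(), indices.size() / 3);
    return 0;
}

The vertex and index arrays would then be uploaded to OpenGL ES buffers for rendering on the iPad.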
59

Rendering of geodata with OpenGL

Ingelborn, Marcus January 2020 (has links)
This study investigated whether or not it is worthwhile to implement hardware-accelerated rendering of geographic data using OpenGL. In this case, that meant generating map images with transient objects positioned and drawn on them. The objects were constantly changing, and one frame could not be assumed to look the same as the next. To answer the question, a test environment at the company Saab and the open-source software Geotools were used. An action research study was carried out in which a new rendering module for Geotools was implemented. The new module and the existing Geotools rendering module were tested and their rendering times measured. The measurements were then analyzed and compared using statistical methods. With the existing rendering module, rendering one image took on average between 481 and 495 ms with 99.9% probability. With the new rendering module, rendering took on average between 145 and 150 ms with the same probability. Within a 99.9% confidence interval, the average rendering time decreased by between 333 and 347 ms with the newly developed hardware-accelerated module.
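The comparison rests on confidence intervals for mean rendering times. As a sketch of how such intervals can be computed, the C++ example below uses a normal approximation with z = 3.291 for a two-sided 99.9% interval; the timing samples are made up and are not the measurements from the study.

#include <cmath>
#include <cstdio>
#include <vector>

// Mean and a 99.9% confidence interval for the mean (normal approximation,
// z = 3.291), as could be used to compare render times of two modules.
void reportInterval(const std::vector<double>& samplesMs, const char* label) {
    const double n = static_cast<double>(samplesMs.size());
    double mean = 0.0;
    for (double s : samplesMs) mean += s;
    mean /= n;
    double var = 0.0;
    for (double s : samplesMs) var += (s - mean) * (s - mean);
    var /= (n - 1.0);                                 // sample variance
    const double half = 3.291 * std::sqrt(var / n);   // z * standard error
    std::printf("%s: %.1f ms (99.9%% CI: %.1f - %.1f ms)\n",
                label, mean, mean - half, mean + half);
}

int main() {
    // Illustrative timings only -- not the measurements from the study.
    std::vector<double> oldModule = {490, 485, 492, 478, 495, 488, 483, 491};
    std::vector<double> newModule = {147, 149, 146, 151, 148, 145, 150, 147};
    reportInterval(oldModule, "previous renderer");
    reportInterval(newModule, "OpenGL renderer");
    return 0;
}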
60

A Visualization Tool for Drill Rig Simulators used in Software Development

Larsson, Mikael January 2010 (has links)
Boomer is a machine developed and produced by Atlas Copco Rock Drills AB that is used for underground mining and tunneling. It is a blast-hole drilling rig equipped with drills attached to arms, called booms, that the rig carries. The machine is controlled and monitored by Atlas Copco's Rig Control System (RCS), which consists of a number of intelligent units connected in a CAN network. When developing software for the RCS, a simulator is used that makes it possible to run the software on an ordinary desktop PC. The problem is that there is no intuitive way to see how the booms are oriented while they are being positioned. It is therefore desirable to have a 3D visualization of the rig, with focus on the booms, that can be used alongside the simulator to get immediate feedback about the movements of the booms. This report describes the process of developing an application that handles communication with the simulator and the 3D visualization.
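Drawing a boom in 3D comes down to turning the joint values reported by the simulator into positions for the visualization. The C++ sketch below shows toy forward kinematics for a single boom with assumed joint names and dimensions; the real rig geometry and the RCS/CAN communication are not modeled here.

#include <cmath>
#include <cstdio>

// Toy forward kinematics for one boom: a swing (yaw) joint, a lift (pitch)
// joint and a telescopic extension.  Joint names and dimensions are made up
// for illustration.
struct BoomState {
    double swingRad;   // rotation around the vertical axis
    double liftRad;    // rotation around the horizontal axis
    double lengthM;    // boom length including extension
};

struct Vec3 { double x, y, z; };

// Position of the boom tip relative to its mounting point on the rig.
Vec3 boomTip(const BoomState& s) {
    const double horizontal = s.lengthM * std::cos(s.liftRad);
    return {horizontal * std::cos(s.swingRad),   // forward
            horizontal * std::sin(s.swingRad),   // sideways
            s.lengthM * std::sin(s.liftRad)};    // up
}

int main() {
    BoomState state{0.2, 0.1, 6.5};  // example joint values, as if read from the simulator
    Vec3 tip = boomTip(state);
    std::printf("boom tip at (%.2f, %.2f, %.2f) m\n", tip.x, tip.y, tip.z);
    return 0;
}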
