201 |
Generalized Statistical Tolerance Analysis and Three Dimensional Model for Manufacturing Tolerance Transfer in Manufacturing Process Planning January 2011 (has links)
abstract: Manufacturing tolerance charts are commonly used today for manufacturing tolerance transfer, but they are limited to one dimension. Some research has addressed three dimensional geometric tolerances, but it remains too theoretical and is not yet ready for operator-level use. In this research, a new three dimensional model for tolerance transfer in manufacturing process planning is presented that is user friendly in the sense that it is built upon Coordinate Measuring Machine (CMM) readings, which are readily available in any well-equipped manufacturing facility. The model can handle datum reference changes between non-orthogonal datums (squeezed datums), non-linearly oriented datums (twisted datums), etc. A graph-theoretic approach based upon ACIS, C++ and MFC is laid out to facilitate its implementation and the automation of the model. A new approach to determining dimensions and tolerances for the manufacturing process plan is also presented. Secondly, a new statistical model for statistical tolerance analysis, based upon the joint probability distribution of trivariate normally distributed variables, is presented. 4-D probability maps have been developed in which the probability value of a point in space is represented by the size of the marker and the associated color. Points inside the part map represent the pass percentage for manufactured parts. The effect of refinement with form and orientation tolerance is highlighted by calculating the change in pass percentage relative to the pass percentage for size tolerance only. Delaunay triangulation and ray tracing algorithms are used to automate the identification of points inside and outside the part map. Proof-of-concept software has been implemented to demonstrate the model and to determine pass percentages for various cases. The model is further extended to assemblies by employing convolution algorithms on two trivariate statistical distributions to arrive at the statistical distribution of the assembly. A map generated by applying Minkowski sum techniques to the individual part maps is superimposed on the probability point cloud resulting from the convolution. Delaunay triangulation and ray tracing algorithms are then employed to determine the assembleability percentages for the assembly. / Dissertation/Thesis / Ph.D. Mechanical Engineering 2011
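The pass-percentage idea can be illustrated with a minimal Monte Carlo sketch: sample a trivariate normal distribution of three toleranced dimensions and count the samples that fall inside a part map, using a Delaunay triangulation for the point-in-map test. The dimensions, covariance and box-shaped map below are invented for illustration; the thesis additionally uses ray tracing and form/orientation refinements not shown here.

```python
# Illustrative sketch (not the author's code): estimating a pass percentage by
# sampling a trivariate normal distribution of three toleranced dimensions and
# testing which samples fall inside a convex "part map" via Delaunay triangulation.
# The mean vector, covariance matrix and map vertices below are made-up examples.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# Hypothetical nominal dimensions and process covariance (mm, mm^2)
mean = np.array([10.0, 25.0, 40.0])
cov = np.diag([0.02, 0.03, 0.01]) ** 2

# Hypothetical part map: the corner points of the acceptable tolerance zone
lo, hi = mean - 0.05, mean + 0.05
corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                              for y in (lo[1], hi[1])
                              for z in (lo[2], hi[2])])
part_map = Delaunay(corners)

# Monte Carlo estimate of the pass percentage
samples = rng.multivariate_normal(mean, cov, size=100_000)
inside = part_map.find_simplex(samples) >= 0   # -1 means outside the map
print(f"Estimated pass percentage: {100.0 * inside.mean():.2f}%")
```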
|
202 |
Projeto ótico de linha de luz de raios-X duros para cristalografia de proteínas / Optical design of a hard X-ray beamline to protein crystallography Grizolli, Walan Cesar 16 August 2018 (has links)
Advisor: Antônio Rubens Britto de Castro / Master's dissertation - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Previous issue date: 2010 / Resumo: Many areas of modern science have benefited from the use of synchrotron radiation. The techniques available in laboratories of this kind span research in basic sciences such as Physics, Chemistry and Biology, as well as fields such as materials engineering and pharmacology. In particular, the use of the hard X-ray spectrum (8-20 keV) of current synchrotron sources is crucial for structural techniques such as diffraction and crystallography. These techniques are available to the Brazilian scientific community at the Laboratório Nacional de Luz Síncrotron (LNLS), whose beamlines have been open to external users since 1997. The LNLS facilities have been continuously improved, and these upgrades have enabled the study of new scientific cases. In this work we study the properties of the LNLS synchrotron radiation source and carry out an optical design for a beamline dedicated to protein crystallography. Using computer simulations, we propose options for improving the photon flux of the existing protein crystallography beamlines. Our studies indicate the feasibility of building a side branch of the MX2 beamline, using the same wiggler source already installed, with a flux comparable to that of the central branch. / Abstract: Distinct research fields in modern science have taken advantage of synchrotron radiation. The techniques that are available in such laboratories have a very broad scope, ranging from basic sciences such as Physics, Chemistry and Biology to applied fields such as engineering and pharmacology. In particular, the use of the hard X-ray spectrum (8-20 keV) from modern synchrotron sources is crucial for structural techniques such as diffraction and crystallography. These techniques have been available to the Brazilian scientific community since 1997, when the Brazilian Synchrotron Light Laboratory (LNLS) facilities were opened to external users. The LNLS beamlines have constantly evolved, allowing the users to perform novel experiments as a consequence of instrumental improvements. In this work we study the properties of the LNLS sources and propose solutions for the optics of a beamline dedicated to protein crystallography. By using computer simulations we propose options to enhance the photon flux in the pre-existing protein crystallography beamlines. Our results point to the feasibility of a lateral beamline using the MX2 wiggler source, with a flux similar to that of the central beamline. / Master's / Physics / Master in Physics
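A minimal sketch of the kind of geometric ray-tracing estimate used when sizing beamline optics is shown below: sample rays from a source with a Gaussian size and divergence and count how many pass an aperture downstream. The source parameters, distance and slit below are invented numbers, not the LNLS or MX2 values, and the sketch stands in for the full optical simulation rather than reproducing it.

```python
# Sketch (not the thesis code) of a Monte Carlo geometric-acceptance estimate:
# sample rays from a Gaussian source and count how many pass a rectangular slit.
# Source size, divergence, distance and aperture below are invented numbers.
import numpy as np

rng = np.random.default_rng(1)
n_rays = 200_000

# Hypothetical source: RMS size (mm) and RMS divergence (mrad), horizontal/vertical
sigma_xy = np.array([0.30, 0.03])        # mm
sigma_div = np.array([1.0, 0.1])         # mrad

# Sample ray origins and angles, then propagate to a hypothetical slit 10 m away
origins = rng.normal(0.0, sigma_xy, size=(n_rays, 2))            # mm
angles = rng.normal(0.0, sigma_div, size=(n_rays, 2)) * 1e-3     # rad
distance_mm = 10_000.0
positions = origins + distance_mm * angles                       # mm at the slit

# Hypothetical aperture half-widths (mm); accepted rays define the relative flux
half_aperture = np.array([2.0, 1.0])
accepted = np.all(np.abs(positions) <= half_aperture, axis=1)
print(f"Geometric acceptance: {100.0 * accepted.mean():.1f}% of source rays")
```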
|
203 |
[en] EFFICIENT SOLUTION OF AN INTEGRAL EQUATION AND ITS APPLICATIONS TO THE PREDICTION OF THE COVERAGE OF CELLULAR SYSTEMS / [pt] SOLUÇÃO EFICIENTE DE UMA EQUAÇÃO INTEGRAL E SUA APLICAÇÃO NA PREVISÃO DA COBERTURA DE SISTEMAS CELULARES FELIX KORBLA AKORLI 17 April 2006 (has links)
[pt] In this work, an algorithm was developed for the solution of an integral equation that can be applied efficiently to the prediction of the coverage of cellular systems. This algorithm is compared with the originally proposed algorithm and with the same application based on the ray-tracing technique. The ray-tracing technique uses the image method to trace rays belonging to thirty different classes, characterized by their multiplicity and by the types and orders of their interactions with the terrain. The reflection and diffraction coefficients are computed with the classical Fresnel equations and with the equations of the Uniform Theory of Diffraction, respectively, modified to account for terrain roughness. Both algorithms were applied to compute the field intensity along a terrain profile and in a mountainous region in the south of the State of Minas Gerais. The results are compared with each other and, where available, with measurements. This comparison considers both the results obtained and the processing times required in each case. / [en] In this work an algorithm for the solution of an integral equation, which can be efficiently applied to estimate the coverage area of a cellular system, has been developed. This algorithm is compared with the originally proposed algorithm and also with a ray-tracing technique for the same application. The ray-tracing technique uses the image method to trace thirty different classes of rays, which are characterized by the multiplicity of the paths and by the types and orders of their interactions with the ground. The coefficients of reflection and diffraction are calculated using the classical Fresnel equations and the Uniform Theory of Diffraction equations, respectively, modified to include the roughness of the ground. Both algorithms have been used to calculate the field intensity along a terrain profile and in a mountainous region in the south of the State of Minas Gerais. The results are compared with each other and, where available, with measurements. The comparison takes into consideration the quality of the results as well as the processing time of both algorithms.
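A rough-surface-modified Fresnel reflection coefficient of the kind described above can be sketched as follows. The permittivity, conductivity, frequency and roughness values are illustrative only, and the roughness correction shown is the common Ament/Boithias-style exponential factor, which may differ from the exact formulation used in the thesis.

```python
# Sketch (illustrative, not the thesis implementation): Fresnel reflection
# coefficient for horizontal polarization at grazing incidence on ground,
# attenuated by an exponential surface-roughness factor.
import numpy as np

def fresnel_horizontal(grazing_rad, eps_r, sigma_s_per_m, freq_hz):
    """Classical Fresnel reflection coefficient (horizontal polarization)."""
    eps0 = 8.854e-12
    eps_c = eps_r - 1j * sigma_s_per_m / (2 * np.pi * freq_hz * eps0)  # complex permittivity
    root = np.sqrt(eps_c - np.cos(grazing_rad) ** 2)
    return (np.sin(grazing_rad) - root) / (np.sin(grazing_rad) + root)

def roughness_factor(grazing_rad, sigma_h_m, freq_hz):
    """Exponential attenuation of the specular component for a rough surface."""
    lam = 3e8 / freq_hz
    g = 2 * np.pi * sigma_h_m * np.sin(grazing_rad) / lam
    return np.exp(-2 * g ** 2)

# Example: 900 MHz, average ground (eps_r = 15, sigma = 0.005 S/m),
# 1 m RMS terrain roughness, 2 degree grazing angle -- all invented values.
psi = np.radians(2.0)
R = fresnel_horizontal(psi, 15.0, 0.005, 900e6)
rho = roughness_factor(psi, 1.0, 900e6)
print(f"|R| smooth = {abs(R):.3f}, |R| rough = {abs(rho * R):.3f}")
```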
|
204 |
Ray Tracing on GPU : Performance comparison between the CPU and the Compute Shader with DirectX 11 Persson, Gustav, Udd, Jonathan January 2010 (has links)
The game industry has always looked for rendering techniques that make games as good looking and realistic as possible. The common approach is to use triangles built up from vertices and to apply many different techniques to make the result look as good as possible. When triangles are used to draw objects, there are always edges, and those edges often make the objects look less realistic than desired. To reduce these visible edges, the number of triangles for an object has to be increased, but more triangles demand more processing power from the graphics card. Another approach to rendering is ray tracing, which can render an extremely photorealistic image, but at the cost of unbearably low performance if used in a real-time application. The reason ray tracing is so slow is the massive amount of calculations that need to be made. In DirectX 11 a few new shaders were announced, and one of them was the compute shader. The compute shader allows you to perform calculations on the graphics card that are not bound to the rendering pipeline, and to use the hundreds of cores that the graphics card has; it is therefore well suited for a ray tracing algorithm. One application is used to test whether the hypothesis is correct. A flag defines whether the application runs on the CPU or the GPU, and the same algorithm is used in both versions. Three tests were done on each processing unit to confirm the hypothesis. Three more tests were done on the GPU to see how its performance scaled with the number of rendered objects. The tests consistently showed that the compute shader performs considerably better than the CPU when running our ray tracing algorithm.
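The per-ray independence that makes ray tracing a good fit for a compute shader can be illustrated with a small, hypothetical primary-ray/sphere intersection written in vectorized NumPy, standing in for the GPU kernel; this is not the authors' DirectX 11 code, and the camera and sphere values are made up.

```python
# Sketch of the data-parallel core of a ray tracer: every primary ray is tested
# against a sphere independently, which is why the work maps naturally onto the
# many cores addressed by a compute shader. Scene values here are made up.
import numpy as np

width, height = 640, 480

# Build one normalized direction per pixel (pinhole camera at the origin).
xs = np.linspace(-1.0, 1.0, width) * (width / height)
ys = np.linspace(1.0, -1.0, height)
px, py = np.meshgrid(xs, ys)
dirs = np.stack([px, py, np.full_like(px, -1.5)], axis=-1)
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

# Hypothetical sphere: center and radius.
center = np.array([0.0, 0.0, -5.0])
radius = 1.0

# Ray/sphere intersection (quadratic discriminant test), done for all rays at once.
oc = -center                                  # ray origins are at (0, 0, 0)
b = 2.0 * np.sum(dirs * oc, axis=-1)
c = np.dot(oc, oc) - radius ** 2
hit = (b ** 2 - 4.0 * c) >= 0.0               # boolean hit mask per pixel

print(f"{hit.sum()} of {hit.size} primary rays hit the sphere")
```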
|
205 |
Modélisation dynamique des canaux MIMO pour les transports ferroviaires / Dynamic modeling of MIMO channels for railway transport Hairoud, Siham 02 July 2012 (has links)
The operation, control and signalling of modern metro systems, and in particular driverless metros, rely on two families of wireless radio communication systems: vital, low-data-rate transmissions for signalling and train control, and non-vital, very-high-data-rate transmissions for on-board video surveillance, remote diagnostics, passenger information, etc. When deploying such transmission systems, the manufacturer must therefore guarantee to the metro operator a level of performance in terms of data rate and transmission quality, expressed as information packet loss. Understanding the behaviour of the propagation channel is therefore a key element in predicting and improving the performance of the transmission systems. The objective of this thesis work is twofold and concerns: • on the one hand, the reduction of the computation time of a 3D ray-tracing channel simulator that predicts the behaviour of the propagation channel accurately but remains computationally expensive for dynamic simulations. We propose in this thesis a method based on three visibility criteria to simplify the description of the geometry of the environment without penalizing the prediction of the characteristic parameters of the propagation channel; • on the other hand, the construction of a new dynamic MIMO propagation channel model for a straight tunnel of rectangular cross-section, which will make it possible to optimize multi-antenna (MIMO) transmission systems for wireless transmission applications for metros in tunnels. This model is broadly inspired by the channel model used in the WINNER standard and is fed by results extracted from the 3D ray-tracing channel simulator. / The operation, control and signalling systems of metros, and especially modern driverless metros, are based on two families of wireless radio communication systems: vital, low-data-rate transmissions for signalling and train control, and non-vital, very-high-bandwidth transmissions for on-board video surveillance, remote diagnosis, customer information, etc. When implementing such transmission systems, the manufacturer must therefore guarantee to the metro operator a level of performance in terms of throughput and transmission quality, expressed as information packet loss. Therefore, understanding the behaviour of the propagation channel is key to predicting and improving the performance of transmission systems. The aim of this thesis is twofold and concerns: • on the one hand, the reduction of the computation time of a 3D ray-tracing simulator that accurately predicts the behaviour of the propagation channel but is computationally expensive for dynamic simulations. We propose a new method based on three visibility criteria to simplify the geometry description of the environment without degrading the prediction of the characteristic parameters of the propagation channel; • on the other hand, the construction of a new dynamic MIMO propagation channel model for a straight tunnel of rectangular cross-section, which will make it possible to optimize multi-antenna (MIMO) transmission systems for wireless applications in metro tunnels. This model draws broadly on the channel model used in the WINNER standard and is fed by the results extracted from the 3D ray-tracing channel simulator. The results obtained in this thesis are encouraging and offer many opportunities.
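How a set of ray-traced paths turns into a MIMO channel matrix can be sketched as follows. The array geometry, path gains and angles are invented, and the construction shown (a sum of paths weighted by array steering vectors) is a generic geometry-based model rather than the exact WINNER-style formulation used in the thesis.

```python
# Sketch: building a narrowband MIMO channel matrix H from a handful of ray
# paths, each described by a complex gain and its departure/arrival angles.
# Uniform linear arrays with half-wavelength spacing are assumed; all path
# parameters below are invented for illustration.
import numpy as np

def steering_vector(n_elements, angle_rad, spacing_wavelengths=0.5):
    k = np.arange(n_elements)
    return np.exp(-2j * np.pi * spacing_wavelengths * k * np.sin(angle_rad))

n_tx, n_rx = 4, 4
rng = np.random.default_rng(2)

# Hypothetical ray-tracer output: complex gain, angle of departure, angle of arrival
n_paths = 6
gains = (rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)) / np.sqrt(2 * n_paths)
aod = rng.uniform(-np.pi / 3, np.pi / 3, n_paths)
aoa = rng.uniform(-np.pi / 3, np.pi / 3, n_paths)

# H is the sum over paths of gain * a_rx(aoa) * a_tx(aod)^H
H = np.zeros((n_rx, n_tx), dtype=complex)
for g, th_t, th_r in zip(gains, aod, aoa):
    H += g * np.outer(steering_vector(n_rx, th_r), steering_vector(n_tx, th_t).conj())

# Channel capacity at a given (linear) SNR as a quick figure of merit
snr = 10.0
gram = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
capacity = float(np.log2(np.linalg.det(gram).real))
print(f"Example 4x4 capacity at SNR = 10: {capacity:.2f} bit/s/Hz")
```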
|
206 |
The effect of skin phototype on laser propagation through skin Karsten, Aletta Elizabeth 01 May 2013 (has links)
The use of lasers for diagnosis and treatment in medical and cosmetic applications is increasing worldwide. Not all of these modalities are superficial and many require laser light to penetrate some distance into the tissue or skin to reach the treatment site. Human skin is highly scattering for light in the visible and near infrared wavelength regions, with a consequent reduction of the fluence rate. Melanin, which occurs in the epidermis of the skin, acts as an absorber in these wavelength regions and further reduces the fluence rate of light that penetrates through the epidermis to a treatment site. In vivo fluence rate measurements are not viable, but validated and calibrated computer models may play a role in predicting the fluence rate reaching the treatment site. A layered planar computer model to predict laser fluence rate at some depth into skin was developed in a commercial ray-tracing environment (ASAP). The model describes the properties of various skin layers and accounts for both the absorption and scattering taking place in the skin. The model was validated with optical measurements on skin-simulating phantoms in both reflectance and transmission configurations. It was shown that a planar epidermal/dermal interface is adequate for simulation purposes. In the near infrared wavelength region (676 nm), melanin (consisting of eumelanin and pheomelanin) is the major absorber of light in the epidermis. The epidermal absorption coefficient is one of the required input parameters for the computer model. The range of absorption coefficients expected for typical South African skin phototypes (ranging from photo-sensitive light skin, phototype I on the Fitzpatrick scale, to the photo-insensitive darker skin phototype V) was not available. Non-invasive diffuse reflectance spectroscopy measurements were done on 30 volunteers to establish the expected range of absorption coefficients. In the analysis it became apparent that the contributions of the eumelanin and pheomelanin must be accounted for separately, specifically for the Asian volunteers. This is a new concept that was introduced in the diffuse reflectance probe analysis. These absorption coefficient measurements were the first to be done on the expected range of skin phototypes for the South African population. Other authors dealing with diffuse reflectance probe analysis only account for the dominant eumelanin. Both the epidermal absorption coefficient and thickness are important in the prediction of the fluence rate loss. The computer model was used to evaluate the effect of the epidermal absorption coefficient (a parameter dictated by an individual’s skin phototype) and the epidermal thickness on the fluence rate loss through the skin. The epidermal absorption is strongly wavelength dependent with the higher absorption at the shorter wavelengths. In the computer model a longer wavelength of 676 nm (typical for a photodynamic treatment (PDT) of cancer) was used. For the darker skin phototypes (V) only about 30% of the initial laser fluence rate reached a depth of 200 µm into the skin (just into the dermis). For the PDT application, results from the computer model indicated that treatment times need to be increased by as much as 50% for very dark skin phototypes when compared to that of very light phototypes. / Thesis (PhD)--University of Pretoria, 2012. / Physics / unrestricted
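As a rough first-order illustration of why the epidermal absorption coefficient and thickness matter (the thesis itself uses a full scattering ray-trace model in ASAP, not this simplification), a Beer-Lambert attenuation estimate can be sketched; the coefficient values below are invented placeholders, not the measured South African data.

```python
# Over-simplified Beer-Lambert sketch: fraction of 676 nm light remaining after
# passing through an absorbing epidermis of given thickness. Scattering, which
# dominates in real skin and is handled by the thesis's ray-trace model, is ignored.
import math

def epidermal_transmission(mu_a_per_mm, thickness_mm):
    """Fraction of light transmitted through a purely absorbing layer."""
    return math.exp(-mu_a_per_mm * thickness_mm)

# Hypothetical absorption coefficients (1/mm) for a light and a dark phototype,
# and a nominal 0.1 mm epidermis -- illustrative numbers only.
for label, mu_a in [("light phototype", 0.5), ("dark phototype", 5.0)]:
    t = epidermal_transmission(mu_a, 0.1)
    print(f"{label}: {100 * t:.0f}% of the incident fluence passes the epidermis")
```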
|
207 |
Entwicklung des Neutronentransportcodes TransRay und Untersuchungen zur zwei- und dreidimensionalen Berechnung effektiver Gruppenwirkungsquerschnitte / Development of the neutron transport code TransRay and investigations on the two- and three-dimensional calculation of effective group cross sections Beckert, C. January 2008 (has links)
As a standard, the preparation of neutron cross-section data for reactor core calculations is performed with 2D cell codes. The aim of this work was to develop a 3D cell code, to use this code to investigate 3D effects, and to assess the need for a 3D preparation of the neutron cross-section data. The method of first-collision probabilities, computed with the ray-tracing method, was chosen for the neutron transport calculation. The mathematical algorithms were implemented in the 2D/3D cell code TransRay. For the geometry part of the program, the geometry module of a Monte Carlo code was used. Because of the long computation times, the ray tracing in 3D was parallelized. The program TransRay was verified on 2D test problems. For a pressurized water reference reactor, the following 3D problems were investigated: a partially inserted control rod, and void (vacuum or steam) around a fuel rod as a model of a steam bubble. For comparison, all problems were also calculated with the codes HELIOS (2D) and MCNP (3D). The dependence of the multiplication factor and of the averaged two-group cross sections on the insertion depth of the control rod and on the height of the steam bubble was investigated. The two-group cross sections computed in 3D were compared with three common approximations: linear interpolation, interpolation with flux weighting, and homogenization. For the 3D problem of the control rod, interpolation with flux weighting proved to be a good approximation; accordingly, a 3D data preparation is not necessary in this case. For the test case of a single fuel rod surrounded by void, the three approximations for the two-group cross sections proved to be insufficient; accordingly, a 3D data preparation is necessary. The single fuel rod cell with void can be regarded as the limiting case of a reactor in which a phase interface has formed.
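A minimal sketch of the ray-tracing ingredient behind first-collision probabilities is given below: for a single ray crossing a sequence of material regions, the probability that a neutron starting along that ray has its first collision in each region follows from exponential attenuation over the traversed chord lengths. The region data and track lengths are invented; the real TransRay code integrates such contributions over many rays and angles.

```python
# Sketch: first-collision probabilities along one ray through a sequence of
# regions with total cross sections sigma_t (1/cm) and chord lengths (cm).
# The probability of colliding first in region i is the attenuation up to the
# region times (1 - exp(-sigma_t_i * chord_i)). Values are illustrative only.
import math

def first_collision_probabilities(sigmas_t, chords):
    probs = []
    survival = 1.0                       # probability of reaching the region uncollided
    for sigma, length in zip(sigmas_t, chords):
        probs.append(survival * (1.0 - math.exp(-sigma * length)))
        survival *= math.exp(-sigma * length)
    return probs, survival               # survival = probability of escaping along the ray

# Hypothetical track: moderator, cladding, fuel, cladding, moderator
sigmas = [0.35, 0.30, 0.60, 0.30, 0.35]  # 1/cm, made-up values
chords = [0.60, 0.06, 0.80, 0.06, 0.60]  # cm, made-up chord lengths
p, escape = first_collision_probabilities(sigmas, chords)
print([round(x, 3) for x in p], f"escape = {escape:.3f}")
```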
|
208 |
Entwicklung eines 3D Neutronentransportcodes auf der Basis der Ray-Tracing-Methode und Untersuchungen zur Aufbereitung effektiver Gruppenquerschnitte für heterogene LWR-Zellen / Development of a 3D neutron transport code based on the ray-tracing method and investigations on the preparation of effective group cross sections for heterogeneous LWR cells Rohde, Ulrich [project leader], Beckert, Carsten January 2006 (has links)
As a standard, the preparation of neutron cross-section data for reactor core calculations is performed with 2D cell codes. The aim of this work was to develop a 3D cell code, to use this code to investigate 3D effects, and to assess the need for a 3D preparation of the neutron cross-section data. The method of first-collision probabilities, computed with the ray-tracing method, was chosen for the neutron transport calculation. The mathematical algorithms were implemented in the 2D/3D cell code TransRay. For the geometry part of the program, the geometry module of a Monte Carlo code was used. Because of the long computation times, the ray tracing was parallelized. The program TransRay was verified on 2D test problems. For a pressurized water reference reactor, the following 3D problems were investigated: a partially inserted control rod, and void (or moderator of reduced density) around a fuel rod as a model of a steam bubble. For comparison, all problems were also calculated with the codes HELIOS (2D) and MCNP (3D). The dependence of the multiplication factor and of the averaged two-group cross sections on the insertion depth of the control rod and on the height of the steam bubble was investigated. The two-group cross sections computed in 3D were compared with three common approximations: linear interpolation, interpolation with flux weighting, and homogenization. For the 3D problem of the control rod, interpolation with flux weighting proved to be a good approximation; accordingly, a 3D data preparation is not necessary in this case. For the test case of a single fuel rod surrounded by void (or moderator of reduced density), the three approximations for the two-group cross sections proved to be insufficient; accordingly, a 3D data preparation is necessary. The single fuel rod cell with void can be regarded as the limiting case of a reactor in which a phase interface has formed.
|
209 |
Volumetric Solar Receiver for a Parabolic Dish and Micro-Gas Turbine system : Design, modelling and validation using Multi-Objective Optimization Mancini, Roberta January 2015 (has links)
Concentrated Solar Power (CSP) constitutes one suitable solution for exploiting solar resources for power generation. In this context, parabolic dish systems concentrate the solar radiation onto a point-focusing receiver for small-scale power production. Given the modularity of such systems, scale-up is a feasible option; at the same time, they offer a suitable solution for small-scale off-grid electrification of rural areas. These systems are usually paired with Stirling engines; nevertheless, coupling with micro-gas turbines presents a number of advantages related to the reliability of the system and the lower level of maintenance required. The OMSoP project, funded by the European Union, aims at the demonstration of a parabolic dish coupled with an air-driven Brayton cycle. Within the integrated system, a key role is played by the solar receiver, whose function is to absorb the concentrated solar radiation and transfer it to the heat transfer fluid. Volumetric solar receivers constitute a novel and promising solution for such applications: the use of a porous matrix for the absorption of solar radiation allows higher temperatures to be reached within a compact volume, while reducing the heat transfer losses between the fluid and the absorbing medium. The aim of the present work is to deliver a set of optimal design specifications for a volumetric solar receiver for the OMSoP project. The work is based on a Multi-Objective Optimization algorithm, with the objectives of maximizing the receiver thermal efficiency and minimizing the pressure drop. The optimization routine is coupled with a detailed analysis of the component, based on a Computational Fluid Dynamics model and a mechanical stress analysis. The boundary conditions are given by the OMSoP project in terms of dish specifications and power cycle, whilst the solar radiation boundary is modelled by means of a ray-tracing routine. The outcome of the analysis is an assessment of the impact of some key design parameters, namely the porous material properties and the receiver geometrical dimensions, on the receiver performance. The results show generally low pressure drops at the nominal air mass flow, with several design points respecting the material limitations. One design point that respects the OMSoP project requirements for the design objectives, i.e. a minimum efficiency of 70% and pressure losses below 1%, is chosen among the optimal points. The final receiver configuration achieves an efficiency of 86% with a relative pressure drop of 0.5%, and is based on a ceramic foam absorber made of silicon carbide with a porosity of 0.94. Moreover, the detailed analysis of one volumetric receiver configuration to be integrated in the OMSoP project shows promising results for experimental testing and for its actual integration in the system.
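The multi-objective trade-off described above can be illustrated with a small sketch that filters candidate receiver designs down to the Pareto front of the two objectives (maximize thermal efficiency, minimize pressure drop); the candidate values and target thresholds are placeholders in the spirit of the project requirements, not results from the thesis.

```python
# Sketch: extracting the Pareto front from candidate designs evaluated on two
# objectives -- thermal efficiency (maximize) and relative pressure drop
# (minimize). The candidate values here are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
efficiency = rng.uniform(0.60, 0.90, size=200)       # fraction
pressure_drop = rng.uniform(0.001, 0.02, size=200)   # relative pressure loss

def pareto_front(eff, dp):
    """A design is kept if no other design is strictly better in both objectives."""
    optimal = np.ones(len(eff), dtype=bool)
    for i in range(len(eff)):
        dominated_by_other = (eff > eff[i]) & (dp < dp[i])
        if dominated_by_other.any():
            optimal[i] = False
    return optimal

front = pareto_front(efficiency, pressure_drop)
feasible = front & (efficiency >= 0.70) & (pressure_drop <= 0.01)  # project-style targets
print(f"{front.sum()} Pareto-optimal designs, {feasible.sum()} meeting the targets")
```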
|
210 |
Alpha Tested Geometry in DXR : Performance Analysis of Asset Data Variations Fast, Tobias January 2020 (has links)
Background. Ray tracing can be used to achieve hyper-realistic 3D rendering, but it is a computationally heavy task. Since hardware support for real-time ray tracing was released, the game industry has been introducing this feature into games. However, even modern hardware still experiences performance issues when implementing common rendering techniques with ray tracing. One of these problematic techniques is alpha testing. Objectives. The thesis will investigate the following: 1) How the texture format of the alpha map and the number of alpha maps affect the rendering times. 2) How tessellation of the alpha tested geometry affects the performance, and whether tessellation has the potential to fully replace the alpha test from a performance perspective. Methods. A DXR 3D renderer was implemented, capable of rendering alpha tested geometry using an any-hit shader. The renderer was used to benchmark the rendering times while varying texture and geometry data. Two alpha tested tree models were tessellated to various levels and their related textures were converted into multiple formats that could be used for the test scenes. Results & Conclusions. When the texture formats BC7, R(1xfloat32), and BC4 were used for the alpha map, the rendering times decreased in all cases relative to RGBA(4xfloat32). BC4 gave the best performance gain, decreasing the rendering times by up to 17% using one alpha map per model and by up to 43% using eight alpha maps. Increasing the number of alpha maps used per model increased the rendering times by up to 52% when going from one alpha map to two, and a large increase in rendering times was observed when going from three to four alpha maps in all cases. Using alpha testing on the tessellated model versions increased the rendering times in most cases, by at most 135%; a decrease of up to 8% was, however, observed when the models were tessellated by a certain amount. Turning off alpha testing gave a significant decrease in rendering times, allowing higher tessellated versions to be rendered for all models. In one case, while increasing the number of triangles by a factor of 78, the rendering times still decreased by 30% relative to the original alpha test implementation. This suggests that pre-tessellated models could potentially be used to replace alpha tested geometry when performance is highly required. / Background. Ray tracing can be used to achieve hyper-realistic 3D rendering, but it is a very heavy computational task. Since hardware support for performing ray tracing in real time was launched, the game industry has introduced the feature into games. Despite modern hardware, performance problems are still experienced when common rendering techniques are combined with ray tracing. One of these problematic techniques is alpha testing. Objectives. This thesis will investigate the following: 1) How the texture format of the alpha map and the number of alpha maps affect the rendering times. 2) In what way tessellation of the alpha-tested geometry affects performance and whether tessellation has the potential to completely replace the alpha test from a performance perspective. Method. A DXR 3D renderer capable of rendering alpha-tested geometry using an any-hit shader will be implemented. The renderer was used to measure and compare rendering times given varying texture and geometry data. Two alpha-tested tree models were tessellated to different levels and their related textures were converted into four formats that were used in the test scenes. Results & Conclusions. When the texture formats BC7, R(1xfloat32) and BC4 were used for the alpha map, all showed a reduced rendering time relative to RGBA(4xfloat32). BC4 gave the best performance gain and reduced the rendering time by up to 17% with one alpha map per model and up to 43% with eight alpha maps. The rendering times increased by up to 52% when the number of alpha maps per model went from one to two. A large increase in rendering time was observed when the number of alpha maps went from three to four in all test cases. When alpha testing was used on the tessellated model versions, the rendering times increased in most cases, by at most 135%. A decrease of up to 8% was, however, observed when the models were tessellated to a certain degree. Turning off alpha testing gave a significant increase in performance, allowing higher tessellated versions to be rendered for all models. While the number of triangles increased by a factor of 78 in one of the cases, the rendering time decreased by 30%. This suggests that pre-tessellated models can potentially be used to replace alpha-tested geometry when performance is a high requirement.
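The bandwidth intuition behind the texture-format results can be illustrated with a small back-of-the-envelope sketch comparing the memory footprint per alpha map for the formats tested; the byte-per-texel figures are the standard block-compression rates, while the 2048x2048 resolution is an assumed example, not a value taken from the thesis.

```python
# Rough footprint comparison of alpha-map texture formats: less data per texel
# means less memory traffic in the any-hit shader, which is one plausible reason
# the compressed formats render faster. The resolution is an assumed example.
BYTES_PER_TEXEL = {
    "RGBA(4xfloat32)": 16.0,  # four 32-bit float channels
    "R(1xfloat32)": 4.0,      # single 32-bit float channel
    "BC7": 1.0,               # 16 bytes per 4x4 block
    "BC4": 0.5,               # 8 bytes per 4x4 block, single channel
}

width = height = 2048  # hypothetical alpha-map resolution
for fmt, bpt in BYTES_PER_TEXEL.items():
    size_mib = width * height * bpt / 2**20
    print(f"{fmt:>16}: {size_mib:8.2f} MiB per alpha map")
```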
|