161 |
Characteristics of Jovian Low-Frequency Radio Emissions during the Cassini and Voyager Flyby of Jupiter / CassiniとVoyager探査機の木星フライバイ時に観測された木星低周波電波の特性
Imai, Masafumi, 23 March 2016 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Science / 甲第19504号 / 理博第4164号 / 新制||理||1598 (University Library) / 32540 / Division of Earth and Planetary Sciences, Graduate School of Science, Kyoto University / (Examining committee) Prof. Satoshi Taguchi, Prof. Toshihiko Iyemori, Prof. Shigeo Yoden / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DGAM
|
162 |
Topographic Relief Correlated Monte Carlo 3D Radiative Transfer Simulator for Forests / 森林における地形効果を考慮したモンテカルロ3次元放射伝達シミュレータ
Sheng-Ye, Jin, 23 March 2017 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Global Environmental Studies / 甲第20538号 / 地環博第159号 / 新制||地環||32 (University Library) / Division of Global Environmental Studies, Graduate School of Global Environmental Studies, Kyoto University / (Examining committee) Assoc. Prof. Junichi Susaki, Assoc. Prof. Izuru Saizen, Prof. Shozo Shibata / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Global Environmental Studies / Kyoto University / DFAM
|
163 |
Accelerated Ray Traced Animations Exploiting Temporal Coherence
Baines, Darwin Tarry, 08 July 2005 (has links) (PDF)
Ray tracing is a well-known technique for producing realistic graphics. However, the time necessary to generate images is unacceptably long, and when producing the many frames necessary for an animation that cost is multiplied. Many methods have been proposed to reduce the calculations necessary in ray tracing. Much of this effort has aimed at reducing either the number of rays cast or the number of intersection calculations; both approaches exploit spatial coherence. These acceleration techniques are expanded here to exploit not only spatial coherence but also temporal coherence, reducing calculations by treating the animation information as a whole rather than isolating calculations to each individual frame. Techniques for exploiting temporal coherence are explored along with associated temporal bounding methods. By first ray tracing a temporally expanded scene, we are able to avoid traversal calculations in associated frames where object intersection is limited. This reduces the rendering times of those frames.
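The temporal bounding idea described above can be sketched in a few lines: build one bounding volume that encloses an object's motion across all frames, test a ray against it once, and only fall back to per-frame checks on a hit. This is an illustrative sketch under assumed representations (axis-aligned boxes, min/max corner tuples), not the thesis's implementation.

```python
import math

def aabb_union(a, b):
    """Union of two axis-aligned boxes, each given as (min_corner, max_corner)."""
    return (tuple(min(x, y) for x, y in zip(a[0], b[0])),
            tuple(max(x, y) for x, y in zip(a[1], b[1])))

def ray_hits_aabb(origin, direction, box):
    """Slab test: does the ray (origin + t*direction, t >= 0) intersect the box?"""
    tmin, tmax = -math.inf, math.inf
    for o, d, lo, hi in zip(origin, direction, box[0], box[1]):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False          # parallel to this slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmax >= max(tmin, 0.0)

def frames_needing_traversal(origin, direction, per_frame_boxes):
    """Test one temporal bound first; only on a hit check individual frames."""
    temporal_bound = per_frame_boxes[0]
    for box in per_frame_boxes[1:]:
        temporal_bound = aabb_union(temporal_bound, box)
    if not ray_hits_aabb(origin, direction, temporal_bound):
        return []                     # a single test culls the ray for every frame
    return [i for i, box in enumerate(per_frame_boxes)
            if ray_hits_aabb(origin, direction, box)]
```

When the ray misses the temporally expanded bound, per-frame traversal is skipped entirely, which is the source of the savings the abstract describes.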
|
164 |
Point-Based Color Bleeding with Volumes
Gibson, Christopher J, 01 June 2011 (has links) (PDF)
The interaction of light in our world is immensely complex, but with modern computers and advanced rendering algorithms, we are beginning to reach the point where photo-realistic renders are truly difficult to separate from real photographs. Achieving realistic or believable global illumination in scenes with participating media is exponentially more expensive compared to our traditional polygonal methods. Light interacts with the particles of a volume, creating complex radiance patterns.
In this thesis, we introduce an extension to the commonly used point-based color bleeding (PCB) technique, implementing volume scatter contributions. With the addition of this PCB algorithm extension, we are able to render fast, believable in- and out-scattering while building on existing data structures and paradigms.
The proposed method achieves results comparable to that of existing Monte Carlo integration methods, obtaining render speeds between 10 and 36 times faster while keeping memory overhead under 5%.
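As a rough illustration of the in- and out-scattering terms the extension accounts for, the sketch below ray-marches a homogeneous slab, accumulating single-scattered light attenuated by extinction. It is a simplified stand-in for the surfel-based gathering the thesis actually uses; the coefficients, the isotropic phase function, and the shortcut of attenuating the light over the viewing distance are all assumptions made for brevity.

```python
import math

def march_single_scatter(sigma_a, sigma_s, depth, light_radiance, steps=100):
    """Ray-march a homogeneous slab, accumulating single-scattered light.

    Simplifying assumption: the light is attenuated over the same distance
    the view ray has travelled into the volume."""
    sigma_t = sigma_a + sigma_s            # extinction = absorption + out-scattering
    dt = depth / steps
    transmittance, radiance = 1.0, 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                 # midpoint of this march segment
        # light scattered toward the viewer at this sample (isotropic phase, 1/4pi)
        in_scatter = light_radiance * math.exp(-sigma_t * t) * sigma_s / (4.0 * math.pi)
        radiance += transmittance * in_scatter * dt
        transmittance *= math.exp(-sigma_t * dt)   # out-scattering + absorption
    return radiance, transmittance
```

Out-scattering shows up as the decaying transmittance; in-scattering as the accumulated radiance, which is what the PCB extension adds to the gather step.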
|
165 |
GPU-Accelerated Point-Based Color Bleeding
Schmitt, Ryan Daniel, 01 June 2012 (has links) (PDF)
Traditional global illumination lighting techniques like Radiosity and Monte Carlo sampling are computationally expensive. This has prompted the development of the Point-Based Color Bleeding (PBCB) algorithm by Pixar in order to approximate complex indirect illumination while meeting the demands of movie production; namely, reduced memory usage, run time independent of surface shading, and faster renders than the aforementioned lighting techniques.
The PBCB algorithm works by discretizing a scene’s directly illuminated geometry into a point cloud (surfel) representation. When computing the indirect illumination at a point, the surfels are rasterized onto cube faces surrounding that point, and the constituent pixels are combined into the final, approximate, indirect lighting value.
In this thesis we present a performance enhancement to the Point-Based Color Bleeding algorithm through hardware acceleration; our contribution incorporates GPU-accelerated rasterization into the cube-face raster phase. The goal is to leverage the powerful rasterization capabilities of modern graphics processors in order to speed up the PBCB algorithm over standard software rasterization. Additionally, we contribute a preprocess that generates triangular surfels that are suited for fast rasterization by the GPU, and show that new heterogeneous architecture chips (e.g. Sandy Bridge from Intel) simplify the code required to leverage the power of the GPU. Our algorithm reproduces the output of the traditional Monte Carlo technique with a speedup of 41.65x, and additionally achieves a 3.12x speedup over software-rasterized PBCB.
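The cube-face raster phase can be illustrated with a minimal software rasterizer: project surfels onto one face of a unit cube around the gather point, resolve occlusion with a z-buffer, and average the resulting pixels. This is a sketch of the general idea only (a single face, circular splats, flat radiance), not the thesis's GPU implementation; all names and the surfel tuple layout are assumptions.

```python
import math

def raster_surfels_one_face(surfels, res=8):
    """Rasterize surfels (position, radius, radiance) onto the +Z cube face
    around the gather point at the origin; return the mean pixel radiance."""
    zbuf = [[math.inf] * res for _ in range(res)]
    color = [[0.0] * res for _ in range(res)]
    for (x, y, z), radius, radiance in surfels:
        if z <= 0:
            continue                       # behind this face
        u, v = x / z, y / z                # perspective projection onto z = 1
        r = radius / z                     # projected splat radius
        for py in range(res):
            for px in range(res):
                # pixel center in [-1, 1] face coordinates (90-degree frustum)
                cx = (px + 0.5) / res * 2 - 1
                cy = (py + 0.5) / res * 2 - 1
                if (cx - u) ** 2 + (cy - v) ** 2 <= r * r and z < zbuf[py][px]:
                    zbuf[py][px] = z       # nearer surfel wins the depth test
                    color[py][px] = radiance
    return sum(map(sum, color)) / (res * res)
```

The GPU version replaces the two inner loops with hardware rasterization of triangular surfels, which is precisely the speedup the thesis targets.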
|
166 |
Development of Tools Needed for Radiation Analysis of a Cubesat Deployer Using Oltaris
Gonzalez-Dorbecker, Marycarmen, 01 August 2015 (has links) (PDF)
Currently, the CubeSat spacecraft is predominantly used for missions at Low-Earth Orbit (LEO). There are various limitations to expanding past that range, one of the major ones being the lack of sufficient radiation shielding on the Poly-Picosatellite Orbital Deployer (P-POD). The P-POD attaches to a launch vehicle transporting a primary spacecraft and takes the CubeSats out into their orbit. As the demand for interplanetary exploration grows, there is an equal increase in interest in sending CubeSats further out past their current regime. In a collaboration with NASA’s Jet Propulsion Laboratory (JPL), students from the Cal Poly CubeSat program worked on a preliminary design of an interplanetary CubeSat deployer, the Poly-Picosatellite Deep Space Deployer (PDSD). Radiation concerns were mitigated in a very basic manner, by simply increasing the thickness of the deployer wall panels. While this provided a preliminary idea for improved radiation shielding, full analysis was not conducted to determine what changes to the current P-POD are necessary to make it sufficiently radiation hardened for interplanetary travel.
This thesis develops a tool that can be used to further analyze the radiation environment concerns that come with interplanetary travel. This tool is the connection between any geometry modeled in CAD software and the radiation tool OLTARIS (On-Line Tool for the Assessment of Radiation In Space). It reads the CAD file into MATLAB, at which point it can perform ray-tracing analysis to get a thickness distribution at any user-defined target points. This thickness distribution file is uploaded to OLTARIS for radiation analysis of the user geometry.
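A minimal sketch of the ray-cast thickness step might look as follows: from a target point, cast rays in a set of directions and sum, for each ray, the path length through every solid times its density. Spheres stand in for the CAD geometry purely for illustration; the real tool works on imported CAD meshes, and all names here are assumptions.

```python
import math

def chord_length(origin, direction, center, radius):
    """Length of the ray's chord through a sphere (0 if it misses).
    The direction vector is assumed to be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * c for d, c in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - c
    if disc <= 0:
        return 0.0
    t0, t1 = -b - math.sqrt(disc), -b + math.sqrt(disc)
    return max(t1, 0.0) - max(t0, 0.0)     # clip the chord to t >= 0

def thickness_distribution(target, spheres, directions):
    """Areal thickness (path length * density) seen along each direction
    from the target point; spheres are (center, radius, density) tuples."""
    return [sum(chord_length(target, d, c, r) * rho for c, r, rho in spheres)
            for d in directions]
```

The resulting per-direction thickness list corresponds to the distribution file that would then be uploaded to OLTARIS.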
To demonstrate the effectiveness of the tool, the radiation environment that a CubeSat sees inside the current P-POD is characterized to create a radiation map that CubeSat developers can use to better design their satellites. Cases were run to determine the radiation in a low altitude orbit compared to a high altitude orbit, as well as a Europa mission. For the LEO trajectory, doses were seen at levels of 10² mGy, while the GEO trajectory showed results an order of magnitude lower. Electronics inside the P-POD can survive these doses with the current design, confirming that Earth orbits are safe for CubeSats. The Europa-Jovian Tour mission showed results on a higher scale of 10⁷ mGy, which is too high for electronics in the P-POD. Additional cases at double the original thickness and 100 times the original thickness resulted in dose levels of about 10⁷ and 10⁴ mGy, respectively. This gives a scale to work from for a “worst case” scenario and provides a path forward for modifying the shielding on deployers for interplanetary missions. Further analysis is required, since increasing the existing P-POD thickness by 100 times is unfeasible from both size and mass perspectives. Ultimately, the end result is that the current P-POD standard does not work far outside of Earth orbits. Radiation-based changes in the design, materials, and overall shielding of the P-POD need to be made before CubeSats can feasibly perform interplanetary missions.
|
167 |
Accelerating Ray Casting Using Culling Techniques to Optimize K-D Trees
Nguyen, Anh Viet, 01 August 2012 (has links) (PDF)
Ray tracing is a graphical technique that provides realistic simulation of light sources and complex lighting effects within three-dimensional scenes, but it is a time-consuming process that requires a tremendous amount of compute power. In order to reduce the number of calculations required to render an image, many different algorithms and techniques have been developed. One such development is the use of tree-like data structures to partition space for quick traversal when finding intersection points between rays and primitives. Even with this technique, ray-primitive intersection for large datasets is still the bottleneck for ray tracing.
This thesis proposes the use of a specific spatial data structure, the K-D tree, for faster ray casting of primary rays, and enables a ray-triangle culling technique that complements view frustum and backface culling. The proposed method traverses the entire tree structure, marking nodes as inactive if they lie outside of the view frustum and skipping triangles that face away from the viewer. In addition, a ray frustum is calculated to test the spatial coherency of the primary rays. The combination of these optimizations reduces the average number of intersection tests per ray by 98% to 99%, depending on the data size.
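The culling pre-pass described above can be sketched as a tree walk that tests each node's bounds against the frustum planes and marks whole subtrees inactive, plus a dot-product backface test. The plane representation (normal, offset; inside where n·p + d >= 0), the dictionary node layout, and the helper names are illustrative assumptions, not the thesis's code.

```python
def aabb_outside_plane(box_min, box_max, plane):
    """True if the box lies entirely on the negative side of plane (n, d)."""
    n, d = plane
    # pick the box corner farthest along the plane normal (the "p-vertex")
    p = [mx if nk >= 0 else mn for nk, mn, mx in zip(n, box_min, box_max)]
    return sum(nk * pk for nk, pk in zip(n, p)) + d < 0

def mark_active(node, frustum_planes):
    """Pre-pass over a k-d tree: mark whole subtrees inactive when their
    bounds fall outside any frustum plane, so traversal can skip them."""
    node['active'] = not any(aabb_outside_plane(node['min'], node['max'], pl)
                             for pl in frustum_planes)
    if node['active']:
        for child in node.get('children', []):
            mark_active(child, frustum_planes)
    else:
        for child in node.get('children', []):
            _mark_inactive(child)      # children of a culled node are never visited

def _mark_inactive(node):
    node['active'] = False
    for child in node.get('children', []):
        _mark_inactive(child)

def is_backface(normal, view_dir):
    """A triangle facing away from the viewer can be skipped for primary rays."""
    return sum(n * v for n, v in zip(normal, view_dir)) >= 0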
|
168 |
Exploring Material Representations for Sparse Voxel DAGs
Pineda, Steven, 01 June 2021 (has links) (PDF)
Ray tracing is a popular technique used in movies and video games to create compelling visuals. Ray traced computer images are becoming increasingly realistic and almost indistinguishable from real-world images. Due to the complexity of scenes and the desire for high resolution images, ray tracing can become very expensive in terms of computation and memory. To address these concerns, researchers have examined data structures to efficiently store geometric and material information. Sparse voxel octrees (SVOs) and directed acyclic graphs (DAGs) have proven to be successful geometric data structures for reducing memory requirements. Moxel DAGs connect material properties to these geometric data structures, but experience limitations related to memory, build times, and render times. This thesis examines the efficacy of connecting an alternative material data structure to existing geometric representations.
The contributions of this thesis include the creation of a new material representation using hashing to accompany DAGs, a method to calculate surface normals using neighboring voxel data, and a demonstration and validation that DAGs can be used to super sample based on proximity. This thesis also validates the visual quality achieved by these methods via a user survey comparing different output images. In comparison to the Moxel DAG implementation, this work increases render time, but reduces build times and memory, and improves the visual quality of output images.
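Two of the contributions lend themselves to small sketches: estimating a voxel's surface normal from its occupied neighbors (here, as the negated occupancy gradient over the 26-neighborhood), and a hash-based material palette that stores each distinct material once. Both are simplified illustrations under assumed data layouts, not the thesis's implementation.

```python
import math

def normal_from_neighbors(occupied, voxel):
    """Estimate a surface normal as the negated occupancy gradient:
    the normal points away from the solid neighbors of the voxel."""
    x, y, z = voxel
    g = [0.0, 0.0, 0.0]
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                if (x + dx, y + dy, z + dz) in occupied:
                    g[0] -= dx; g[1] -= dy; g[2] -= dz
    length = math.sqrt(sum(c * c for c in g))
    return tuple(c / length for c in g) if length else (0.0, 0.0, 1.0)

def intern_materials(voxel_materials):
    """Deduplicate per-voxel materials through a hash table: each voxel
    stores only a small index into a shared material palette."""
    palette, index_of, voxel_ids = [], {}, {}
    for voxel, mat in voxel_materials.items():
        if mat not in index_of:
            index_of[mat] = len(palette)
            palette.append(mat)
        voxel_ids[voxel] = index_of[mat]
    return palette, voxel_ids
```

The palette indices are far smaller than full material records, which is the memory win a hashed material representation is after.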
|
169 |
Optical and Thermal Analysis of a Heteroconical Tubular Cavity Solar Receiver
Maharaj, Neelesh, 25 October 2022 (has links) (PDF)
The principal objective of this study is to develop, investigate and optimise the Heteroconical Tubular Cavity receiver for a parabolic trough reflector. This study presents a three-stage process for developing, investigating and optimising the Heteroconical receiver. The first stage focused on the optical performance of the Heteroconical receiver for different geometric configurations. The effect of cavity geometry on the heat flux distribution on the receiver absorbers, as well as on the optical performance of the Heteroconical cavity, was investigated. The cavity geometry was varied by varying the cone angle and cavity aperture width of the receiver. This investigation led to the identification of optical characteristics of the Heteroconical receiver as well as an optically optimised geometric configuration for the cavity shape of the receiver. The second stage focused on the thermal and thermodynamic performance of the Heteroconical receiver for different geometric configurations, allowing for investigation into the effect of cavity shape and concentration ratio on the thermal performance of the receiver. The identification of certain thermal characteristics of the receiver further optimised the shape of the receiver cavity for thermal performance. The third stage of development and optimisation focused on the absorber tubes of the Heteroconical receiver. This enabled further investigation into the effect of tube diameter on the total performance of the receiver and led to an optimal inner tube diameter for the receiver under given operating conditions.
In this work, the thermodynamic performance, conjugate heat transfer and fluid flow of the Heteroconical receiver were analysed by solving the governing equations set out in this work, the Reynolds-Averaged Navier-Stokes (RANS) equations together with the energy equation, using the commercially available CFD code ANSYS FLUENT®. The optical model of the receiver, which modelled the optical performance and produced the non-uniform actual heat flux distribution on the absorbers, was numerically modelled by solving the rendering equation using the Monte Carlo ray tracing method. SolTrace, a ray-tracing software package developed by the National Renewable Energy Laboratory (NREL) and commonly used to analyse CSP systems, was utilised for modelling the optical response and performance of the Heteroconical receiver. These actual non-uniform heat flux distributions were applied in the CFD code by making use of user-defined functions for the thermal model and analysis of the Heteroconical receiver. The numerical model was applied to a simple parabolic trough receiver and reflector and validated against experimental data available in the literature, and good agreement was achieved. It was found that the Heteroconical receiver was able to significantly reduce reradiation losses as well as improve the uniformity of the heat flux distribution on the absorbers. The receiver was found to produce thermal efficiencies of up to 71% and optical efficiencies of up to 80% for practically sized receivers. The optimal receiver was compared to a widely used parabolic trough receiver, a vacuum tube receiver, and was found to perform, on average, 4% more efficiently than the vacuum tube receiver across the temperature range of 50-210 ℃. In summary, it was found that the larger a Heteroconical receiver is, the higher its optical efficiency but the lower its thermal efficiency.
Hence, careful consideration needs to be taken when determining the cone angle and concentration ratio of the receiver. It was found that absorber tube diameter does not have a significant effect on the performance of the receiver, but the tubes' position within the cavity plays a vital role. The Heteroconical receiver was found to successfully reduce energy losses and to be a successful high-performance solar thermal tubular cavity receiver.
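To give a flavour of the Monte Carlo ray-tracing step behind the optical model, the sketch below estimates the intercept factor of an idealised 2D parabolic trough with Gaussian mirror slope error: the fraction of vertical sun rays whose reflections pass within the absorber radius of the focal line. The geometry and error model are textbook simplifications, not the SolTrace model used in the study, and all parameter names are assumptions.

```python
import math, random

def intercept_factor(focal_len, aperture, absorber_radius, slope_sigma,
                     n=20000, seed=1):
    """Monte Carlo intercept factor of a 2D parabolic trough mirror."""
    random.seed(seed)
    hits = 0
    for _ in range(n):
        x = random.uniform(-aperture / 2.0, aperture / 2.0)
        y = x * x / (4.0 * focal_len)                # mirror surface y = x^2 / 4f
        # perturb the local mirror slope with a Gaussian error
        m = x / (2.0 * focal_len) + random.gauss(0.0, slope_sigma)
        s2 = 1.0 + m * m
        dx, dy = -2.0 * m / s2, (1.0 - m * m) / s2   # ray (0, -1) reflected; unit length
        # perpendicular distance from the focus (0, f) to the reflected ray's line
        px, py = -x, focal_len - y
        if abs(px * dy - py * dx) <= absorber_radius:
            hits += 1
    return hits / n
```

With zero slope error every reflected ray passes through the focus exactly, so the estimator returns 1.0; increasing the error spreads the focal spot and drops the intercept factor, mirroring the trade-offs the study quantifies.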
|
170 |
Vat Photopolymerization of High-Performance Materials through Investigation of Crosslinked Network Design and Light Scattering Modeling
Feller, Keyton D., 08 June 2023 (has links)
The reliance on low-viscosity and photoactive resins limits the accessible properties for vat photopolymerization (VP) materials required for engineering applications. This has limited the adoption of VP for producing end-use parts, which typically require high MW polymers and/or more stable chemical functionality. Decoupling the viscosity and molecular weight relationship for VP resins has been completed recently for polyimides and high-performance elastomers by photocuring a scaffold around polymer precursors or polymer nanoparticles, respectively. Both of these materials are first shaped by printing a green part followed by thermal post-processing to achieve the final part properties. This dissertation focuses on improving the processability of these material systems by (i) investigating the impact of scaffold architecture and polysalt monomer composition on photocuring, thermal post-processing, and resulting thermomechanical properties and (ii) developing a Monte Carlo ray-tracing (MCRT) simulation to predict light scattering and photocuring behavior in particle-filled resins, specifically zinc oxide nanoparticles in a rigid polyester resin and styrene butadiene rubber latex resin.
The first portion of the dissertation introduces VP of a tetra-acid and half-ester-based polysalt resin derived from 4,4'-oxydiphthalic anhydride and 4,4-oxydianiline (ODPA-ODA), a fully aromatic polyimide with high glass transition temperature and thermal stability. This polyimide, and polyimides like this, find use in demanding industries such as aerospace, automotive and electronic applications. The author evaluated the hypothesis that a non-bound triethylene glycol dimethacrylate (TEGDMA) scaffold would facilitate more efficient scaffold burnout and thus achieve parts with reduced off-gassing potential at elevated temperatures.
Both resins demonstrated photocuring and were able to print solid and complex latticed parts. When thermally processed to 400 °C, only 3% of the TEGDMA scaffold remained within the final parts. The half-ester resin exhibits higher char yield, resulting from partial degradation of the polyimide backbone, potentially caused by lack of solvent retention limiting the imidization conversion. The tetra-acid exhibits a Tg of 260 °C, while the half-ester displays a higher Tg of 380 °C caused by the degradation of the polymer backbone, forming residual char, restricting chain mobility. Solid parts displayed a phase-separated morphology while the half-ester latticed parts appear solid, indicating solvent removal occurs faster in the half-ester composition, presumably due to reduced polar acid functionality. This platform and scaffold architecture enables a modular approach to produce novel and easily customizable UV-curable polyimides to easily increase the variety of polyimides and the accessible properties of printed polyimides through VP.
The second section of this dissertation describes the creation and validation of a MCRT simulation to predict light scattering and the resulting photocured shape of a ZnO-filled resin nanocomposite. Relative to prior MCRT simulations in the literature, this approach requires only simple, easily acquired inputs gathered from dynamic light scattering, refractometry, UV-vis spectroscopy, beam profilometry, and VP working curves to produce 2D exposure distributions. The concentration of 20 nm ZnO varied from 1 to 5 vol% and was exposed to a 7×7 pixel square (~250 µm) from 5 to 11 s. Compared to experimentally produced cure profiles, the MCRT simulation is shown to predict cure depth within 10% (15 µm) and cure widths within 30% (20 µm), below the controllable resolution of the printer. Despite this success, this study was limited to small particles and low loadings to avoid polycrystalline particles and maintain dispersion stability for the duration of the experiments.
The MCRT simulation is then expanded to latex-based resins, which are comprised of polymer nanoparticles that are amorphous, homogeneous, and colloidally stable. This allows for validating the MCRT with larger particles (100 nm) at higher loadings. Simulated cure profiles of styrene-butadiene rubber (SBR) loadings from 5 vol% to 25 vol% predicted cure depths within 20% (100 µm) and cure widths within 50% (100 µm) of experimental values. The error observed within the latex-based resin is significantly higher than in the ZnO resin and is potentially caused by the green part shrinking due to evaporation of the resin's water, which leads to errors when trying to experimentally measure the cure profiles.
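A stripped-down version of the photon transport inside an MCRT simulation of this kind is sketched below: free paths are sampled from the Beer-Lambert distribution, a fraction of each photon's weight is absorbed at every interaction event, and the cure depth is read off as the deepest depth bin whose accumulated dose exceeds a critical exposure. The coefficients, the isotropic scattering, and the 1D dose binning are illustrative assumptions, far simpler than the dissertation's validated simulation.

```python
import math, random

def simulate_cure_depth(mu_s, mu_a, n_photons, e_crit, depth_bins, bin_dz, seed=7):
    """Sketch of Monte Carlo photon transport in a scattering resin.

    Photons enter along +z; free paths follow the Beer-Lambert distribution;
    a fraction mu_a/mu_t of each photon's weight is absorbed per event.
    Cure depth is the deepest bin whose per-photon dose exceeds e_crit."""
    random.seed(seed)
    mu_t = mu_s + mu_a                       # extinction coefficient
    dose = [0.0] * depth_bins
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0             # depth, direction cosine, weight
        while w > 1e-4:
            step = -math.log(1.0 - random.random()) / mu_t   # sampled free path
            z += uz * step
            if z < 0.0 or z >= depth_bins * bin_dz:
                break                        # photon left the simulated region
            absorbed = w * mu_a / mu_t       # energy deposited at this event
            dose[int(z / bin_dz)] += absorbed
            w -= absorbed
            uz = random.uniform(-1.0, 1.0)   # isotropic scatter: cos(theta) uniform
    cured = [i for i, d in enumerate(dose) if d / n_photons >= e_crit]
    return (max(cured) + 1) * bin_dz if cured else 0.0
```

Raising the scattering coefficient pushes the deposited energy toward the surface, which is exactly why higher particle loadings reduce cure depth and broaden cure width.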
This dissertation demonstrates the development of novel and functional materials and the creation of process-related improvements. Specifically, this dissertation presents a materials platform for the future development of unique photocurable engineering polymers and a corresponding physics-based model to aid in processing. / Doctor of Philosophy / Vat Photopolymerization (VP) is a 3D printing process that uses ultraviolet (UV) light to selectively cure liquid photosensitive resin into a solid part in a layer-by-layer fashion. Parts produced with VP exhibit a smooth surface finish and fine features of less than 100 µm (i.e., width of human hair). Recoating the liquid resin for each layer limits VP to low-viscosity resins, thus limiting the molecular weight (and thus performance) of the accessible printed polymers. Materials that are low molecular weight are limited in achieving desirable properties, such as elongation, strength, and heat resistance. Solvent-based resins, such as polysalt and latex resins, have demonstrated the ability to decouple the viscosity and molecular weight relationship by eliminating polymer entanglements using low-molecular-weight precursors or isolating high-molecular-weight polymers into particles. This dissertation focuses on expanding and improving the printability of these methods.
The second chapter of the dissertation investigates the impact of scaffold architecture in printing polyimide polysalts to improve scaffold burnout. Polysalts are polymers that exist as dissolved salts in solution, with each monomer holding two electronic charges. When heated, the solvent evaporates and the monomers react to form a high molecular-weight polymer. While previous work featured a polysalt that was covalently bonded to the monomers, the polysalt in this work is made printable by co-dissolving a scaffold. The polysalt resins are photocured and thermally processed to polymerize and imidize into a high-molecular-weight polymer, while simultaneously pyrolyzing the scaffold. Using a co-dissolved scaffold allows the investigation of two different monomers of tetra-acid and half-ester functionality. The half-ester composition underwent degradation during heating, increasing the printed parts' glass transition or softening point. The scaffold had little impact on the polysalt polymerization or final part properties and was efficiently removed, with only 3% remaining in final parts. The composition and properties of the monomers selected played a bigger role due to partial degradation altering the properties of the final parts. Overall, this platform and scaffold architecture allows for a larger number of polyimides to be accessible and easily customizable for future VP demands.
The third chapter describes the challenges of processing photocurable resins that contain particles due to the UV light scattering in the resin vat during printing. When the light from the printer hits a particle, it is scattered in all directions causing the layer shape to be distorted from the designed shape. To overcome this, a Monte Carlo ray-tracing (MCRT) simulation was developed to mimic light rays scattering within the resin vat. The simulation was validated by comparing simulation results against experimental trials of photocuring resins containing 20 nm zinc oxide (ZnO) nanoparticles. The MCRT simulation predicted all the experimental cure depths within 10% (20 µm) and cure widths within 30% (15 µm) error.
Despite the high accuracy, this study was limited to small particles and low concentrations.
Simulating larger particles is difficult as the simulation assumes each particle to be uniform throughout its volume, which is atypical of large ceramic particles.
The fourth chapter enables high particle volume loading by using a highly stretchable styrene-butadiene rubber (SBR) latex-based resin. Latex-based resins maintain low viscosity by separating large polymer chains into nanoparticles that are noncrystalline and uniform.
When the chains are separated, they cannot interact or entangle, keeping the viscosity low even at high concentrations (>30 vol%). Like the ZnO-filled resin, the latex resin is experimentally cured and the MCRT simulation predicts the resulting cure shape. The MCRT simulation predicted cure depths within 20% (100 µm) and over-cure widths within 50% (100 µm) of experimental values. This error is substantially higher than the ZnO work and is believed to be caused by the water evaporating from the cured resin resulting in inconsistent measurements of the cured dimensions.
|