  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
371

Optical and Thermal Analysis of a Heteroconical Tubular Cavity Solar Receiver

Maharaj, Neelesh 25 October 2022 (has links) (PDF)
The principal objective of this study is to develop, investigate and optimise the Heteroconical tubular cavity receiver for a parabolic trough reflector. The study follows a three-stage development process. The first stage examined the optical performance of the Heteroconical receiver for different geometric configurations: the effect of cavity geometry, varied through the cone angle and the cavity aperture width, on the heat flux distribution over the receiver absorbers and on the optical performance of the cavity. This investigation identified the optical characteristics of the receiver and an optically optimised geometric configuration for the cavity. The second stage addressed the thermal and thermodynamic performance of the receiver for different geometric configurations, examining the effect of cavity shape and concentration ratio on thermal performance; the thermal characteristics identified here further optimised the cavity shape for thermal performance. The third stage focused on the absorber tubes of the receiver, investigating the effect of tube diameter on overall performance and yielding an optimal inner tube diameter under the given operating conditions.
In this work, the thermodynamic performance, conjugate heat transfer and fluid flow of the Heteroconical receiver were analysed by solving the governing Reynolds-Averaged Navier-Stokes (RANS) equations together with the energy equation, using the commercial CFD code ANSYS FLUENT®. The optical model of the receiver, which modelled the optical performance and produced the non-uniform heat flux distribution on the absorbers, solved the rendering equation by the Monte-Carlo ray tracing method. SolTrace, a ray-tracing software package developed by the National Renewable Energy Laboratory (NREL) and commonly used to analyse CSP systems, was utilised to model the optical response and performance of the Heteroconical receiver. The resulting non-uniform heat flux distributions were applied in the CFD code through user-defined functions for the thermal model and analysis of the receiver. The numerical model was applied to a simple parabolic trough receiver and reflector and validated against experimental data available in the literature, with good agreement. The Heteroconical receiver was found to significantly reduce re-radiation losses and to improve the uniformity of the heat flux distribution on the absorbers. It produced thermal efficiencies of up to 71% and optical efficiencies of up to 80% for practically sized receivers. The optimal receiver was compared to a widely used parabolic trough receiver, the vacuum tube receiver, and performed on average 4% more efficiently across the temperature range of 50-210 °C. In summary, the larger a Heteroconical receiver is, the higher its optical efficiency but the lower its thermal efficiency.
Hence, careful consideration is needed when determining the cone angle and concentration ratio of the receiver. Absorber tube diameter was found to have no significant effect on receiver performance, but tube position within the cavity plays a vital role. Overall, the Heteroconical receiver successfully reduced energy losses and proved to be a high-performance solar thermal tubular cavity receiver.
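The thesis's optical model solves the rendering equation by Monte-Carlo ray tracing (via SolTrace). As a toy illustration of the underlying idea only — not the thesis's model, and with hypothetical geometry and slope-error values — the sketch below fires random rays at a parabolic trough with Gaussian mirror slope errors and counts the fraction intercepted by a flat absorber at the focal line:

```python
import math
import random

def intercept_fraction(focal_len=0.8, aperture=1.0, absorber_halfwidth=0.02,
                       slope_error_mrad=3.0, n_rays=100_000, seed=1):
    """Toy Monte-Carlo estimate of the fraction of incoming rays that a
    flat absorber at the focal line intercepts, for a parabolic trough
    mirror with Gaussian slope errors. Illustrative only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        x = rng.uniform(-aperture / 2, aperture / 2)  # ray position on aperture
        y = x * x / (4 * focal_len)                   # parabola y = x^2 / 4f
        path = math.hypot(x, focal_len - y)           # mirror point -> focus
        # a mirror slope error of e radians deflects the reflected ray by 2e
        err = rng.gauss(0.0, 2 * slope_error_mrad * 1e-3)
        if abs(path * math.tan(err)) < absorber_halfwidth:
            hits += 1
    return hits / n_rays

print(intercept_fraction())
```

With the seed fixed the estimate is reproducible; increasing the slope error lowers the intercepted fraction, mirroring how optical errors degrade intercept factor in a real trough.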
372

Tracing Control with Linux Tracing Toolkit, next generation in a Containerized Environment

Ravi, Vikhram January 2021 (has links)
5G is becoming reality, with companies rolling out the technology around the world. In 5G, the Radio Access Network (RAN) is moving from a monolithic architecture to a cloud-based microservice architecture in order to simplify deployment and manageability and to explore scalability and flexibility. The transition of functionality from proprietary hardware-based systems to more distributed and flexible virtualized systems is thus ongoing. In such systems, performance monitoring remains relevant, and system tracing plays an important role: system tracing is essential for analysing the performance of any given system. However, current tools were designed with monolithic architectures in mind, so new tracing tools need to be developed for distributed architectures. System tracing often requires special permissions to be executed in applications running in a virtualized third-party environment. Unfortunately, not all applications running in a distributed virtualized environment can be given such access without risking the security and stability of the system, yet tracing data still needs to be collected from applications running in these environments. This thesis addresses the challenge of remotely configuring and controlling a system tracing tool, using LTTng as the example, in applications that run as part of a distributed virtualized environment with Kubernetes. We explore the problem of remotely controlling and configuring system tracing as well as optimizing data collection. The main outcome is a tool able to remotely control and configure system tracing tools, together with a proof-of-concept and working demos for basic system tracing commands. It was discovered that a relay-based solution can be exposed outside the cluster via a node port, which can relay incoming requests onwards to any number of microservices.
However, discovery of the microservices that are running system tracing tools is critical, so service discovery mechanisms were introduced to the system for this purpose. Tracing data that is saved locally can be extracted by the user through the relay-based solution or sent directly to any remote system using the LTTng relay daemon. The response times of commands executed directly in a bash shell and through the remote CLI were compared: overall, Linux and LTTng commands sent through the remote CLI took 1.96 times longer than direct execution in a bash shell. This is attributed to the commands travelling over the network within the Kubernetes cluster, which is the cost of being able to remotely control and configure system tracing tools. That said, many steps can still be taken to improve the solution and develop it towards production readiness.
373

Polarization and Hyperspectral Imaging for Synthetic Scene Rendering

Junjie Wang (17130997) 27 November 2023 (has links)
Polarization and spectral imaging technology has wide application prospects and economic value in environmental detection, target recognition, remote sensing and industrial inspection. However, acquiring hyperspectral or spectro-polarimetric imaging data is generally difficult and expensive. This study aims to develop a synthetic thermal imaging dataset using computer simulation, exploring the performance of the Monte-Carlo path tracing algorithm in spectroscopy and thermal imaging. The goal is to provide a novel tool for effective and accurate dataset generation for training thermal imaging neural networks.
374

Vat Photopolymerization of High-Performance Materials through Investigation of Crosslinked Network Design and Light Scattering Modeling

Feller, Keyton D. 08 June 2023 (has links)
The reliance on low-viscosity, photoactive resins limits the accessible properties of vat photopolymerization (VP) materials for engineering applications, and has limited the adoption of VP for end-use parts, which typically require high-molecular-weight polymers and/or more stable chemical functionality. Decoupling the viscosity-molecular weight relationship for VP resins has recently been achieved for polyimides and high-performance elastomers by photocuring a scaffold around polymer precursors or polymer nanoparticles, respectively. Both material systems are first shaped by printing a green part, followed by thermal post-processing to achieve the final part properties. This dissertation focuses on improving the processability of these material systems by (i) investigating the impact of scaffold architecture and polysalt monomer composition on photocuring, thermal post-processing, and the resulting thermomechanical properties, and (ii) developing a Monte Carlo ray-tracing (MCRT) simulation to predict light scattering and photocuring behavior in particle-filled resins, specifically zinc oxide nanoparticles in a rigid polyester resin and a styrene-butadiene rubber latex resin. The first portion of the dissertation introduces VP of a tetra-acid and half-ester-based polysalt resin derived from 4,4'-oxydiphthalic anhydride and 4,4'-oxydianiline (ODPA-ODA), a fully aromatic polyimide with high glass transition temperature and thermal stability. Such polyimides find use in demanding aerospace, automotive and electronics applications. The author evaluated the hypothesis that a non-bound triethylene glycol dimethacrylate (TEGDMA) scaffold would facilitate more efficient scaffold burnout and thus yield parts with reduced off-gassing potential at elevated temperatures. Both resins demonstrated photocuring and printed solid and complex latticed parts.
When thermally processed to 400 °C, only 3% of the TEGDMA scaffold remained within the final parts. The half-ester resin exhibits a higher char yield, resulting from partial degradation of the polyimide backbone, potentially caused by a lack of solvent retention limiting the imidization conversion. The tetra-acid exhibits a Tg of 260 °C, while the half-ester displays a higher Tg of 380 °C caused by degradation of the polymer backbone forming residual char that restricts chain mobility. Solid parts displayed a phase-separated morphology, while the half-ester latticed parts appear solid, indicating that solvent removal occurs faster in the half-ester composition, presumably due to its reduced polar acid functionality. This platform and scaffold architecture enables a modular approach to producing novel, easily customizable UV-curable polyimides, increasing the variety and the accessible properties of polyimides printed through VP. The second section of this dissertation describes the creation and validation of an MCRT simulation to predict light scattering and the resulting photocured shape of a ZnO-filled resin nanocomposite. Relative to prior MCRT simulations in the literature, this approach requires only simple, easily acquired inputs gathered from dynamic light scattering, refractometry, UV-vis spectroscopy, beam profilometry, and VP working curves to produce 2D exposure distributions. The concentration of 20 nm ZnO was varied from 1 to 5 vol% and exposed to a 7×7 pixel square (250 µm) for 5 to 11 s. Compared to experimentally produced cure profiles, the MCRT simulation predicted cure depth within 10% (15 µm) and cure width within 30% (20 µm), below the controllable resolution of the printer. Despite this success, the study was limited to small particles and low loadings to avoid polycrystalline particles and maintain dispersion stability for the duration of the experiments.
The MCRT simulation was then expanded to latex-based resins, which comprise polymer nanoparticles that are amorphous, homogeneous, and colloidally stable, allowing validation of the MCRT with larger particles (100 nm) at higher loadings. Simulated cure profiles of styrene-butadiene rubber (SBR) at loadings from 5 vol% to 25 vol% predicted cure depths within 20% (100 µm) and cure widths within 50% (100 µm) of experimental values. The error for the latex-based resin is significantly higher than for the ZnO resin, potentially caused by the green part shrinking as the resin's water evaporates, which introduces error when experimentally measuring the cure profiles. This dissertation demonstrates the development of novel, functional materials together with process-related improvements: a materials platform for the future development of unique photocurable engineering polymers and a corresponding physics-based model to aid in processing. / Doctor of Philosophy / Vat photopolymerization (VP) is a 3D printing process that uses ultraviolet (UV) light to selectively cure liquid photosensitive resin into a solid part in a layer-by-layer fashion. Parts produced with VP exhibit a smooth surface finish and fine features of less than 100 µm (roughly the width of a human hair). Recoating the liquid resin for each layer limits VP to low-viscosity resins, and thus limits the molecular weight (and hence performance) of the printed polymers. Low-molecular-weight materials are limited in achieving desirable properties such as elongation, strength, and heat resistance. Solvent-based resins, such as polysalt and latex resins, have demonstrated the ability to decouple the viscosity-molecular weight relationship by eliminating polymer entanglements, using low-molecular-weight precursors or isolating high-molecular-weight polymers into particles.
This dissertation focuses on expanding and improving the printability of these methods. The second chapter of the dissertation investigates the impact of scaffold architecture in printing polyimide polysalts to improve scaffold burnout. Polysalts are polymers that exist as dissolved salts in solution, with each monomer holding two electronic charges. When heated, the solvent evaporates and the monomers react to form a high molecular-weight polymer. While previous work featured a polysalt that was covalently bonded to the monomers, the polysalt in this work is made printable by co-dissolving a scaffold. The polysalt resins are photocured and thermally processed to polymerize and imidize into a high-molecular-weight polymer, while simultaneously pyrolyzing the scaffold. Using a co-dissolved scaffold allows the investigation of two different monomers of tetra-acid and half-ester functionality. The half-ester composition underwent degradation during heating, increasing the printed parts' glass transition or softening point. The scaffold had little impact on the polysalt polymerization or final part properties and was efficiently removed, with only 3% remaining in final parts. The composition and properties of the monomers selected played a bigger role due to partial degradation altering the properties of the final parts. Overall, this platform and scaffold architecture allows for a larger number of polyimides to be accessible and easily customizable for future VP demands. The third chapter describes the challenges of processing photocurable resins that contain particles due to the UV light scattering in the resin vat during printing. When the light from the printer hits a particle, it is scattered in all directions causing the layer shape to be distorted from the designed shape. To overcome this, a Monte Carlo ray-tracing (MCRT) simulation was developed to mimic light rays scattering within the resin vat. 
The simulation was validated by comparing its results against experimental trials of photocuring resins containing 20 nm zinc oxide (ZnO) nanoparticles. The MCRT simulation predicted all the experimental cure depths within 10% (20 µm) and cure widths within 30% (15 µm) error. Despite the high accuracy, this study was limited to small particles and low concentrations: simulating larger particles is difficult because the simulation assumes each particle is uniform throughout its volume, which is atypical of large ceramic particles. The fourth chapter enables high particle volume loadings by using a highly stretchable styrene-butadiene rubber (SBR) latex-based resin. Latex-based resins maintain low viscosity by separating large polymer chains into nanoparticles that are noncrystalline and uniform; when the chains are separated they cannot interact or entangle, keeping the viscosity low even at high concentrations (>30 vol%). Like the ZnO-filled resin, the latex resin was experimentally cured and the MCRT simulation predicted the resulting cure shape, giving cure depths within 20% (100 µm) and over-cure widths within 50% (100 µm) of experimental values. This error is substantially higher than in the ZnO work and is believed to be caused by water evaporating from the cured resin, resulting in inconsistent measurements of the cured dimensions.
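The abstract calibrates its simulations against "VP working curves". The standard Jacobs working-curve equation relates cure depth to applied exposure, Cd = Dp·ln(E0/Ec); a minimal sketch follows, with hypothetical resin constants (the Dp and Ec values below are illustrative, not the dissertation's measured values):

```python
import math

def cure_depth(e0, dp, ec):
    """Jacobs working-curve equation for vat photopolymerization:
    Cd = Dp * ln(E0 / Ec) for E0 > Ec; below the critical exposure Ec
    no gelation occurs. e0 and ec share units (e.g. mJ/cm^2); the
    returned depth has the units of dp (e.g. um)."""
    if e0 <= ec:
        return 0.0
    return dp * math.log(e0 / ec)

# hypothetical resin: Dp = 150 um penetration depth, Ec = 10 mJ/cm^2
print(cure_depth(100.0, 150.0, 10.0))  # 150 * ln(10), about 345.4 um
```

Scattering particles such as ZnO or latex effectively shorten the penetration depth Dp and widen the cured line, which is why the dissertation needs an MCRT simulation rather than the working curve alone to predict cured shape.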
375

An experimental analysis of Link Prediction methods over Microservices Knowledge Graphs

Ruberto, Gianluca January 2023 (has links)
Graphs are a powerful way to represent data. They can be seen as a collection of objects (nodes) and the relationships between them (edges or links). The power of this structure lies in the relationships between data points, which can provide even more information than the data properties themselves. An important type of graph is the knowledge graph, in which each node and edge has an associated type. Graph data is often incomplete, in which case useful information cannot be retrieved. Link prediction, also known as knowledge graph completion, is the task of inferring missing edges or nodes in a graph. Models of different types, including machine learning-based, rule-based, and neural network-based models, have been developed to address this problem. The goal of this research is to understand how link prediction methods perform in a real use-case scenario. Multiple models have therefore been compared on different accuracy metrics and production requirements using a microservice tracing dataset. Models were trained and tested on two different knowledge graphs obtained from the data, one that takes temporal information into account and one that does not. Moreover, the models' predictions were evaluated both as is usually done in the literature and by mimicking a real use-case scenario. The comparison showed that overly complex models cannot be used when time, in the training and/or inference phase, is critical. The best model for traditional prediction was RotatE, which usually doubled the score of the second-best model. In the use-case scenario, RotatE was tied with QuatE, which required far more time for training and prediction; both scored 20% to 40% better than the third-best performing model, depending on the case. Moreover, most of the models required less than a millisecond to predict a triplet, with NodePiece being the fastest, beating ConvE by a 4% margin.
For training time, NodePiece beats AnyBURL by 40%. Considering memory usage, NodePiece is again the best, by at least an order of magnitude compared to most of the other models. RotatE was considered the best model overall because it had the best accuracy and above-average performance on the other requirements. Additionally, a simulation of the integration of RotatE with a dynamic-sampling tracing tool was carried out, showing results similar to those previously obtained. Lastly, a thorough analysis of the results and suggestions for future work are presented.
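For readers unfamiliar with RotatE (Sun et al., 2019), the model this thesis found most accurate: it embeds entities as complex vectors and each relation as a rotation in the complex plane, scoring a triple by the distance between the rotated head and the tail. A minimal, self-contained sketch with toy dimensions (not the thesis's implementation) might look like:

```python
import math

def rotate_score(head, relation_phase, tail):
    """Distance-based RotatE score: each relation is a unit-modulus
    rotation applied element-wise to the head embedding; a lower
    distance to the tail (i.e. a higher, less negative score) means a
    more plausible triple. head/tail: lists of complex numbers;
    relation_phase: list of rotation angles in radians."""
    assert len(head) == len(relation_phase) == len(tail)
    dist = 0.0
    for h, phi, t in zip(head, relation_phase, tail):
        r = complex(math.cos(phi), math.sin(phi))  # |r| = 1 rotation
        dist += abs(h * r - t)
    return -dist

# a tail that exactly equals the rotated head gets the best score (0)
h = [complex(1, 0), complex(0, 1)]
phi = [math.pi / 2, math.pi]
t = [h[i] * complex(math.cos(phi[i]), math.sin(phi[i])) for i in range(2)]
print(rotate_score(h, phi, t) == 0.0)  # → True
```

QuatE generalizes the same idea to quaternion rotations, which is consistent with the thesis's observation that it matched RotatE's accuracy at a higher training cost.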
376

From Reassurance to Deterrence : Tracing Small State Influence Within NATO Decision-Making

Lehto, Jesper January 2023 (has links)
This study investigates how small states can employ different strategies to overcome structural disadvantages and exert influence within NATO. To this end, the establishment of the Enhanced Forward Presence (eFP) is used as a case study to explore how the three Baltic states – Estonia, Latvia and Lithuania – undertook deliberate efforts during the decision-making process to advance their long-held policy preferences. As part of a reinvigorated strategic approach, the eFP multinational battlegroups would ultimately constitute leading elements of the alliance's deterrence and defence posture. Through careful process-tracing, this study illustrates how the Baltic states, both as a collective trio and within a larger coalition, mitigated their structural disadvantages by joining efforts to influence others. The cohesiveness among eastern allies, and moreover the concrete and tangible policy preferences of the Baltic states, benefited discussions in which allies' diverging strategic preferences needed to be reconciled into an equitable compromise. The study includes eleven semi-structured interviews with senior officials who were directly or indirectly involved in the decision-making process, providing credible, first-hand insights into the internal development of NATO's strategic shift.
377

Organization of prefrontal and premotor layer-specific pathways in rhesus monkeys

Bhatt, Hrishti 16 February 2024 (has links)
The Lateral Prefrontal Cortex (LPFC) and the Dorsal Premotor cortex (PMd) are two cortical structures involved in cognitive processes such as motor planning and decision-making. The LPFC is extensively connected to sensory, somatosensory, and motor cortices that help it control several cognitive functions [for review, see: (Tanji & Hoshi, 2008)]. Similarly, the PMd integrates information from the prefrontal and motor cortices, acting as a link in action planning and decision-making [for review, see: (Hoshi & Tanji, 2007)]. It is therefore important to study the cortical pathways between these areas because of their common role in processing and selecting relevant information in tasks requiring decision-making. Using neural tract-tracing, immunolabeling and microscopy in rhesus monkeys (M. mulatta), we assessed the distribution and layer-specific organization of projection neurons from LPFC area 46 and PMd area 6 directed to LPFC area 9. Our study revealed that projection neurons to area 9 originated from both upper (L2-3) and deep (L5-6) layers of both areas, with a slight upper layer bias. LPFC area 46 had a higher density of projection neurons directed to LPFC area 9 than PMd area 6. Additionally, our data revealed laminar differences in the perisomatic parvalbumin (PV) inhibitory inputs onto area 9 projection neurons, which depended on the area of origin. Within ventral LPFC area 46, perisomatic PV+ inhibitory inputs onto upper layer projection neurons to area 9 were greater than those onto deep layer projection neurons. The opposite pattern was found for PMd area 6DR, where perisomatic PV+ inhibition onto deep layer projection neurons to area 9 was greater than that onto upper layer neurons. These findings provide additional insights into the layer-specific organization of prefrontal and premotor pathways that play an important role in action planning and decision-making.
378

Investigating Causes of Jitter in Container Networking / Undersökning av orsaker till jitter i containernätverken

Maurer, Felix January 2021 (has links)
Clustered container infrastructures are increasingly popular for deploying applications. Networking in these clusters is provided by specialized container networking solutions that often lead to complex network configurations on the nodes hosting the containers; they can therefore have a significant impact on the performance of the applications hosted in the cluster. While the throughput achievable with container networking solutions is regularly studied, the latency, and consequently the jitter, they introduce is often underreported. This thesis investigates the latency and jitter introduced by packet processing in the Linux kernel under different container networking solutions. This requires very detailed data about the processing of packets, which existing tracing tools for Linux fail to provide. Therefore, a custom tracing application is developed using eBPF that focuses on the flow of packets through the kernel. The application is evaluated and then used to compare the latency and jitter behavior of commonly used container networking solutions. The results show that the choice of transport protocol for real-time applications has a significant impact on the latency introduced by the kernel, irrespective of the container networking. Also, some container networking solutions fall short of providing their proclaimed benefits in their default configurations. This highlights the need for performance evaluation in environments representative of the production setting, and for tuning the configuration of container networking solutions and system resources to match the requirements of real-time use cases. The data also show a need for more lightweight tracing technologies for packet processing.
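As a small illustration of the post-processing such a tracer's output requires (a hedged sketch, not the thesis's eBPF application; the sample latencies below are made up), jitter can be summarized from per-packet latency samples as the mean absolute difference between consecutive samples, a simplified form of the RFC 3550 inter-arrival jitter, plus a tail percentile:

```python
import statistics

def jitter_stats(latencies_us):
    """Summarize per-packet kernel latency samples (microseconds):
    mean, jitter (mean absolute difference between consecutive
    samples, a simplification of RFC 3550's smoothed estimator),
    and the 99th-percentile tail latency."""
    deltas = [abs(b - a) for a, b in zip(latencies_us, latencies_us[1:])]
    ordered = sorted(latencies_us)
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    return {
        "mean_us": statistics.fmean(latencies_us),
        "jitter_us": statistics.fmean(deltas) if deltas else 0.0,
        "p99_us": p99,
    }

samples = [105, 98, 120, 101, 250, 99, 103]  # hypothetical measurements
print(jitter_stats(samples))
```

The tail percentile matters here because a single slow packet (the 250 µs outlier above) dominates perceived jitter in real-time use cases even when the mean looks healthy.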
379

Development of a Hybrid, Finite Element and Discrete Particle-Based Method for Computational Simulation of Blood-Endothelium Interactions in Sickle Cell Disease

Blakely, Ian Patrick 10 August 2018 (has links)
Sickle cell disease (SCD) is a severe genetic disease, affecting over 100,000 people in the United States and millions worldwide. Individuals suffer from stroke, acute chest syndrome, and cardiovascular complications. Many of these morbidities are mediated primarily by blockages of the microvasculature, events termed vaso-occlusive crises (VOCs). Despite its prevalence and severity, the pathophysiological mechanisms behind VOCs are not well understood, and novel experimental tools and methods are needed to further this understanding. Microfluidics and computational fluid dynamics (CFD) are rapidly growing fields within biomedical research that allow inexpensive simulation of the in vivo microenvironment prior to animal or clinical trials. This study develops a CFD model capable of simulating diseased and healthy blood flow within a series of microfluidic channels. The results will be used to further improve the development of microfluidic systems.
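A back-of-envelope check often made before running full CFD of microchannel flow is the Hagen-Poiseuille pressure drop. The sketch below assumes laminar Newtonian flow in a circular channel (blood is actually non-Newtonian, and the channel dimensions and viscosity are hypothetical, not taken from this study):

```python
import math

def poiseuille_pressure_drop(q_m3s, radius_m, length_m, mu_pa_s):
    """Hagen-Poiseuille pressure drop dP = 8*mu*L*Q / (pi*r^4) for
    steady laminar flow of a Newtonian fluid in a circular channel.
    All arguments in SI units; returns pascals."""
    return 8 * mu_pa_s * length_m * q_m3s / (math.pi * radius_m ** 4)

# hypothetical microchannel: 50 um radius, 1 cm long, plasma-like viscosity
dp = poiseuille_pressure_drop(q_m3s=1e-10, radius_m=50e-6,
                              length_m=0.01, mu_pa_s=1.2e-3)
print(round(dp, 1))  # pressure drop in Pa
```

The r⁴ dependence is the physical core of vaso-occlusion: even a modest narrowing of a microvessel raises the pressure needed to sustain flow dramatically, which is why cell-scale blockages matter so much.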
380

Hybrid Rendering in 3D Map-Based Grand Strategy Games / Hybrid renderering i 3D kartbaserade strategispel

Buckard, Kajsa January 2022 (has links)
Ray tracing carries a tremendous computational cost [1]. Keller et al. therefore noted that costs can be reduced by implementing a hybrid rendering pipeline that combines rasterization and ray tracing, an approach already introduced in the film and game industries. Such a rendering method had been unexplored within Grand Strategy Games (GSG), whose standard rendering method has been rasterization. Implementing hybrid rendering for GSG would allow this niche to follow continuously developing rendering techniques. This thesis therefore examined the advantages and disadvantages of hybrid rendering compared to a path-traced pipeline. The study measured different camera angles applied to three GSG-inspired scenes, evaluating rendering time and quality via pixel-by-pixel comparison with a focus on effects such as shadows and reflections. Close-up images of the rendered scenes were taken to evaluate points of interest. Steady time performance across all angles was the significant advantage of the hybrid pipeline. Lower camera angles resulted in an increased difference in shadows and reflections for two of the three scenes. The full pixel-by-pixel comparison did not yield more than a ten percent difference for any scene, and no more than a twelve percent difference on close-up images. Still, differences were noticeable to the eye, since the path tracer was superior at producing sharp shadows. The hybrid pipeline generated massive reflections compared to the path tracer; since the path tracer was defined as the ground truth, this quantity of reflections was not considered positive. The thesis concludes that a simple hybrid rendering pipeline could be an exciting future for GSG, especially for camera angles above 67.25°. Additionally, improving the sharpness of the hybrid pipeline's shadows could increase interest in hybrid rendering for GSG even at angles below 67.25°.
Suggested future work includes rendering advanced 3D map-based GSG scenes with more shadows and reflections, as well as a qualitative analysis of users playing a game under the two rendering pipelines, followed by a user study on their possibly improved graphical experience and how the gameplay experience is affected.
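The pixel-by-pixel quality comparison the study relies on can be sketched as the share of pixels whose channel-wise difference from the ground-truth render exceeds a tolerance (this plain-Python formulation, including the tolerance parameter, is an assumed metric for illustration, not the thesis's exact implementation):

```python
def pixel_diff_percent(img_a, img_b, tol=0):
    """Percentage of pixels differing between two equal-size images.

    img_a, img_b: equal-length lists of (r, g, b) tuples. A pixel counts
    as differing if any channel deviates by more than tol.
    """
    assert len(img_a) == len(img_b), "images must have the same pixel count"
    differing = sum(
        1 for pa, pb in zip(img_a, img_b)
        if any(abs(ca - cb) > tol for ca, cb in zip(pa, pb))
    )
    return 100.0 * differing / len(img_a)

# Example: two 4-pixel "images" differing in one pixel's red channel.
path_traced = [(0, 0, 0), (10, 10, 10), (20, 20, 20), (30, 30, 30)]
hybrid = [(0, 0, 0), (10, 10, 10), (20, 20, 20), (90, 30, 30)]
```

With the path-traced frame treated as ground truth, thresholds like the ten and twelve percent figures reported above become direct comparisons against a metric of this shape.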
