141

Maximizing performance gain of Variable Rate Shading tier 2 while maintaining image quality: Using post processing effects to mask image degradation

Lind, Fredrik, Diaz Escalante, Andrés January 2021
Background. Performance optimization is of great importance for games, as it constrains the possible content and complexity of systems. Modern games support high-resolution rendering, but higher resolutions require more pixels to be computed, and solutions are needed to reduce this workload. Currently used methods include uniformly lowering the shading rate across the whole screen to reduce the number of pixels needing computation. Variable Rate Shading is a new hardware-supported technique with several functionality tiers. Tier 1 is similar to previous methods in that it lowers the shading rate for the whole screen. Tier 2 supports screen space image shading: different shading rates can be set across the screen, which gives developers the choice of where and when to set specific shading rates. Objectives. The aim of this thesis is to examine how close Variable Rate Shading tier 2 screen space shading can come to the performance gains of Variable Rate Shading tier 1 while maintaining acceptable image quality with the help of commonly used post-processing effects. Methods. A lightweight scene is set up, and Variable Rate Shading tier 2 methods are tuned to an acceptable image quality as a baseline. Performance is evaluated by measuring the times of the specific passes required by, and affected by, Variable Rate Shading. Image quality is measured by capturing image sequences with Variable Rate Shading off as a reference, then with Variable Rate Shading tier 1 and several tier 2 methods, and comparing them with the Structural Similarity Index. Results. The highest measured performance gain from tier 2 was 28.0%, obtained by using edge detection to create the shading rate image at 3840x2160 resolution. This corresponds to 36.7% of the performance gain of tier 1, but with better image quality: an SSIM value of 0.960 against tier 1's 0.802, corresponding to good and poor image quality respectively.
Conclusions. Variable Rate Shading tier 2 shows great potential for increasing performance while maintaining image quality, especially with edge detection. Post-processing effects are effective at maintaining good image quality. Performance gains also scale well, as they increase with higher resolutions.
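The SSIM-based image quality comparison described in the evaluation can be illustrated with a minimal single-window SSIM computation in NumPy. This is a sketch of the standard formula only, not the thesis' evaluation code (which would typically use a windowed SSIM over image patches); the images here are synthetic stand-ins.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images (standard Wang et al. formula)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))  # stand-in for a full-shading-rate render
# stand-in for a reduced-shading-rate render with artifacts
degraded = np.clip(reference + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)

print(global_ssim(reference, reference))  # identical images -> 1.0
print(global_ssim(reference, degraded))   # lower score for degraded render
```

Identical images score exactly 1.0; the more the reduced-rate render deviates from the reference, the lower the score, which is how the thesis ranks tier 1 against the tier 2 methods.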
142

Towards a Multifaceted Understanding of Host Resistance and Pathogenicity in Rice Sheath Blight and Blast Diseases

Lee, Dayoung 28 August 2019
No description available.
143

Structural Condition Assessment of a Parking Deck using Ground Penetrating Radar

Neupane, Garima 03 August 2020
No description available.
144

Intimt eller sexuellt deepfakematerial? : En analys av fenomenet ‘deepfake pornografi’ som digitalt sexuellt övergrepp inom det EU-rättsliga området / Intimate or sexual deepfake material? : An analysis of the phenomenon ’deepfake pornography’ as virtual sexual abuse in the legal framework of the European Union

Skoghag, Emelie January 2023
No description available.
145

Interactive Depth-Aware Effects for Stereo Image Editing

Abbott, Joshua E. 24 June 2013
This thesis introduces methods for adding user-guided depth-aware effects to images captured with a consumer-grade stereo camera with minimal user interaction. In particular, we present methods for highlighted depth-of-field, haze, depth-of-field, and image relighting. Unlike many prior methods for adding such effects, we do not assume prior scene models or require extensive user guidance to create such models, nor do we assume multiple input images. We also do not require specialized camera rigs or other equipment such as light-field camera arrays, active lighting, etc. Instead, we use only an easily portable and affordable consumer-grade stereo camera. The depth is calculated from a stereo image pair using an extended version of PatchMatch Stereo designed to compute not only image disparities but also normals for visible surfaces. We also introduce a pipeline for rendering multiple effects in the order they would occur physically. Each can be added, removed, or adjusted in the pipeline without having to reapply subsequent effects. Individually or in combination, these effects can be used to enhance the sense of depth or structure in images and provide increased artistic control. Our interface also allows editing the stereo pair together in a fashion that preserves stereo consistency, or the effects can be applied to a single image only, thus leveraging the advantages of stereo acquisition even to produce a single photograph.
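A depth-aware depth-of-field effect like the one described is commonly driven by the thin-lens circle of confusion, which maps each pixel's (stereo-derived) depth to a blur diameter. The sketch below shows the standard optics formula with illustrative parameter values; it is not the thesis' implementation.

```python
def coc_diameter(z, z_focus, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for a point at depth z (metres).

    c = A * f * |z - z_focus| / (z * (z_focus - f))
    Points on the focal plane map to a single point (diameter 0);
    blur grows with distance from the focal plane.
    """
    return abs(aperture * focal_len * (z - z_focus) /
               (z * (z_focus - focal_len)))

# Illustrative lens: 50 mm focal length, 10 mm aperture, focused at 2 m.
print(coc_diameter(2.0, 2.0, 0.05, 0.01))  # in focus -> 0.0
print(coc_diameter(4.0, 2.0, 0.05, 0.01))  # behind focal plane -> larger blur
```

In a depth-aware pipeline each pixel's blur kernel radius is then scaled from this diameter, which is why a per-pixel depth map (here from PatchMatch Stereo) is sufficient to drive the effect.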
146

Preclinical Incorporation Dosimetry of [18F]FACH—A Novel 18F-Labeled MCT1/MCT4 Lactate Transporter Inhibitor for Imaging Cancer Metabolism with PET

Sattler, Bernhard, Kranz, Mathias, Wenzel, Barbara, Jain, Nalin T., Moldovan, Rareş-Petru, Toussaint, Magali, Deuther-Conrad, Winnie, Ludwig, Friedrich-Alexander, Teodoro, Rodrigo, Sattler, Tatjana, Sadeghzadeh, Masoud, Sabri, Osama, Brust, Peter 20 April 2023
Overexpression of monocarboxylate transporters (MCTs) has been shown for a variety of human cancers (e.g., colon, brain, breast, and kidney), and their inhibition results in intracellular lactate accumulation, acidosis, and cell death. Thus, MCTs are promising targets for investigating tumor metabolism with positron emission tomography (PET). Here, the organ doses (ODs) and the effective dose (ED) of the first 18F-labeled MCT1/MCT4 inhibitor were estimated in juvenile pigs. Whole-body dosimetry was performed in three piglets (age: ~6 weeks, weight: ~13–15 kg). The animals were anesthetized and subjected to sequential hybrid positron emission tomography and computed tomography (PET/CT) up to 5 h after an intravenous (iv) injection of 156 ± 54 MBq [18F]FACH. All relevant organs were defined by volumes of interest. Exponential curves were fitted to the time–activity data. Time and mass scales were adapted to the human order of magnitude, and the ODs were calculated using the ICRP 89 adult male phantom with OLINDA 2.1. The ED was calculated using the tissue weighting factors published in Publication 103 of the International Commission on Radiological Protection (ICRP103). The highest organ dose was received by the urinary bladder (62.6 ± 28.9 µSv/MBq), followed by the gall bladder (50.4 ± 37.5 µSv/MBq) and the pancreas (30.5 ± 27.3 µSv/MBq). The highest contribution to the ED was from the urinary bladder (2.5 ± 1.1 µSv/MBq), followed by the red marrow (1.7 ± 0.3 µSv/MBq) and the stomach (1.3 ± 0.4 µSv/MBq). According to this preclinical analysis, the ED to humans is 12.4 µSv/MBq when applying the ICRP103 tissue weighting factors. Taking into account that preclinical dosimetry underestimates the dose to humans by up to 40%, the conversion factor applied for estimation of the ED to humans would rise to 20.6 µSv/MBq. In this case, the ED to humans upon an iv application of ~300 MBq [18F]FACH would be about 6.2 mSv.
This risk assessment encourages the translation of [18F]FACH into clinical study phases and the further investigation of its potential as a clinical tool for cancer imaging with PET.
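The dose extrapolation in the abstract can be reproduced with simple arithmetic. This is a back-of-the-envelope check assuming the stated "up to 40% underestimation" is corrected by dividing by 0.6; the numbers come from the abstract itself.

```python
# Preclinical effective dose from the ICRP103 tissue weighting factors.
ed_preclinical = 12.4   # µSv/MBq
underestimation = 0.40  # preclinical dosimetry may underestimate by up to 40%

# Correcting for the underestimation: 12.4 / 0.6 ≈ 20.7 µSv/MBq
# (the abstract reports 20.6 µSv/MBq).
ed_human = ed_preclinical / (1.0 - underestimation)
print(round(ed_human, 1))

# Total effective dose for a typical iv application of ~300 MBq.
activity_mbq = 300.0
ed_total_msv = ed_human * activity_mbq / 1000.0  # µSv -> mSv
print(round(ed_total_msv, 1))                    # ~6.2 mSv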
147

Interactive Mesostructures

Nykl, Scott L. January 2013 (has links)
No description available.
148

Controllable 3D Effects Synthesis in Image Editing

Yichen Sheng (18184378) 15 April 2024
3D effect synthesis is crucial in image editing to enhance realism or visual appeal. Unlike classical graphics rendering, which relies on complete 3D geometries, 3D effect synthesis in image editing operates solely with 2D images as inputs. This shift presents significant challenges, primarily addressed by data-driven methods that learn to synthesize 3D effects in an end-to-end manner. However, these methods face limitations in the diversity of 3D effects they can produce and lack user control. For instance, existing shadow generation networks are restricted to producing hard shadows without offering any user input for customization.

In this dissertation, we tackle the research question: how can we synthesize controllable and realistic 3D effects in image editing when only 2D information is available? Our investigation leads to four contributions. First, we introduce a neural network designed to create realistic soft shadows from an image cutout and a user-specified environmental light map. This approach is the first attempt at utilizing a neural network for realistic soft shadow rendering in real time. Second, we develop a novel 2.5D representation, Pixel Height, tailored to the nuances of image editing. This representation not only forms the foundation of a new soft shadow rendering pipeline that provides intuitive user control, but also generalizes soft shadow receivers to general shadow receivers. Third, we present the mathematical relationship between the Pixel Height representation and 3D space. This connection facilitates the reconstruction of normals or depth from 2D scenes, broadening the scope for synthesizing comprehensive 3D lighting effects such as reflections and refractions. 3D-aware buffer channels are also proposed to improve the synthesized soft shadow quality. Lastly, we introduce Dr.Bokeh, a differentiable bokeh renderer that extends traditional bokeh effect algorithms with better occlusion modeling to correct flaws in existing methods. With more precise lens modeling, we show that Dr.Bokeh not only achieves state-of-the-art bokeh rendering quality, but also pushes the boundary of the depth-from-defocus problem.

Our work in controllable 3D effect synthesis represents a pioneering effort in image editing, laying the groundwork for future lighting effect synthesis in various image editing applications. Moreover, the improvements to filtering-based bokeh rendering could significantly enhance commercial products, such as the portrait mode feature on smartphones.
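The Pixel Height idea can be illustrated with a toy hard-shadow projection: each foreground pixel stores its height (in pixels) above its ground contact point, and its shadow lands where a ray through the pixel hits the ground plane. This is an illustrative sketch of the concept under simplified assumptions (integer grid, single directional light parameterized by a horizontal slope), not the dissertation's renderer.

```python
import numpy as np

def cast_hard_shadow(height, mask, light_slope):
    """Project each foreground pixel onto the ground along the light direction.

    height[r, c]: pixel height above the ground (in pixels) for foreground pixels.
    mask[r, c]:   boolean foreground cutout.
    light_slope:  horizontal shadow offset per unit of height.
    Returns a boolean shadow mask on the same image grid.
    """
    n_rows, n_cols = mask.shape
    shadow = np.zeros_like(mask)
    for r, c in zip(*np.nonzero(mask)):
        h = height[r, c]
        # The pixel's ground contact lies h rows below it; the shadow point is
        # additionally displaced horizontally in proportion to the height.
        sr = int(r + h)
        sc = int(c + light_slope * h)
        if 0 <= sr < n_rows and 0 <= sc < n_cols:
            shadow[sr, sc] = True
    return shadow

# A 1-pixel-wide "pole" standing on row 6: rows 6..3 with heights 0..3.
mask = np.zeros((10, 10), dtype=bool)
height = np.zeros((10, 10))
for i, r in enumerate(range(6, 2, -1)):
    mask[r, 5] = True
    height[r, 5] = i
shadow = cast_hard_shadow(height, mask, light_slope=1.0)
# The shadow lies along the ground row, slanted away from the light.
```

Soft shadows then follow by treating the light as an area source (many slopes) and accumulating coverage, which is the intuition behind the pipeline's user-controllable softness.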
149

Advances in Modelling, Animation and Rendering

Vince, J.A., Earnshaw, Rae A. January 2002
This volume contains the papers presented at Computer Graphics International 2002, held in July at the University of Bradford, UK. These papers represent original research in computer graphics from around the world.
150

Roadmark reconstruction from stereo-images acquired by a ground-based mobile mapping system

Soheilian Khorzoughi, Bahman 01 April 2008
Despite advances in ground-based Mobile Mapping Systems (MMS), automatic feature reconstruction seems far from being reached. In this thesis, we focus on 3D roadmark reconstruction from images acquired by the road-facing cameras of an MMS stereo rig in a dense urban context. A new approach is presented that uses 3D geometric knowledge of roadmarks and provides centimetric 3D accuracy with a low level of generalisation. Two classes of roadmarks are studied: zebra crossings and dashed lines. The general strategy consists of three main steps. The first step provides 3D linked edges: edges are extracted in the left and right images, then a matching algorithm based on dynamic programming optimisation matches the edges between the two images. A sub-pixel matching is computed by post-processing, and 3D linked edges are obtained by classical photogrammetric triangulation. The second step uses the known specification of roadmarks to perform a signature-based filtering of the 3D linked edges, providing hypothetical candidates for roadmark objects. The last step can be seen as a validation step that rejects or accepts the candidates; the validated candidates are finely reconstructed. The adopted model consists of a quasi-parallelogram for each strip of a zebra crossing or dashed line. Each strip is constrained to be flat, but the roadmark as a whole is not planar. The method is evaluated on a set of 150 stereo pairs acquired in a real urban area under normal traffic conditions. The results show the validity of the approach in terms of robustness, completeness and geometric accuracy. The method is robust and deals properly with partially occluded roadmarks as well as damaged or eroded ones. The detection rate reaches 90% and the 3D accuracy is about 2-4 cm. Finally, an application of the reconstructed roadmarks is presented: they are used in georeferencing of the system. Most MMSs use direct georeferencing devices such as GPS/INS for their localisation. However, in urban areas, masking and multi-path errors corrupt the measurements and provide only 50 cm accuracy. To improve the localisation quality, we aim at matching ground-based images with calibrated aerial images of the same area. For this purpose, roadmarks are used as matching objects. The validity of this method is demonstrated on a zebra-crossing example.
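The classical photogrammetric triangulation step for a rectified stereo rig reduces to the familiar relation Z = f·B/d. The sketch below uses illustrative parameter values, not the thesis' actual calibration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched edge point for a rectified stereo pair.

    Z = f * B / d, with the focal length f in pixels, the stereo baseline B
    in metres, and the disparity d in pixels. Larger disparities mean closer
    points; depth precision degrades quadratically with distance.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 1000 px focal length, 1 m baseline.
# A matched edge with 50 px disparity lies 20 m from the cameras.
print(depth_from_disparity(1000.0, 1.0, 50.0))  # 20.0
```

This is why the sub-pixel refinement of the edge matches matters: at long range a fraction of a pixel of disparity error translates into several centimetres of depth error, the scale of the 2-4 cm accuracy reported above.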
