  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Brian De Palma : une esthétique de la violence ? / Brian De Palma: an aesthetic of violence?

Bchir, Aroussia 24 October 2016 (has links)
Approche esthétique de l’œuvre cinématographique de Brian De Palma. La problématique s’articule entre esthétique du montage et violence de l’image. En premier lieu, le texte interroge le mode de découpage privilégié par Brian De Palma en soulignant l'importance du plan-séquence et du split screen. L'usage du plan-séquence est, notamment, rapporté à la question du défaut de vision, phénomène considéré comme central. Un second moment de cette thèse est consacré à l'étude des personnages. Des personnages anti-héros, marginaux. L'accent est particulièrement mis sur le corps féminin. Regard et voyeurisme reviennent à la question du découpage privilégié par Brian De Palma. Comment Brian De Palma utilise-t-il le regard pour accéder à la violence ? Qu’est-ce que regarder chez Brian De Palma ? Comment les éléments voyeuristes depalmiens se construisent-ils à partir du langage cinématographique ? Le cinéma de Brian De Palma s’annonce aussi savant et complexe, entre classicisme et modernisme. Comment Brian De Palma travaille-t-il l’œuvre hitchcockienne pour offrir une conception nouvelle ? Comment violenter l’image pour extraire son invisible ? / An aesthetic approach to Brian De Palma's cinematographic work. The problem articulates the aesthetics of editing with the violence of the image. First, the text questions the editing style favoured by Brian De Palma, stressing the importance of the sequence shot and the split screen. The use of the sequence shot is notably related to the question of defective vision, a phenomenon considered central. A second part of this thesis is devoted to the study of the characters: anti-heroes, marginal figures. Emphasis is placed in particular on the female body. Gaze and voyeurism bring us back to the question of De Palma's preferred editing. How does Brian De Palma use the gaze to access violence? What does looking mean in Brian De Palma's work? How are De Palma's voyeuristic elements constructed from the language of cinema?
Brian De Palma's cinema also proves to be learned and complex, between classicism and modernism. How does Brian De Palma rework Hitchcock's oeuvre to offer a new conception? How can one do violence to the image in order to extract its invisible dimension?
32

Construcción de la Imagen: El uso de la luz natural bajo la perspectiva de Emmanuel Lubezki en la película El Renacido / Construction of the Image: The use of natural light from the perspective of Emmanuel Lubezki in the film The Revenant

Reynoso Pacheco, Helen Carolina 23 November 2020 (has links)
Este trabajo de investigación analiza el fenómeno comunicacional del estilo fotográfico de Emmanuel Lubezki a través de las aplicaciones narrativas y expresivas de la cámara en la película ‘‘El Renacido’’. Cabe mencionar que se analiza el trabajo de Lubezki partiendo de la luz como elemento fundamental de expresividad. De tal modo que se logra tener una presencia de la belleza panorámica a través de los paisajes naturales que modifican las actitudes receptivas y emocionales del espectador. / This research paper analyzes the communicational phenomenon of Emmanuel Lubezki's photographic style through the camera's narrative and expressive applications in the film 'The Revenant'. It is worth mentioning that Lubezki's work is analyzed with light as the fundamental element of expressiveness, such that the panoramic beauty of the natural landscapes shapes the receptive and emotional attitudes of the spectator. / Trabajo de investigación
33

Controllable 3D Effects Synthesis in Image Editing

Yichen Sheng (18184378) 15 April 2024 (has links)
<p dir="ltr">3D effect synthesis is crucial in image editing to enhance realism or visual appeal. Unlike classical graphics rendering, which relies on complete 3D geometries, 3D effect synthesis in image editing operates solely with 2D images as inputs. This shift presents significant challenges, primarily addressed by data-driven methods that learn to synthesize 3D effects in an end-to-end manner. However, these methods face limitations in the diversity of 3D effects they can produce and lack user control. For instance, existing shadow generation networks are restricted to producing hard shadows without offering any user input for customization.</p><p dir="ltr">In this dissertation, we tackle the research question: <i>how can we synthesize controllable and realistic 3D effects in image editing when only 2D information is available? </i>Our investigation leads to four contributions. First, we introduce a neural network designed to create realistic soft shadows from an image cutout and a user-specified environmental light map. This approach is the first attempt at utilizing neural networks for realistic soft shadow rendering in real time. Second, we develop a novel 2.5D representation, Pixel Height, tailored for the nuances of image editing. This representation not only forms the foundation of a new soft shadow rendering pipeline that provides intuitive user control, but also generalizes soft shadow receivers to general shadow receivers. Third, we present the mathematical relationship between the Pixel Height representation and 3D space. This connection facilitates the reconstruction of normals or depth from 2D scenes, broadening the scope for synthesizing comprehensive 3D lighting effects such as reflections and refractions. 3D-aware buffer channels are also proposed to improve the synthesized soft shadow quality.
Lastly, we introduce Dr.Bokeh, a differentiable bokeh renderer that extends traditional bokeh effect algorithms with better occlusion modeling to correct flaws in existing methods. With the more precise lens modeling, we show that Dr.Bokeh not only achieves state-of-the-art bokeh rendering quality, but also pushes the boundary of the depth-from-defocus problem.</p><p dir="ltr">Our work in controllable 3D effect synthesis represents a pioneering effort in image editing, laying the groundwork for future lighting effect synthesis in various image editing applications. Moreover, the improvements to filtering-based bokeh rendering could significantly enhance commercial products, such as the portrait mode feature on smartphones.</p>
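The Pixel Height idea lends itself to a toy illustration (a sketch under assumed conventions, not the dissertation's actual pipeline): if each foreground pixel stores its height above the ground plane, a directional light displaces that pixel's hard-shadow footprint in proportion to its height. The function name and the offset-per-unit-height encoding of the light direction are illustrative assumptions.

```python
def cast_shadow(height_map, light_dx, light_dy):
    """Project each object pixel onto the ground plane.

    height_map: 2D list of per-pixel heights above the ground (0 = background).
    light_dx, light_dy: horizontal shadow offset per unit of height,
    encoding the light direction. Returns a same-sized boolean shadow mask.
    """
    rows, cols = len(height_map), len(height_map[0])
    shadow = [[False] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            h = height_map[y][x]
            if h > 0:
                # landing point of this pixel's shadow on the ground plane
                sx = round(x + h * light_dx)
                sy = round(y + h * light_dy)
                if 0 <= sx < cols and 0 <= sy < rows:
                    shadow[sy][sx] = True
    return shadow
```

A soft-shadow variant would average such projections over many jittered light directions; the dissertation's pipeline additionally handles general receivers and user control.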
34

Système d'imagerie hybride par codage de pupille / Hybrid imaging system with wavefront coding

Diaz, Frédéric 06 May 2011 (has links)
De nouveaux concepts d’imagerie permettent aux systèmes optiques d’être plus compacts et plus performants. Parmi ces nouvelles techniques, les systèmes d’imagerie hybrides par codage de pupille allient un système optique comprenant un masque de phase et un traitement numérique. La fonction de phase implantée sur le masque rend l’image insensible à un défaut du système optique, qui peut être une aberration ou de la défocalisation. Cet avantage est obtenu au prix d’une déformation connue de l’image qui est ensuite corrigée par un traitement numérique. L’étude des propriétés de ces systèmes a été effectuée en cherchant à augmenter la profondeur de champ d’un système d’imagerie. Un gain sur ce paramètre permet déjà d’envisager le relâchement de contraintes de conception optique telles que la courbure de champ, la défocalisation thermique, le chromatisme… Dans ces techniques d’imagerie, la prise en compte du bruit du capteur constitue l’un des paramètres critiques pour le choix et l’utilisation de méthodes de traitement d’image. Les travaux menés durant cette thèse ont permis de proposer une approche originale de conception conjointe de la fonction de phase du masque et de l’algorithme de restauration d’image. Celle-ci est basée sur un critère de rapport signal à bruit de l’image finale. Contrairement aux approches connues, ce critère montre qu’il n’est pas nécessaire d’obtenir une stricte invariance de la fonction de transfert du système optique. Les paramètres des fonctions de phase optimisés grâce à ce critère sont sensiblement différents de ceux usuellement proposés et conduisent à une amélioration significative de la qualité de l’image. Cette approche de conception optique a été validée expérimentalement sur une caméra thermique non refroidie. Un masque de phase binaire, mis en œuvre en association avec un traitement numérique temps réel implémenté sur une carte GPU, a permis d’augmenter la profondeur de champ de cette caméra d’un facteur 3.
Compte tenu du niveau de bruit important introduit par l’utilisation d’un capteur bolométrique, la bonne qualité des images obtenues après traitement démontre l’intérêt de l’approche de conception conjointe appliquée à l’imagerie hybride par codage de pupille. / New imaging techniques allow for better and smaller systems. Among these new techniques, hybrid imaging systems with wavefront coding combine an optical system containing a phase mask with a digital processing step. The phase function of the mask makes the system insensitive to a defect of the optical system, such as an aberration or a defocus. The price of this advantage is a known deformation of the image acquired by the sensor, which is then corrected digitally. The properties of these hybrid imaging systems were studied by seeking to increase the depth of field of an imaging system, which allows some design constraints to be relaxed, such as field curvature, thermal defocus and chromatism. In these imaging techniques, taking the sensor noise into account is one of the critical parameters when choosing the image processing method. The work performed during this thesis led to an original approach for the joint design of the phase function of the mask and of the image restoration algorithm. This approach is based on a signal-to-noise-ratio criterion for the final image. Unlike known approaches, this criterion shows that strict invariance of the transfer function of the optical system is not required. The parameters of the phase functions optimized with this criterion are noticeably different from those usually proposed and lead to a significant improvement in image quality. This joint design approach has been validated experimentally on an uncooled thermal camera. A binary phase mask combined with real-time processing implemented on a GPU increased the depth of field of this camera by a factor of 3.
Considering the high level of noise introduced by the use of a bolometric sensor, the good quality of the processed images demonstrates the interest of the joint design approach applied to hybrid imaging with wavefront coding.
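The role of a noise-aware restoration step can be hinted at with a minimal Wiener-style sketch (an illustration of the general wavefront-coding principle of "deform, then restore", not the thesis's actual joint-design criterion): per-frequency restoration gains are regularized by a noise-to-signal ratio, so frequencies where the coded optics transmit weakly are restored cautiously rather than amplified into noise.

```python
def wiener_gains(otf, nsr):
    """Per-frequency Wiener restoration gains for a known optical
    transfer function.

    otf: list of (possibly complex) transfer-function samples H(f).
    nsr: noise-to-signal power ratio; it damps frequencies where |H|
    is small instead of blindly inverting them.
    """
    return [H.conjugate() / (abs(H) ** 2 + nsr) for H in otf]
```

The effective end-to-end response is H(f) * G(f) = |H|^2 / (|H|^2 + nsr): close to 1 where the phase mask preserves contrast, rolling off where inversion would mostly amplify sensor noise. This is also why, as the thesis argues, strict invariance of the transfer function is not a prerequisite.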
35

Simulace vlastností objektivu / Simulation of Lens Features

Kučiš, Michal January 2012 (has links)
Computer vision algorithms typically process real-world image data acquired by cameras or video cameras. Such image data suffer from imperfections caused by the acquisition process. This paper focuses on simulating the acquisition process in order to enable the rendering of images based on a generated 3D model. Imperfections such as geometric distortion, chromatic aberration, depth-of-field effects, motion blur, vignetting and lens flare are considered.
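One of the listed imperfections, geometric distortion, can be sketched with the standard Brown radial model (a generic formulation, not necessarily the exact model used in this work); chromatic aberration can then be approximated by applying slightly different coefficients per color channel.

```python
def distort_point(x, y, k1, k2=0.0):
    """Radially distort a point given in normalized image coordinates
    (origin at the optical axis).

    k1 < 0 gives barrel distortion, k1 > 0 pincushion; k2 is the
    next-order term of the Brown model.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Vignetting can be layered on top in the same spirit, e.g. by attenuating intensity with a cos^4 falloff of the same radius r.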
36

Real-time Depth of Field with Realistic Bokeh : with a Focus on Computer Games / Realtids Skärpedjup med Realistisk Bokeh : med ett Fokus på Datorspel

Christoffersson, Anton January 2020 (has links)
Depth of field is a naturally occurring effect in lenses describing the distance between the closest and furthest objects that appear in focus. The effect is commonly used in film and photography to direct a viewer's focus, give a scene more complexity, or improve aesthetics. In computer graphics, the same effect is possible, but since there are no natural occurrences of lenses in the virtual world, other ways are needed to achieve it. There are many different approaches to simulating depth of field, but not all are suited for real-time use in computer games. In this thesis, multiple methods are explored and compared to achieve depth of field in real time, with a focus on computer games. The aspect of bokeh is also crucial when considering depth of field, so a method to simulate a bokeh effect similar to reality is also explored. Three different methods based on the same approach were implemented to research this subject, and their time and memory complexity were measured. A questionnaire was performed to measure the quality of the different methods. The result is three similar methods with noticeable differences in both quality and performance. The results give the reader an overview of the different methods and directions for implementing them, based on which requirements suit them.
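A building block shared by most real-time depth-of-field approaches is the per-pixel circle of confusion from the thin-lens model. This sketch (standard optics with illustrative parameter names, not tied to the thesis's specific methods) computes the blur diameter that a depth-of-field filter would use as its kernel size.

```python
def circle_of_confusion(depth, focus_dist, focal_len, aperture_diam):
    """Thin-lens circle-of-confusion diameter for an object at `depth`.

    All arguments share the same unit (e.g. meters). The result is 0 at
    the focus distance and grows as the object moves away from it; a
    real-time filter maps it to a blur-kernel radius in pixels.
    """
    return aperture_diam * (abs(depth - focus_dist) / depth) * (
        focal_len / (focus_dist - focal_len))
```

Bokeh shape then comes from how that kernel is built: a gather filter with a disc- or polygon-shaped kernel approximates the aperture's appearance in out-of-focus highlights.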
37

Zobrazení kulečníku pomocí distribuovaného sledování paprsku / Rendering Billiard Balls Using Distributed Ray Tracing

Krivda, Marian January 2009 (has links)
This thesis is concerned with a method of realistic rendering using distributed ray tracing. This method simulates various visual effects and generates highly realistic 2D images. The work analyses the problem and explains the principles of the solutions related to this technique. There is also a description of the simple ray tracing method, which provides the basis for distributed ray tracing. Part of the work is devoted to the optimization of distributed ray tracing.
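The core mechanism of distributed ray tracing, averaging many jittered samples per pixel, can be sketched for one of its effects: soft shadows from a disc-shaped area light. This is a self-contained toy, with an assumed `occluded` predicate standing in for the scene's ray-intersection test.

```python
import math
import random

def soft_shadow_fraction(occluded, light_center, light_radius, n=2000, seed=1):
    """Estimate the visible fraction of a disc area light from a surface
    point by averaging jittered shadow rays (distributed ray tracing).

    occluded(p): returns True when the shadow ray toward point p is blocked.
    """
    rng = random.Random(seed)
    visible = 0
    for _ in range(n):
        # jitter a sample uniformly over the disc (sqrt for uniform area)
        ang = rng.uniform(0.0, 2.0 * math.pi)
        rad = light_radius * math.sqrt(rng.random())
        p = (light_center[0] + rad * math.cos(ang),
             light_center[1] + rad * math.sin(ang))
        if not occluded(p):
            visible += 1
    return visible / n
```

The same sample-and-average scheme, applied to lens positions, reflection directions or time, yields depth of field, glossy reflections and motion blur respectively.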
38

Development of a Z-Stack Projection Imaging Protocol for a Nerve Allograft

Selvam, Selvaanish 31 August 2018 (has links)
No description available.
39

Reconstruction 3-D de surfaces à partir de séquences d'images 2-D acquises par sectionnement optique - Application à l'endothélium cornéen humain ex-vivo observé en microscopie optique conventionnelle / 3-D reconstruction of surfaces from sequences of 2-D images acquired by optical sectioning - Application to the human ex-vivo corneal endothelium observed by conventional optical microscopy

Farnandes, Mathieu 01 February 2011 (has links)
Dans le circuit de la greffe de cornée, l'endothélium de chaque greffon est observé en microscopie optique conventionnelle afin de vérifier que sa densité cellulaire est suffisante pour maintenir une bonne transparence après l'opération. Les greffons étant conservés dans un milieu spécifique, ils sont imprégnés de liquide et présentent donc des plis qui perturbent l'observation et le comptage des cellules. Ce problème pratique est à l'origine d’une étude théorique sur les concepts de profondeur de champ étendue et de shape-from-focus. A partir d'une séquence d'images acquise par sectionnement optique, les informations les plus nettes permettent d'une part d'accéder à la topographie de la surface observée et d'autre part de restaurer l'image de sa texture. Une reconstruction surfacique 3-D est alors obtenue en projetant la texture sur la topographie. Cette thèse considère essentiellement l’étape fondamentale de mesure de netteté du processus de reconstruction. Des nouvelles mesures génériques offrant une haute sensibilité à la netteté sont introduites. De par une stratégie 3-D originale au travers de la séquence d'images, une autre mesure très robuste au bruit est proposée. Toutes ces mesures sont testées sur des données simulées puis diverses acquisitions réelles en microscopie optique conventionnelle et comparées aux méthodes de la littérature. Par ailleurs, la mesure 3-D améliore nettement les reconstructions d'endothéliums cornéens à partir de leurs acquisitions particulièrement perturbées (inversions de contraste). Un processus itératif complet de reconstruction 3-D d’endothéliums cornéens est finalement décrit, aboutissant à des résultats solides et exploitables. / In the cornea transplant process, each graft endothelium is observed by conventional optical microscopy to check that its cell density is sufficient to maintain a proper transparency after the transplantation. 
Since the grafts are stored in a specific preservation medium, they are impregnated with fluid and therefore exhibit folds which make cell observation and counting difficult. This practical issue led to a theoretical study of two related concepts: extended depth of field and shape-from-focus. From a sequence of images acquired by optical sectioning, the in-focus information allows, on the one hand, the topography of the observed surface to be recovered and, on the other hand, the image of its texture to be restored. A 3-D surface reconstruction is then obtained by mapping the texture onto the topography. This thesis essentially considers the fundamental step of the reconstruction process, namely the focus measurement. New generic focus measurements exhibiting high sharpness sensitivity are introduced. Another measurement offering high noise robustness is proposed, thanks to an original 3-D strategy through the image sequence, unlike traditional methods that operate in 2-D. All of them are tested on simulated data and various real acquisitions, and compared to state-of-the-art methods. Furthermore, the 3-D focus measurement clearly improves the 3-D surface reconstructions of corneal endothelia from their particularly disturbed acquisitions (contrast reversals). A complete iterative process for the 3-D reconstruction of corneal endothelial surfaces is finally described, producing solid results that can already be transferred to cornea banks.
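The focus-measurement step at the heart of shape-from-focus can be illustrated with the simplest classical sharpness score, a sum of absolute Laplacian responses, applied slice by slice to a z-stack (a generic textbook baseline, not one of the thesis's new measurements, and per-image rather than per-pixel for brevity):

```python
def focus_measure(img):
    """Sum of absolute 4-neighbour Laplacian responses over the interior
    of a 2D list of gray levels: higher means sharper."""
    rows, cols = len(img), len(img[0])
    s = 0.0
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            s += abs(4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                     - img[y][x - 1] - img[y][x + 1])
    return s

def depth_from_focus(stack):
    """Index of the sharpest slice in a z-stack of images."""
    scores = [focus_measure(im) for im in stack]
    return scores.index(max(scores))
```

Real shape-from-focus applies such a measure in a window around every pixel and takes the per-pixel argmax over the stack, which yields the surface topography; the thesis's contribution lies precisely in making that measure sharper and more noise-robust.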
40

An empirically derived system for high-speed rendering

Rautenbach, Helperus Ritzema 25 September 2012 (has links)
This thesis focuses on 3D computer graphics and the continuous maximisation of rendering quality and performance. Its main focus is the critical analysis of numerous real-time rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shader-based special effects, lighting effects, shadows, reflection and refraction, post-processing effects and the processing of physics. This critical analysis allows us to assess the relationship between rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Using this performance data, gathered during the analysis of various rendering algorithms, we are able to define a selection engine to control the real-time cycling of rendering algorithms and special effects groupings based on environmental conditions. Furthermore, as a proof of concept, to balance Central Processing Unit (CPU) and Graphics Processing Unit (GPU) load for an increased speed of execution, our selection system unifies the GPU and CPU as a single computational unit for physics processing and environmental mapping. This parallel computing system enables the CPU to process cube mapping computations while the GPU can be tasked with calculations traditionally handled solely by the CPU. All analysed and benchmarked algorithms were implemented as part of a modular rendering engine. This engine offers conventional first-person perspective input control, mesh loading and support for shader model 4.0 shaders (via Microsoft’s High Level Shader Language) for effects such as high dynamic range rendering (HDR), dynamic ambient lighting, volumetric fog, specular reflections, reflective and refractive water, realistic physics, particle effects, etc. The test engine also supports the dynamic placement, movement and elimination of light sources, meshes and spatial geometry.
Critical analysis was performed via scripted camera movement and object and light source additions, done not only to ensure consistent testing, but also to ease future validation and replication of results. This provided us with a scalable interactive testing environment as well as a complete solution for the rendering of computationally intensive 3D environments. As a full-fledged game engine, our rendering engine is amenable to first- and third-person shooter games, role-playing games and 3D immersive environments. Evaluation criteria (identified to assess the relationship between rendering quality and performance), as mentioned, allow us to effectively cycle algorithms based on empirical results and to distribute specific processing (cube mapping and physics processing) between the CPU and GPU, a unification that ensures the following: nearby effects are always of high quality (where computational resources are available), distant effects are, under certain conditions, rendered at a lower quality, and the frames-per-second rendering performance is always maximised. The implication of our work is clear: unifying the CPU and GPU and dynamically cycling through the most appropriate algorithms based on ever-changing environmental conditions allow for maximised rendering quality and performance and show that it is possible to render high-quality visual effects with realism, without overburdening scarce computational resources. Immersive rendering approaches used in conjunction with AI subsystems, game networking and logic, physics processing and other special effects (such as post-processing shader effects) are immensely processor intensive and can only be successfully implemented on high-end hardware. Only by cycling and distributing algorithms based on environmental conditions and through the exploitation of algorithmic strengths can high-quality real-time special effects and highly accurate calculations become as common as texture mapping.
Furthermore, in a gaming context, players often spend an inordinate amount of time fine-tuning their graphics settings to achieve the perfect balance between rendering quality and frames-per-second performance. Using this system, however, ensures that the performance vs. quality trade-off is always optimised, not only for the game as a whole but also for the current scene being rendered; some scenes might, for example, require more computational power than others, resulting in noticeable slowdowns that are avoided thanks to our system's dynamic cycling of rendering algorithms and its proof-of-concept unification of the CPU and GPU. / Thesis (PhD)--University of Pretoria, 2012. / Computer Science / unrestricted
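The selection engine's cycling behavior can be sketched as a small controller (an illustrative reconstruction from the description above, with assumed tier names, budget and thresholds): quality tiers are demoted when the measured frame time exceeds the frame budget and promoted again only once rendering is comfortably fast, with a hysteresis band to prevent oscillation between tiers.

```python
class AlgorithmCycler:
    """Toy selection engine: steps between rendering-quality tiers based
    on the measured frame time, with hysteresis to avoid oscillation."""

    def __init__(self, tiers, budget_ms=16.7, slack=0.8):
        self.tiers = tiers      # ordered richest -> cheapest
        self.i = 0              # current tier index
        self.budget = budget_ms
        self.slack = slack      # promote only when well under budget

    def update(self, frame_ms):
        if frame_ms > self.budget and self.i < len(self.tiers) - 1:
            self.i += 1         # over budget: fall back to a cheaper tier
        elif frame_ms < self.budget * self.slack and self.i > 0:
            self.i -= 1         # comfortably fast: restore a richer tier
        return self.tiers[self.i]
```

A full engine would also factor in scene conditions (effect distance, available GPU headroom) and swap individual effect algorithms rather than whole tiers, as the thesis describes.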
