71

Le flou dans le cinéma des années 1990-2010 : une image-symptôme / Blur in the 1990s to 2010s cinema : a symptom-image

Vally, Hélène 10 November 2015 (has links)
Since the 1990s, blur has invaded screens across all audiovisual production: it appears not only in arthouse films but also in blockbusters, TV series, and commercials. Its recurrent use as a narrative, or even merely "cosmetic", component of feature films has led some directors to break away from this standardized usage and to fully develop its threatening potential, which we call here symptomal. The term "symptom", following Georges Didi-Huberman's work, refers to a counter-regime of the image that upheaves the figurative regime. Favoring an aesthetic, phenomenological, and semiological approach, this thesis reveals the symptomal power of blur, namely its figural, haptic, and formless potential. According to the theory Georges Didi-Huberman elaborated from Sigmund Freud's writings, the term "symptom" also resonates with the movement of time, that is, with the work performed by hauntings and survivances within images to undermine and shake the figurative regime. The blur-symptom is thus also envisioned through the lens of time, a time that destabilizes representation and narration. Our focus is on different types of blur: blur that envelops the image completely or partially (shallow depth of field), creeps gradually into it (focus pulls), or results from fast cutting (flicker) and from work on the image's material (image definition, atmospheric blur). In this study, we analyze these different forms of blur to show that when certain directors set about undoing the initial fiction, it is the better to sketch new narrative lines of flight, new lives.
72

Contributions to image restoration : from numerical optimization strategies to blind deconvolution and shift-variant deblurring / Contributions pour la restauration d'images : des stratégies d'optimisation numérique à la déconvolution aveugle et à la correction de flous spatialement variables

Mourya, Rahul Kumar 01 February 2016 (has links)
Degradation of images during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be avoided or corrected to a significant extent; however, the quality of acquired images is still not adequate for many applications, which calls for more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each including a detailed discussion of a different aspect of image restoration. It starts with a generic overview of imaging systems and points out the degradations that can occur in images, together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view and can then simply be modeled as a convolution. In many practical cases, however, the blur varies across the field of view, and modeling it requires a compromise between accuracy and computational effort. The first part of the thesis presents a detailed discussion of the modeling of shift-variant blur and of fast approximations for simulating it, and then describes a generic image formation model. The thesis then shows how an image restoration problem can be cast as a Bayesian inference problem and, in turn, as a large-scale numerical optimization problem. The second part therefore considers generic large-scale optimization problems of the kind encountered in many application domains, and proposes a new class of optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as the state of the art (verified by several numerical experiments) while removing the burden of tuning algorithm-specific parameters, which is a great relief for users. The third part presents an in-depth discussion of the shift-invariant blind deconvolution problem (joint estimation of an invariant blur and a sharper image), suggesting different ways to reduce its ill-posedness, and then proposes a blind deconvolution method for the restoration of astronomical images, based on a decomposition of the image into point sources and extended sources, alternating between image restoration and blur estimation steps. The restoration results on synthetic astronomical scenes are promising, suggesting that the method is a good starting point for astronomical applications after certain modifications and improvements. The last part extends the shift-variant blur model of the first part toward practical use: it gives a detailed description of a flexible approximation of shift-variant blur, with its implementation aspects and computational cost, presents a shift-variant deblurring method illustrated on synthetically blurred images, and shows how the characteristics of shift-variant blur due to optical aberrations can be exploited for PSF estimation. A PSF calibration method is described for a simple experimental camera suffering from optical aberration, demonstrating that smoothness and invariance constraints can be imposed during blur estimation; inverting the estimated blur then significantly improves image quality. These two steps, blur estimation and restoration, are the building blocks needed to achieve, in the future, shift-variant blind deblurring (that is, without prior calibration), the long-term goal of this thesis. The thesis ends with conclusions and perspectives for future work.
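
To make the stationary versus shift-variant distinction above concrete, here is a minimal NumPy/SciPy sketch, not the thesis's code: stationary blur as a single convolution, and a common fast approximation of shift-variant blur as a pixel-weighted sum of convolutions with a few local PSFs. The PSFs and interpolation weights are illustrative placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def stationary_blur(image, psf):
    """Shift-invariant blur: one convolution covers the whole field of view."""
    return fftconvolve(image, psf, mode="same")

def shift_variant_blur(image, psfs, weights):
    """Approximate shift-variant blur as sum_k w_k(x) * (image * psf_k).

    psfs    -- sequence of K local PSFs sampled across the field of view
    weights -- array (K, H, W) of per-pixel interpolation weights summing to 1
    """
    out = np.zeros_like(image)
    for psf, w in zip(psfs, weights):
        out += w * fftconvolve(image, psf, mode="same")
    return out
```

The weighted-sum form is appealing because its cost grows with the number of local PSFs K rather than requiring one convolution per pixel.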
73

Detekce a hodnocení zkreslených snímků v obrazových sekvencích / Detection and evaluation of distorted frames in retinal image data

Vašíčková, Zuzana January 2020 (has links)
The master's thesis deals with the detection and evaluation of distorted frames in retinal image data. The theoretical part contains a brief summary of the anatomy of the eye and of image quality assessment methods, both in general and specifically for retinal images. The practical part was implemented in Python. It includes preprocessing of the available retinal images to create a suitable dataset. A method is then proposed for evaluating three types of noise in distorted retinal images, specifically using the Inception-ResNet-v2 model. As this method proved unsatisfactory, a different method was proposed, consisting of two steps: classification of the noise type, followed by evaluation of the level of that noise. The filtered Fourier spectrum was used for noise-type classification, and features extracted with ResNet50, fed into a regression model, were used for image evaluation. This method was further extended with a step detecting noisy frames in retinal sequences.
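
A toy, self-contained sketch of the two-step idea (noise-type classification from the Fourier spectrum, then per-type noise-level regression) follows. The random data, the logistic-regression classifier, and the ridge regressors are stand-ins; the thesis uses a filtered spectrum and ResNet50 features rather than raw spectra.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def log_spectrum(image):
    """Centered log-magnitude Fourier spectrum (classification input)."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))

# Toy stand-ins: 60 random 32x32 "frames", 3 noise types, known noise levels.
rng = np.random.default_rng(0)
frames = rng.normal(size=(60, 32, 32))
noise_types = np.repeat(np.arange(3), 20)
noise_levels = rng.uniform(0.0, 1.0, size=60)

X = np.array([log_spectrum(f).ravel() for f in frames])

# Step 1: classify the noise type from the spectrum.
clf = LogisticRegression(max_iter=1000).fit(X, noise_types)

# Step 2: one regressor per noise type predicts the noise level
# (the thesis feeds ResNet50 features to the regressor instead of spectra).
regs = {t: Ridge().fit(X[noise_types == t], noise_levels[noise_types == t])
        for t in range(3)}

t_hat = clf.predict(X[:1])[0]
level_hat = regs[t_hat].predict(X[:1])[0]
```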
74

Trasování významných bodů ve videosekvenci nestacionární kamery / Interest Points Tracking in Video Sequence of Non-stationary Camera

Studený, Pavel January 2016 (has links)
The thesis deals with tracking feature points obtained from video sequences shot with a hand-held camera. The work focuses on the case of a moving camera and a static background, and on the events that can occur in this case. The movement of the camera, characterized by its direction and speed, is studied. The aim of this work is the selection and subsequent implementation of three fundamentally different methods suitable for tracking feature points with a moving camera, and their comparison according to set criteria. Based on this comparison, the algorithm best able to track these points under predefined conditions is chosen.
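
As an illustration of the kind of tracker such a comparison would cover, here is a minimal OpenCV sketch of pyramidal Lucas-Kanade tracking on Shi-Tomasi corners; the video filename is hypothetical, and the thesis's three chosen methods are not specified here.

```python
import cv2

cap = cv2.VideoCapture("handheld.mp4")            # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track last frame's points into the new frame; status marks lost points.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
cap.release()
```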
75

Slepá dekonvoluce obrazů kalibračních vzorků z elektronového mikroskopu / Blind Image Deconvolution of Electron Microscopy Images

Schlorová, Hana January 2017 (has links)
In recent years, blind deconvolution methods have spread into a wide range of technical and scientific fields, especially now that they are no longer computationally limited. Signal processing techniques based on blind deconvolution promise improvements in the quality of results obtained by electron microscope imaging. The main task of this thesis is to formulate the blind deconvolution problem for electron microscope images, to find a suitable solution, and then to implement it and compare it with the available function of the Matlab Image Processing Toolbox. The overall goal is thus to create, in the Matlab environment, an algorithm correcting defects arising in the imaging process. The proposed approach is based on regularization techniques for blind deconvolution.
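
For illustration, here is a minimal Python sketch of the textbook alternating (blind) Richardson-Lucy scheme, the family of methods that Matlab's deconvblind also builds on. This is not the thesis's regularized algorithm; the initialization, iteration counts, and center-crop alignment of the PSF update are illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def crop_center(a, shape):
    """Crop the central `shape` window out of array `a`."""
    r0 = (a.shape[0] - shape[0]) // 2
    c0 = (a.shape[1] - shape[1]) // 2
    return a[r0:r0 + shape[0], c0:c0 + shape[1]]

def blind_richardson_lucy(y, psf_init, n_outer=10, n_inner=5, eps=1e-12):
    """Alternate multiplicative RL updates of the image x and the PSF h."""
    y = y.astype(float)
    x = np.full_like(y, y.mean())            # flat initial image estimate
    h = psf_init / psf_init.sum()
    for _ in range(n_outer):
        for _ in range(n_inner):             # image step, PSF held fixed
            ratio = y / (fftconvolve(x, h, mode="same") + eps)
            x *= fftconvolve(ratio, h[::-1, ::-1], mode="same")
        for _ in range(n_inner):             # PSF step, image held fixed
            ratio = y / (fftconvolve(x, h, mode="same") + eps)
            h *= crop_center(fftconvolve(ratio, x[::-1, ::-1], mode="same"),
                             h.shape)
            h /= h.sum() + eps               # keep the PSF normalized
    return x, h
```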
76

Simulation of Optical Aberrations for Comet Interceptor’s OPIC Instrument

Bührer, Maximilian January 2020 (has links)
In space exploration, optical imaging is one of the key measurements conducted, with the vast majority of missions relying heavily on optical data acquisition to examine alien worlds. One such endeavor is ESA's F-class mission Comet Interceptor, a multi-element spacecraft expected to be launched in 2028. It consists of a primary platform and two sub-spacecraft, one of which carries the Optical Periscopic Imager for Comets (OPIC). An accurate prediction of the generated imagery is of undeniable importance, as mission planning and instrument design strongly depend on the real-world output quality of the camera system. In the case of OPIC, the collected image data will be used to reconstruct three-dimensional models of the targeted celestial bodies. Furthermore, the sub-spacecraft faces a risk of high-velocity dust impacts, so only a limited number of data samples can be broadcast back to the primary spacecraft before collision. Testing image prioritization algorithms and reconstruction methods prior to mission start therefore requires accurate computer-generated images. Camera sensors and lens systems are subject to various optical distortions and aberrations that degrade the final image. Popular render engines model those effects only to a limited degree and as a result produce content that looks too perfect. While more sophisticated software products exist, they often come with compatibility limitations and other drawbacks. This report discusses the most important optical aberrations, as well as their relevance for optical instruments in space applications, with a particular focus on the Comet Interceptor mission. The main part of this work, however, is the implementation of a dedicated software tool that simulates a variety of optical aberrations, complementing the basic camera model of the Blender render engine. While its functionality is mostly demonstrated for OPIC, the software is designed with a broad range of usage scenarios in mind.
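
As a flavor of what such post-processing looks like outside a render engine, here is a toy NumPy/SciPy sketch applying two simple aberration models, defocus as a uniform disk PSF and quadratic vignetting, to an ideal grayscale rendering. The actual tool implements a broader set of effects inside Blender; this standalone version is only illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def disk_psf(radius):
    """Uniform disk PSF approximating defocus from a circular aperture."""
    r = int(np.ceil(radius))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    psf = (xx**2 + yy**2 <= radius**2).astype(float)
    return psf / psf.sum()

def degrade(image, defocus_radius=3.0, vignette_strength=0.5):
    """Apply defocus blur, then radial vignetting, to a grayscale image."""
    h, w = image.shape
    blurred = fftconvolve(image, disk_psf(defocus_radius), mode="same")
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
          / ((h / 2) ** 2 + (w / 2) ** 2))       # 0 at center, ~1 at corners
    return blurred * (1.0 - vignette_strength * r2)
```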
77

Blur Image Processing

Zhang, Yi January 2015 (has links)
No description available.
78

Controllable 3D Effects Synthesis in Image Editing

Yichen Sheng (18184378) 15 April 2024 (has links)
<p dir="ltr">3D effect synthesis is crucial in image editing to enhance realism or visual appeal. Unlike classical graphics rendering, which relies on complete 3D geometries, 3D effect synthesis in im- age editing operates solely with 2D images as inputs. This shift presents significant challenges, primarily addressed by data-driven methods that learn to synthesize 3D effects in an end-to-end manner. However, these methods face limitations in the diversity of 3D effects they can produce and lack user control. For instance, existing shadow generation networks are restricted to produc- ing hard shadows without offering any user input for customization.</p><p dir="ltr">In this dissertation, we tackle the research question: <i>how can we synthesize controllable and realistic 3D effects in image editing when only 2D information is available? </i>Our investigation leads to four contributions. First, we introduce a neural network designed to create realistic soft shadows from an image cutout and a user-specified environmental light map. This approach is the first attempt in utilizing neural network for realistic soft shadow rendering in real-time. Second, we develop a novel 2.5D representation Pixel Height, tailored for the nuances of image editing. This representation not only forms the foundation of a new soft shadow rendering pipeline that provides intuitive user control, but also generalizes the soft shadow receivers to be general shadow receivers. Third, we present the mathematical relationship between the Pixel Height representation and 3D space. This connection facilitates the reconstruction of normals or depth from 2D scenes, broadening the scope for synthesizing comprehensive 3D lighting effects such as reflections and refractions. A 3D-aware buffer channels are also proposed to improve the synthesized soft shadow quality. Lastly, we introduce Dr.Bokeh, a differentiable bokeh renderer that extends traditional bokeh effect algorithms with better occlusion modeling to correct flaws existed in existing methods. With the more precise lens modeling, we show that Dr.Bokeh not only achieves the state-of-the-art bokeh rendering quality, but also pushes the boundary of depth-from-defocus problem.</p><p dir="ltr">Our work in controllable 3D effect synthesis represents a pioneering effort in image editing, laying the groundwork for future lighting effect synthesis in various image editing applications. Moreover, the improvements to filtering-based bokeh rendering could significantly enhance com- mercial products, such as the portrait mode feature on smartphones.</p>
79

Optimal edge filters explain human blur detection

McIlhagga, William H., May, K.A. January 2012 (has links)
Edges are important visual features, providing many cues to the three-dimensional structure of the world. One of these cues is edge blur. Sharp edges tend to be caused by object boundaries, while blurred edges indicate shadows, surface curvature, or defocus due to relative depth. Edge blur also drives accommodation and may be implicated in the correct development of the eye's optical power. Here we use classification image techniques to reveal the mechanisms underlying blur detection in human vision. Observers were shown a sharp and a blurred edge in white noise and had to identify the blurred edge. The resultant smoothed classification image derived from these experiments was similar to a derivative-of-a-Gaussian filter. We also fitted a number of edge detection models (MIRAGE, N1, and N3+) and the ideal observer to observer responses, but none performed as well as the classification image. However, observer responses were well fitted by a recently developed optimal edge detector model, coupled with a Bayesian prior on the expected blurs in the stimulus. This model outperformed the classification image when performance was measured by the Akaike Information Criterion. This result strongly suggests that humans use optimal edge detection filters to detect edges and encode their blur.
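
The following small NumPy sketch illustrates the underlying signal-processing point, using illustrative scales rather than the paper's fitted parameters: a derivative-of-a-Gaussian filter responds more weakly at an edge as the edge's blur increases, which is the kind of cue such a filter-based observer could exploit.

```python
import numpy as np
from scipy.special import erf

def dog_filter(x, sigma):
    """Derivative-of-a-Gaussian filter (the shape the classification image took)."""
    g = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -x / sigma**2 * g

pos = np.arange(-50.0, 51.0)                       # pixel positions across stimulus
sharp = (pos >= 0).astype(float)                   # step edge at position 0
blurred = 0.5 * (1 + erf(pos / (4 * np.sqrt(2))))  # same edge, Gaussian blur sigma=4

f = dog_filter(np.arange(-32.0, 33.0), sigma=8.0)
center = len(pos) // 2                             # response at the edge location
resp_sharp = abs(np.convolve(sharp, f, mode="same")[center])
resp_blur = abs(np.convolve(blurred, f, mode="same")[center])
assert resp_blur < resp_sharp   # filter response falls as edge blur grows
```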
80

The effects of monocular refractive blur on gait parameters when negotiating a raised surface

Vale, Anna, Scally, Andy J., Buckley, John, Elliott, David January 2008 (has links)
Falls in the elderly are a major cause of mortality and morbidity. Elderly people with visual impairment have been found to be at increased risk of falling, with poor visual acuity in one eye causing greater risk than poor binocular visual acuity. The present study investigated whether monocular refractive blur, at a level typically used for monovision correction, would significantly reduce stereoacuity and consequently affect gait parameters when negotiating a raised surface. Fourteen healthy subjects (25.8 +/- 5.6 years) walked up to and onto a raised surface under four visual conditions: binocular, +2DS blur over their non-dominant eye, +2DS blur over their dominant eye, and with their dominant eye occluded. Analysis focussed on foot positioning and toe clearance parameters. Monocular blur had no effect on binocular acuity, but caused a small decline in binocular contrast sensitivity and a large decline in stereoacuity (p < 0.01). Vertical toe clearance increased under monocular blur or occlusion (p < 0.01), with a significantly greater increase under blur of the dominant eye compared with blur of the non-dominant eye (p < 0.01). The increase in toe clearance was facilitated by increasing maximum toe elevation (p < 0.01). Findings indicate that monocular blur at a level typically used for monovision correction significantly reduced stereoacuity and consequently the ability to accurately perceive the height and position of a raised surface placed within the travel path. These findings may help explain why elderly individuals with poor visual acuity in one eye have been found to have an increased risk of falling.
