61

A novel 3D recovery method by dynamic (de)focused projection / Nouvelle méthode de reconstruction 3D par projection dynamique (dé)focalisée

Lertrusdachakul, Intuoun 30 November 2011 (has links)
This paper presents a novel 3D recovery method based on structured light. The method unifies depth from focus (DFF) and depth from defocus (DFD) techniques through a dynamic (de)focused projection. With this approach, the image acquisition system is constructed so that the whole object stays sharp in all of the captured images; only the projected patterns experience different defocused deformations according to the object's depth. When the projected patterns are out of focus, their Point Spread Function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed from the relationship between the PSFs at different blur levels and the variation of the object's depth. Our new depth estimation can be employed as a stand-alone strategy: it does not suffer from occlusion or correspondence problems, and it handles textureless and partially reflective surfaces. Experimental results on real objects demonstrate the effectiveness of our approach, providing reliable depth estimation with competitive computation time. It uses fewer input images than DFF and, unlike DFD, it ensures that the PSF is locally unique.
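The depth computation described above can be illustrated with a small sketch. The Python fragment below is not the author's implementation; assuming a sinusoidal projected pattern and a hypothetical thin-lens-style calibration sigma = k * |1/d - 1/d_focus| (the parameters k and d_focus are illustrative), it shows how a Gaussian blur level can be read off the pattern's contrast loss and mapped to a depth value.

```python
import numpy as np

def estimate_sigma(contrast, contrast_focused, f):
    """Blur of a sinusoidal projected pattern of spatial frequency f (cycles/px).
    A Gaussian PSF of width sigma attenuates the pattern's amplitude by
    exp(-2 * pi**2 * sigma**2 * f**2), so sigma follows from the contrast ratio."""
    ratio = np.clip(contrast / contrast_focused, 1e-6, 1.0)
    return np.sqrt(-np.log(ratio) / (2.0 * np.pi**2 * f**2))

def sigma_to_depth(sigma, k, d_focus):
    """Invert the assumed calibration sigma = k * (1/d_focus - 1/d) for an
    object located beyond the projector's focus plane (illustrative only)."""
    return 1.0 / (1.0 / d_focus - sigma / k)

# Hypothetical numbers: measured contrast 0.35 vs. 0.60 in focus, f = 0.05 cycles/px.
sigma = estimate_sigma(0.35, 0.60, 0.05)
depth = sigma_to_depth(sigma, k=30.0, d_focus=0.8)  # metres, made-up calibration
```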
62

Automatic object detection and tracking for eye-tracking analysis

Cederin, Liv, Bremberg, Ulrika January 2023 (has links)
In recent years, eye-tracking technology has gained considerable attention, facilitating analysis of gaze behavior and human visual attention. However, eye-tracking analysis often requires manual annotation of the objects being gazed upon, making quantitative data analysis a difficult and time-consuming process. This thesis explores object detection and object tracking applied to scene camera footage from mobile eye-tracking glasses. We have evaluated the performance of state-of-the-art object detectors and trackers, resulting in an automated pipeline specialized in detecting and tracking objects in scene videos. Motion blur constitutes a significant challenge in footage from moving cameras, complicating tasks such as object detection and tracking. To address this, we explored two approaches: the first involved retraining object detection models on datasets augmented with motion-blurred images, while the second involved preprocessing the video frames with deblurring techniques. The findings of our research contribute insights into efficient approaches for detecting and tracking objects in scene camera footage from eye-tracking glasses. Of the technologies we tested, motion deblurring using DeblurGAN-v2, together with a DINO object detector combined with the StrongSORT tracker, achieved the highest accuracies. Furthermore, we present an annotated dataset consisting of frames from recordings with eye-tracking glasses, which can be used for evaluating object detection and tracking performance.
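As a rough illustration of the pipeline described above, the sketch below chains the three stages: deblur a frame, detect objects, then update the tracker. The deblur, detect, and track callables are placeholders; the real DeblurGAN-v2, DINO, and StrongSORT implementations have their own (different) APIs.

```python
from typing import Callable, Iterable, List

def process_scene_video(frames: Iterable,
                        deblur: Callable,
                        detect: Callable,
                        track: Callable) -> List:
    """Run a deblur -> detect -> track pipeline over scene-camera frames.
    Each stage is a stand-in for the tools named in the abstract."""
    results = []
    for frame in frames:
        sharp = deblur(frame)                      # motion deblurring step
        detections = detect(sharp)                 # boxes, scores, class labels
        results.append(track(sharp, detections))   # ID-consistent object tracks
    return results
```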
63

Saccadic eye movement measurements in the normal eye: Investigating the clinical value of a non-invasive eye movement monitoring apparatus.

Kavasakali, Maria January 2005 (has links)
Clinicians are becoming increasingly aware of the effect of various pathologies on the characteristics of saccadic eye movements. As such, an efficient and non-invasive means of measuring eye movements in a clinical environment is of interest to many. The aim of this thesis is to investigate the clinical application of a non-invasive eye movement recording technique as a part of a clinical examination. Eye movements were measured using an IRIS 6500 infrared limbal eye tracker, which we customized for the direct recording of oblique eye movements as well as horizontal and vertical. Firstly, the eye-tracker itself was assessed. Visually normal observers made saccadic eye movements to a 10' stimulus in eight directions of gaze. Primary (ANOVA) and secondary analyses (mean error less than 5%) resulted in acceptance that averaging four measurements would give a representative measurement of saccadic latency, peak velocity, amplitude and duration. Test-retest results indicated that this technique gives statistically repeatable responses (limits of agreement: ±1.96 × SD of the test-retest differences). Several factors that could potentially influence clinically based measures of eye movements were examined. These included the effect of ageing, viewing distance, dioptric blur and cataract. The results showed that saccadic latency and duration are significantly (p < 0.05) longer in older (60-89 years) observers compared to younger (20-39 years) observers. Peak velocity and amplitude were not significantly affected by the age of the observer. All saccadic parameters (SP) were significantly affected by direction (Chapter 5). A compact eye movement set-up is therefore feasible, since there is no significant effect of viewing distance (300 cm vs. 49 cm) (Chapter 6). There is also no significant effect of dioptric blur (up to +1.00DS) on any of the four SP. In contrast, a higher level of defocus (+3.00DS) has a larger probability of interfering with the measurements of peak velocity and duration (Chapter 7). Saccadic eye movements were also recorded whilst normally sighted subjects wore cataract simulation goggles. The results suggested that the presence of dense cataract introduces significant increases in saccadic latencies and durations. No effect was found on the peak velocities and amplitudes. The effect of amblyopia on SP was also investigated in order to examine whether this methodology is able to distinguish normal from abnormal responses (i.e. increased saccadic latencies). This set of data (Chapter 9) showed that, using the IRIS 6500, longer-than-normal latencies may be recorded from the amblyopic eye, but no consistent effect was found for the other SP (peak velocity, amplitude, duration). Overall, the results of this thesis demonstrate that the IRIS 6500 eye-tracker has many desirable elements (it is non-invasive, comfortable for the observers, and gives repeatable and precise results in an acceptable time) that would potentially make it a useful clinical tool as a part of a routine examination.
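The test-retest criterion quoted above (±1.96 × SD of the differences) is the standard limits-of-agreement calculation; the short sketch below shows that computation on made-up latency values, purely to make the criterion concrete.

```python
import numpy as np

def repeatability_limits(test, retest):
    """Bland-Altman-style limits of agreement for one saccadic parameter
    (e.g. latency in ms) measured twice in the same observers."""
    diff = np.asarray(test, float) - np.asarray(retest, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Made-up latencies (ms) for four observers:
bias, (lo, hi) = repeatability_limits([182, 175, 190, 201], [178, 180, 188, 197])
```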
64

Visual Performance in Pseudophakia. The Effect of Meridional Blur in Pseudoaccommodation.

Serra, Pedro M.F.N. January 2013 (has links)
The main aim of this thesis is to evaluate the effect of meridional blur, using refractively induced astigmatism, on visual performance at far and near distances. Visual performance was evaluated using letter discrimination tasks at distance and near (visual acuity, VA) and a reading task at near, in subjects with pharmacologically blocked accommodation (young) or absent accommodation (presbyopic and pseudophakic). The effect of astigmatism was tested using positive cylindrical lenses oriented at 180 and 90 degrees, simulating with-the-rule (WTR) and against-the-rule (ATR) astigmatism. Other refractive states were also evaluated, namely in-focus and spherical defocus. The visual performance data were correlated with biometric measurements (pupil size, anterior chamber depth (ACD), corneal and ocular aberrations, corneal multifocality, patient age, axial length). Further, the functionality of meridional blur was evaluated, using a VA task, for alphabets in addition to the standard Roman alphabet. The results confirm that myopic astigmatism contributes to better visual performance at closer distances, with ATR astigmatism providing higher performance for reading tasks compared to other forms of astigmatism. Anatomical factors such as pupil size, corneal multifocality and ACD were significantly correlated with visual performance, while other ocular characteristics were not. Ray-tracing modelling using wavefront data was a moderate predictor of VA and reading acuity. The results on the effect of meridional blur orientation on alphabets other than the Roman alphabet suggest that visual performance depends on the interaction between blur orientation and the letters' spatial characteristics. In conclusion, pseudoaccommodation is a multifactorial phenomenon, with pupil size being the major contributor to the improvement in visual performance. Against-the-rule astigmatism shows advantages over WTR astigmatism by providing higher reading performance; however, extending the present and previous findings to clinical application will require further investigation of the effect of meridional blur in common and socio-culturally adapted tasks. / Bradford School of Optometry and Vision Sciences
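Meridional blur of the kind studied here can be mimicked on a letter-chart image by blurring along a single meridian only. The sketch below does this with an anisotropic Gaussian; which meridian corresponds to WTR or ATR astigmatism depends on the cylinder axis and sign convention, so that mapping is deliberately left to the caller.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def meridional_blur(image: np.ndarray, sigma_px: float, meridian: str = "vertical"):
    """Crude simulation of cylindrical (meridional) defocus: blur the image
    along one meridian and leave the orthogonal meridian sharp."""
    if meridian == "vertical":
        sigma = (sigma_px, 0.0)   # blur in the vertical direction (image axis 0)
    else:
        sigma = (0.0, sigma_px)   # blur in the horizontal direction (image axis 1)
    return gaussian_filter(image.astype(float), sigma=sigma)
```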
65

On-board image quality assessment for a satellite

Marais, Izak van Zyl 03 1900 (has links)
Thesis (PhD (Electronic Engineering))--University of Stellenbosch, 2009. / The downloading of images is a bottleneck in the image acquisition chain for low-earth-orbit remote sensing satellites. An on-board image quality assessment system could optimise use of the available downlink time by prioritising images for download based on their quality. An image quality assessment system based on measuring image degradations is proposed, and algorithms for estimating those degradations are investigated. The degradation types considered are cloud cover, additive sensor noise and the defocus extent of the telescope. For cloud detection, the novel application of heteroscedastic discriminant analysis resulted in better performance than comparable dimension-reducing transforms from the remote sensing literature. A region growing method, which was previously used on board a micro-satellite for cloud cover estimation, is critically evaluated and compared to commonly used thresholding; the thresholding method is recommended. A remote sensing noise estimation algorithm is compared to a noise estimation algorithm based on image pyramids. The image pyramid algorithm is recommended and adapted, which results in smaller errors. A novel angular spectral smoothing method for increasing the robustness of spectral-based, direct defocus estimation is introduced, and three existing spectral-based defocus estimation methods are compared with the angular smoothing method. An image quality assessment model is developed that maps the three estimated degradation levels to one quality score. A subjective image quality evaluation experiment is conducted, during which more than 18 000 independent human judgements are collected. Two quality assessment models, based on neural networks and splines, are fitted to these data; the spline model is recommended. The integrated system is evaluated, and image quality predictions are shown to correlate well with human quality perception.
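The angular spectral smoothing idea mentioned above can be sketched as a radial averaging of the 2-D power spectrum: averaging over all orientations suppresses scene-dependent directional structure before a defocus level is read off the spectrum's decay. The fragment below shows only that averaging step, not the thesis's actual defocus estimator.

```python
import numpy as np

def radially_averaged_spectrum(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Average the 2-D power spectrum over angle into a 1-D radial profile."""
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    power = np.abs(f) ** 2
    h, w = power.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    bins = np.minimum((radius / radius.max() * n_bins).astype(int), n_bins - 1)
    totals = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return totals / np.maximum(counts, 1)
```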
66

Flou de mouvement réaliste en temps réel / Realistic real-time motion blur

Guertin-Renaud, Jean-Philippe 08 1900 (has links)
High-quality motion blur is an increasingly important effect in interactive graphics. With the continuous increase in asset quality and scene fidelity comes a similar desire for more detailed and realistic lenticular effects. However, even in the context of offline rendering, motion blur is often approximated as a post process. Recent motion blur post-processes have made great leaps in visual quality, generating plausible results with interactive performance. Still, distracting artifacts remain in the presence of, for instance, overlapping motion or large- and fine-scale motion features, as well as in the presence of both linear and rotational motion. Furthermore, large-scale motion often tends to cause obvious artifacts at object and image boundaries. This thesis addresses these artifacts with a more robust sampling and filtering scheme that samples along two directions, dynamically and automatically selected to produce the most accurate image possible. These modifications come at a very slight performance penalty compared to previous motion blur implementations: we render plausible, temporally coherent motion blur on several complex animation sequences, all in under 2 ms at a resolution of 1280 x 720. Moreover, our filter is designed to integrate seamlessly with post-process anti-aliasing.
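The basic idea of a motion blur post-process, gathering scene colours along a screen-space motion direction, can be sketched as follows. This is a deliberately naive single-direction gather with assumed array shapes; the filter described in the thesis additionally selects two sampling directions per tile and weights samples so that object and image boundaries stay artifact-free, none of which is reproduced here.

```python
import numpy as np

def naive_motion_blur(color: np.ndarray, velocity: np.ndarray, n_samples: int = 8):
    """Average n_samples colours gathered along each pixel's own velocity.
    color: (H, W, 3) float image; velocity: (H, W, 2) per-pixel offsets (dy, dx)."""
    h, w, _ = color.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    out = np.zeros_like(color)
    for t in np.linspace(-0.5, 0.5, n_samples):
        sy = np.clip(np.round(yy + t * velocity[..., 0]), 0, h - 1).astype(int)
        sx = np.clip(np.round(xx + t * velocity[..., 1]), 0, w - 1).astype(int)
        out += color[sy, sx]
    return out / n_samples
```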
67

Funnel Vision

Grainger, David 01 January 2008 (has links)
This paper discusses the videos and sculptural installation in my thesis exhibition. Shooting videos outside of the studio developed into a project overarching any individual video or its particular signs. Thus, this paper focuses on the video project, with examples that follow a timeline of development, rather than on the six videos actually on display in the exhibit. The two-part sculpture "Deer in the Headlights" was created in the context of these videos and coexists with them in a specific architectural space. This space, as well as the clichéd meaning of the deer's gaze, relates to the title of the show.
68

Untersuchung der Auflösungsgrenzen eines Variablen Formstrahlelektronenschreibers mit Hilfe chemisch verstärkter und nicht verstärkter Negativlacke / Investigation of the resolution limits of a variable shaped-beam electron-beam writer using chemically amplified and non-chemically amplified negative resists

Steidel, Katja 01 April 2011 (has links) (PDF)
Up to now, targets like high resolution and high throughput cannot be achieved at the same time in electron beam lithography; therefore, the exposure concepts Gaussian-Beam and Variable-Shaped-Beam (VSB) exist, which are optimized for high resolution and throughput, respectively. In this work, the experimental cross-comparison of both exposure concepts is presented using chemically amplified and non-chemically amplified resist systems. For quantification, the total blur parameter has been introduced, which results from the quadratic addition of the resolution-limiting error sources: Coulomb interactions (beam blur), the resist process (process blur) and the proximity effect (scatter blur). For the comparison, well-defined processes were developed on 300 mm wafers and fully characterized. A further basis is the adaptation or new development of special methods such as the determination of contrast and base dose, the doughnut test, the isofocal-dose method for line widths and line roughness, and the determination of the total blur under variation of the focus. It is demonstrated that the resolution of dense lines improves with a smaller total blur. The direct comparison of the total blur values of both exposure concepts is complicated by the variable beam blur of VSB writers. Since high resolution is not needed for the determination of the total blur, the test pattern is exposed with larger shots on the VSB writer, which induces a larger total blur. It is shown that the process blur makes up the largest fraction of the total blur. The scatter blur is irrelevant for resist thicknesses below 100 nm and acceleration voltages of 50 kV or higher.
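The "quadratic addition" of blur contributions mentioned in the abstract is the usual root-sum-of-squares combination of independent error sources; written out with generic symbols (the notation is illustrative, not taken from the thesis):

```latex
\sigma_{\text{total}} = \sqrt{\sigma_{\text{beam}}^{2} + \sigma_{\text{process}}^{2} + \sigma_{\text{scatter}}^{2}}
```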
69

Saccadic eye movement measurements in the normal eye : investigating the clinical value of a non-invasive eye movement monitoring apparatus

Kavasakali, Maria January 2005 (has links)
Clinicians are becoming increasingly aware of the effect of various pathologies on the characteristics of saccadic eye movements. As such, an efficient and non-invasive means of measuring eye movement in a clinical environment is of interest to many. The aim of this thesis is to investigate the clinical application of a non-invasive eye movement recording technique as a part of a clinical examination. Eye movements were measured using an IRIS 6500 infrared limbal eye tracker, which we customized for the direct recording of oblique eye movements as well as horizontal and vertical. Firstly, the eye-tracker itself was assessed. Visually normal observers made saccadic eye movements to a 10' stimulus in eight directions of gaze. Primary (ANOVA) and secondary analyses (mean error less than 5%) resulted in acceptance that averaging four measurements would give a representative measurement of saccadic latency, peak velocity, amplitude and duration. Test-retest results indicated that this technique gives statistically repeatable responses (limits of agreement: ±1.96 × SD of the test-retest differences). Several factors that could potentially influence clinically based measures of eye movements were examined. These included the effect of ageing, viewing distance, dioptric blur and cataract. The results showed that saccadic latency and duration are significantly (p < 0.05) longer in older (60-89 years) observers compared to younger (20-39 years) observers. Peak velocity and amplitude were not significantly affected by the age of the observer. All saccadic parameters (SP) were significantly affected by direction (Chapter 5). A compact eye movement set-up is therefore feasible, since there is no significant effect of viewing distance (300 cm vs. 49 cm) (Chapter 6). There is also no significant effect of dioptric blur (up to +1.00DS) on any of the four SP. In contrast, a higher level of defocus (+3.00DS) has a larger probability of interfering with the measurements of peak velocity and duration (Chapter 7). Saccadic eye movements were also recorded whilst normally sighted subjects wore cataract simulation goggles. The results suggested that the presence of dense cataract introduces significant increases in saccadic latencies and durations. No effect was found on the peak velocities and amplitudes. The effect of amblyopia on SP was also investigated in order to examine whether this methodology is able to distinguish normal from abnormal responses (i.e. increased saccadic latencies). This set of data (Chapter 9) showed that, using the IRIS 6500, longer-than-normal latencies may be recorded from the amblyopic eye, but no consistent effect was found for the other SP (peak velocity, amplitude, duration). Overall, the results of this thesis demonstrate that the IRIS 6500 eye-tracker has many desirable elements (it is non-invasive, comfortable for the observers, and gives repeatable and precise results in an acceptable time) that would potentially make it a useful clinical tool as a part of a routine examination.
70

Mathematical theory of the Flutter Shutter : its paradoxes and their solution

Tendero, Yohann 22 June 2012 (has links) (PDF)
This thesis provides theoretical and practical solutions to two problems raised by digital photography of moving scenes and by infrared photography. Until recently, photographing moving objects could only be done using short exposure times. Yet two recent groundbreaking works have proposed new camera designs allowing arbitrary exposure times. The flutter shutter of Agrawal et al. creates an invertible motion blur by using a clever shutter technique to interrupt the photon flux during the exposure time according to a well-chosen binary sequence. The motion-invariant photography of Levin et al. obtains the same result by accelerating the camera at a constant rate. Both methods follow the new paradigm of computational photography, in which camera design is rethought to include sophisticated digital processing. This thesis proposes a method for evaluating the image quality of these new cameras. The leitmotiv of the analysis is the SNR (signal-to-noise ratio) of the image after deconvolution, which quantifies the efficiency of these new camera designs in terms of image quality. The theory provides explicit formulas for the SNR. It raises two paradoxes of these cameras, and resolves them. It also provides the underlying motion model of each flutter shutter, including patented ones. A shorter second part addresses the main quality problem in infrared video imaging, non-uniformity. This perturbation is a time-dependent noise caused by the infrared sensor, structured in columns. The conclusion of this work is that it is not only possible but also efficient and robust to perform the correction on a single image. This makes it possible to guarantee the absence of "ghost artifacts", a classic problem in the literature on the subject, which stem from processing that is inadequate for the acquisition model.
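The key property of the flutter shutter described above is that interrupting the exposure according to a binary code turns the motion blur kernel into one whose Fourier transform stays away from zero, so the blur can be inverted by deconvolution. The snippet below only illustrates that property by comparing an ordinary box exposure with a random binary code; it is not the code analysed by Agrawal et al. or in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pad = 52, 512
box = np.ones(n) / n                        # conventional long exposure: box blur
code = rng.integers(0, 2, n).astype(float)  # flutter shutter: shutter open/closed
code /= max(code.sum(), 1.0)                # normalise to the same total light

for name, kernel in [("box", box), ("coded", code)]:
    spectrum = np.abs(np.fft.rfft(kernel, pad))
    # a near-zero minimum means frequencies that deconvolution cannot recover
    print(f"{name:5s} kernel: min |FFT| = {spectrum.min():.4f}")
```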
