1 |
Atmospheric correction for the visible and near-infrared channels of ATSR-2
Flowerdew, Roland John, January 1995 (has links)
No description available.
|
2 |
Magnetic resonance image distortions due to artificial macroscopic objects: an example: correction of image distortion caused by an artificial hip prosthesis
Koivula, A. (Antero), 27 November 2002 (has links)
Abstract
Eddy currents and susceptibility differences are the most important
sources that interfere with the quality of MR images in the presence of an
artificial macroscopic object in the volume to be imaged. In this study,
both of these factors have been examined.
The findings show that the RF field is the most important cause of
induced eddy currents when gradients with relatively slow slew rates are
used. The induced eddy currents amplify or dampen the RF field with the
result that the flip angle changes. At the proximal end, in the vicinity of
the hip prosthesis surface, areas were found where the flip angle is nearly
three times the reference flip angle. Areas with decreased flip angles were
also found near the surface of the prosthesis top. Image degradation due to
eddy currents manifests as areas of signal loss.
Two different methods based on MRI were developed to estimate the
susceptibility of a cylindrical object. One of them is based on
geometrical distortions in SE magnitude images, while the other takes
advantage of phase differences in GRE phase images. The estimated
susceptibility of the Profile™ test hip prosthesis is
χ = (170 ± 13) × 10⁻⁶.
A remapping method was selected to correct susceptibility image
distortions. Correction was accomplished with pixel shifts in the
frequency domain. The magnetic field distortions were measured using GRE
phase images. The method was tested by simulations and by imaging a hip
prosthesis in a water tank and in a human pelvis. The main limitations of
the method described here are the loss of a single-valued correction map
with higher susceptibility differences and the problems with phase
unwrapping in phase images. Modulation transfer functions (MTF) were
used to assess the effect of the correction procedure. The corrected
image of a prosthesis in a human hip after total hip arthroplasty appears
equally sharp or slightly sharper than the corresponding original
image.
The computer programs written for this study are presented in an
appendix.
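The remapping correction summarised above can be sketched in a few lines: an off-resonance field map is computed from two GRE phase images, converted to per-pixel shifts along the frequency-encode axis, and the shifts are then undone row by row. This is a minimal illustration under stated assumptions (already-unwrapped phases, nearest-neighbour remapping, invented parameter values), not the code from the appendix.

```python
import numpy as np

def pixel_shift_map(phase1, phase2, delta_te, bw_per_pixel):
    """Field map from two (already unwrapped) GRE phase images,
    converted to a per-pixel shift along the frequency-encode axis."""
    delta_f = (phase2 - phase1) / (2 * np.pi * delta_te)  # off-resonance [Hz]
    return delta_f / bw_per_pixel                         # shift in pixels

def remap_row(row, shifts):
    """Move each pixel back to its true position (nearest neighbour)."""
    corrected = np.zeros_like(row)
    for x, s in enumerate(shifts):
        xt = int(round(x - s))
        if 0 <= xt < row.size:
            corrected[xt] = row[x]
    return corrected

# Example: a 100 Hz off-resonance at 50 Hz/pixel readout bandwidth
# displaces signal by two pixels, which remap_row undoes.
shifts = pixel_shift_map(np.zeros(10), np.full(10, np.pi), 0.005, 50.0)
corrected = remap_row(np.arange(10.0), shifts)
```

The loss of a single-valued correction map mentioned in the abstract appears here when two source pixels map to the same target, at which point a simple assignment like the one above is no longer well defined.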
|
3 |
Statistical analysis methods for time varying nanoscale imaging problems
Laitenberger, Oskar, 29 June 2018 (has links)
No description available.
|
4 |
Performance of a Micro-CT System: Characterisation of Hamamatsu X-ray source L10951-04 and flat panel C7942CA-22 / Prestanda hos ett Micro-CT System: Karaktärisering av Hamamatsu röntgenkälla L10951-04 och plattpanel C7942CA-22
Baumann, Michael, January 2014 (has links)
This master's thesis evaluated the performance of a micro-CT system consisting of the Hamamatsu microfocus X-ray source L10951-04 and the CMOS flat panel C7942CA-22. The X-ray source and flat panel were characterised in terms of dark current, image noise and beam profile. Additionally, the micro-CT system's spatial resolution, detector lag and detector X-ray response were measured. Guidance for full image correction, together with methods for characterising and performance-testing the X-ray source and detector, is presented. A spatial resolution of 7 lp/mm at 10 % MTF was measured. A detector lag of 0.3 % was observed after ten minutes of radiation exposure. The performance of the micro-CT system was found to be sufficient for high-resolution X-ray imaging. However, the detector lag effect is strong enough to reduce image quality during subsequent image acquisition and must either be avoided or corrected for.
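For context, "full image correction" for a CMOS flat panel conventionally means dark-frame subtraction followed by flat-field (gain) normalisation. The sketch below shows that standard pipeline on synthetic frames; the function name and all numbers are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def correct_image(raw, dark, flat):
    """Flat-panel correction: subtract the dark frame, then divide by
    the normalised, dark-corrected flat field (per-pixel gain map)."""
    gain = flat - dark
    gain = gain / gain.mean()                  # mid-panel gain ~ 1
    return (raw - dark) / np.clip(gain, 1e-6, None)

# Toy frames: fixed-pattern dark current plus a beam profile that
# brightens from left to right across the panel.
rng = np.random.default_rng(0)
dark = 100.0 + rng.normal(0.0, 2.0, (64, 64))
beam = np.linspace(0.8, 1.2, 64)[None, :] * np.ones((64, 64))
flat = dark + 1000.0 * beam
raw = dark + 500.0 * beam                      # uniform object exposure
corrected = correct_image(raw, dark, flat)     # ~flat image of value 500
```

After correction, the left-to-right beam gradient is removed and the dark current's fixed pattern cancels, leaving a uniform image.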
|
5 |
BrandGAN: Unsupervised Structural Image Correction
El Katerji, Mostafa, 12 May 2021 (has links)
Recently, machine learning models such as Generative Adversarial Networks and Autoencoders have received significant attention from the research community. Researchers have developed novel ways of using this technology for image manipulation, including cross-domain image-to-image transformation, upsampling, style imprinting, human facial editing, and computed tomography correction. Previous work primarily focuses on transformations where the output inherits the same skeletal outline as the input image.
This work proposes a novel framework, called BrandGAN, that tackles image correction for hand-drawn images. One of this problem’s novelties is that it requires the skeletal outline of the input image to be manipulated and adjusted to look more like a target reference while retaining key visual features that were included intentionally by its creator.
GANs, when trained on a dataset, are capable of producing a large variety of novel images derived from a combination of visual features from the original dataset. StyleGAN is a model that iterated on the concept of GANs and was able to produce high-fidelity images such as human faces and cars. StyleGAN includes a process called projection that finds an encoding of an input image capable of producing a visually similar image. Projection in StyleGAN demonstrated the model’s ability to represent real images that were not a part of its training dataset. StyleGAN encodings are vectors that represent features of an image. Encodings can be combined to merge or manipulate features of distinct images.
In BrandGAN, we tackle image correction by leveraging StyleGAN's projection and encoding vector feature manipulation. We present a modified version of projection to find an encoding representation of hand-drawn images. We propose a novel GAN indexing technique, called GANdex, capable of finding encodings of novel images derived from the original dataset that share visual similarities with the input image. Finally, with vector feature manipulation, we combine the GANdex vector's features with the input image's projection to produce the final image-corrected output. Combining the vectors adjusts the input's imperfections to resemble the original dataset's structure while retaining novel features from the raw input image. We evaluate seventy-five hand-drawn images collected through a study with fifteen participants, using objective and subjective measures. BrandGAN reduced the Fréchet Inception Distance from 193 to 161 and the Kernel Inception Distance from 0.048 to 0.026 when comparing the hand-drawn and BrandGAN output images to the reference design dataset. A blinded experiment showed that the average participant could identify 4.33 out of 5 images as their own when presented with a visually similar control image. We included a survey that collected opinion scores ranging from one ("strongly disagree") to five ("strongly agree"). The average participant answered 4.32 for the retention of detail, 4.25 for the output's professionalism, and 4.57 for their preference of using the BrandGAN output over their own.
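The final combination step can be pictured as interpolation between two latent vectors: the projection of the hand-drawn input and the GANdex match from the dataset. The sketch below is a deliberately simplified assumption (a single global `alpha` weight and a plain linear mix); the actual framework may weight features per layer.

```python
import numpy as np

def blend_encodings(w_projection, w_gandex, alpha=0.5):
    """Interpolate in latent space: alpha=0 keeps the raw hand-drawn
    projection, alpha=1 snaps fully to the dataset's structure."""
    return (1.0 - alpha) * w_projection + alpha * w_gandex

# Hypothetical StyleGAN-style encodings (e.g. 18 layers x 512 dims).
w_draw = np.zeros((18, 512))
w_match = np.ones((18, 512))
w_out = blend_encodings(w_draw, w_match, alpha=0.5)
```

Feeding `w_out` back through the generator would then yield the corrected image: dataset-like structure with features carried over from the drawing.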
|
6 |
Assessment of Remotely Sensed Image Processing Techniques for Unmanned Aerial System (UAS) Applications
Zarzar, Christopher Michael, 11 August 2017 (has links)
Unmanned Aerial Systems (UASs) offer a new era of local-scale environmental monitoring where access to invaluable aerial data no longer comes at a substantial cost. This provides the opportunity to vastly expand the ability to detect natural hazard impacts, observe environmental conditions, quantify restoration efforts, track species propagation, monitor land surface changes, cross-validate existing platforms, and identify hazardous situations. While UASs have the potential to accelerate understanding of natural processes, much of the research using UASs has applied current remote sensing image processing techniques without questioning their validity in UAS applications. With new scientific tools comes a need to affirm that previous techniques remain valid for the new systems. To this end, the objective of the current study is to assess the use of current remote sensing image processing techniques in UAS applications. The research reported herein finds that atmospheric effects have a statistically significant impact on low-altitude UAS imagery. Correcting for these external factors was successful using an empirical line calibration (ELC) image correction technique and required little modification for use in a complex UAS application. Finally, it was found that classification performance of UAS imagery depends on training sample size more than on classification technique, and that training sample size requirements are larger than previous remote sensing studies suggest.
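The empirical line calibration used here fits a linear map between at-sensor digital numbers (DN) and known surface reflectances measured over calibration targets, then inverts that map across the whole image. A minimal per-band sketch with hypothetical panel values (the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical calibration targets: reflectance from a field
# spectrometer vs. mean DN extracted over each panel in the imagery.
panel_reflectance = np.array([0.05, 0.22, 0.44, 0.70])
panel_dn = np.array([31.0, 95.0, 180.0, 278.0])

# ELC per band: fit reflectance = gain * DN + offset over the panels,
# then apply it to every pixel to convert the band to reflectance.
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

def elc(dn):
    return gain * dn + offset

image_dn = np.array([[40.0, 150.0], [200.0, 260.0]])
surface_reflectance = elc(image_dn)
```

In practice the panels must span the image's brightness range, and the fit is repeated independently for each spectral band.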
|
7 |
Performance of the BRITE Prototype Photometer Under Real Sky Conditions
Bode, Willem, January 2011 (has links)
Wide-field photometry is prone to various degradations, such as atmospheric extinction, varying point spread functions, and aliasing, in addition to classical noise sources such as photon, sky background, readout, and thermal noise. While space-borne observations do not suffer from atmospheric effects, varying star images over a large sensor and aliasing may seriously impede good results. A measure of the achievable precision of ground-based differential photometry with the prototype photometer for the BRITE satellite mission is reported, using real sky observations. The data were obtained with the photometer attached to a Paramount tracking platform, using the Image Reduction and Analysis Facility (IRAF) software as well as the author's own Matlab code. Special emphasis is placed on the analysis of varying apertures for varying point spread functions, which shows that the accuracy can be improved by taking into account the statistics for each star instead of using a fixed aperture. In addition, a function is defined which describes the expected error in terms of instrumental magnitudes, taking into account Poisson-distributed noise and magnitude-independent noise, mainly aliasing. This function is then fit to observed data in a two-dimensional least-squares sense, yielding a calculated aliasing error of 7 millimagnitudes. The function is furthermore rewritten in terms of the standard magnitude B. A maximum magnitude can then be determined for a given precision, which shows that the Bright Target Explorer (BRITE) can reach a photometric error of 1 millimagnitude for stars with magnitude B < 3.5, assuming the worst-case duty cycle of 15 minutes.
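The two-parameter error model described above, a Poisson term plus a magnitude-independent floor, becomes linear in the squared quantities after substituting x = 10^(0.4 m), so an ordinary least-squares fit recovers both contributions. The sketch below demonstrates this on synthetic scatter; the parameter values and the linearised fit are assumptions for illustration, not the thesis data or method.

```python
import numpy as np

def sigma_model(m, a, b):
    """Expected error vs. instrumental magnitude m: the Poisson term
    grows toward faint stars as 10**(0.2*m); b is the magnitude-
    independent floor (mainly aliasing in the thesis)."""
    return np.sqrt((a * 10 ** (0.2 * m)) ** 2 + b ** 2)

# Synthetic observations from known parameters, with 2% scatter.
m = np.linspace(-10.0, -2.0, 40)
rng = np.random.default_rng(1)
true_a, true_b = 0.05, 0.007                 # ~7 mmag aliasing floor
sigma_obs = sigma_model(m, true_a, true_b) * (1 + rng.normal(0, 0.02, m.size))

# sigma**2 = a**2 * x + b**2 with x = 10**(0.4*m) is a straight line,
# so a plain least-squares fit separates the two noise contributions.
x = 10 ** (0.4 * m)
A, B = np.polyfit(x, sigma_obs ** 2, 1)
a_fit, b_fit = float(np.sqrt(A)), float(np.sqrt(B))
```

The fitted intercept plays the role of the aliasing floor: it is what remains of the error budget for the brightest stars, where photon noise becomes negligible.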
|
8 |
The Adaptive Optics Lucky Imager: combining adaptive optics and lucky imaging
Crass, Jonathan, January 2014 (has links)
Some of the highest-resolution astronomical images ever taken at visible wavelengths were obtained by combining the techniques of adaptive optics and lucky imaging. The Adaptive Optics Lucky Imager (AOLI), being developed at Cambridge as part of a European collaboration, combines these two techniques in a dedicated instrument for the first time. The instrument is designed initially for use on the 4.2m William Herschel Telescope (WHT) on the Canary Island of La Palma. This thesis describes the development of AOLI, in particular the adaptive optics system and a new type of wavefront sensor, the non-linear curvature wavefront sensor (nlCWFS), used within the instrument. The development of the nlCWFS has been the focus of my work, bringing the technique from a theoretical concept to physical realisation at the WHT in September 2013. The non-linear curvature wavefront sensor is based on the technique employed in the conventional curvature wavefront sensor, where two image planes are located equidistant either side of a pupil plane. Two pairs of images are employed in the nlCWFS, providing increased sensitivity to both high- and low-order wavefront distortions. This sensitivity is the reason the nlCWFS was selected for AOLI: it will provide significant sky coverage using natural guide stars alone, mitigating the need for laser guide stars. This thesis is structured into three main sections. The first introduces the non-linear curvature wavefront sensor, the relevant background, and a discussion of simulations undertaken to investigate intrinsic effects; the iterative reconstruction algorithm required for wavefront reconstruction is also introduced. The second section discusses the practical implementation of the nlCWFS using two demonstration systems as the precursor to the optical design used at the WHT, and includes details of subsequent design changes.
The final section discusses data from both the WHT and a laboratory setup developed at Cambridge following the observing run. The long-term goal for AOLI is to undertake science observations on the 10.4m Gran Telescopio Canarias, the world's largest optical telescope. The combination of AO and lucky imaging, when used on this telescope, will provide resolutions a factor of two higher than ever before achieved at visible wavelengths. This offers the opportunity to probe the Cosmos in unprecedented detail and has the potential to significantly advance our understanding of the Universe.
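For readers unfamiliar with curvature sensing: the conventional sensor that the nlCWFS generalises forms a normalised difference between the two defocused images, which in the geometric-optics limit is proportional to the local wavefront curvature. A minimal sketch (the zero-intensity handling here is an assumption):

```python
import numpy as np

def curvature_signal(i_before, i_after):
    """Normalised intensity difference between the two planes placed
    either side of the pupil; brighter-before-focus regions indicate
    one sign of curvature, brighter-after the other."""
    total = i_before + i_after
    diff = i_before - i_after
    return np.divide(diff, total, out=np.zeros_like(total), where=total > 0)

signal = curvature_signal(np.array([2.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]))
```

The nlCWFS replaces this single ratio with two pairs of planes and an iterative phase-retrieval reconstruction, which is what gives it sensitivity across both high and low spatial orders.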
|
9 |
3D Vision Geometry for Rolling Shutter Cameras / Géométrie pour la vision 3D avec des caméras Rolling Shutter
Lao, Yizhen, 16 May 2019 (links)
De nombreuses caméras CMOS modernes sont équipées de capteurs Rolling Shutter (RS). Ces caméras à bas coût et basse consommation permettent d'atteindre de très hautes fréquences d'acquisition. Dans ce mode d'acquisition, les lignes de pixels sont exposées séquentiellement du haut vers le bas de l'image. Par conséquent, les images capturées alors que la caméra et/ou la scène est en mouvement présentent des distorsions qui rendent les algorithmes classiques au mieux moins précis, au pire inutilisables en raison de singularités ou de configurations dégénérées. Le but de cette thèse est de revisiter la géométrie de la vision 3D avec des caméras RS en proposant des solutions pour chaque sous-tâche du pipe-line de Structure-from-Motion (SfM). Le chapitre II présente une nouvelle méthode de correction du RS en utilisant les droites. Contrairement aux méthodes existantes, qui sont itératives et font l'hypothèse dite Manhattan World (MW), notre solution est linéaire et n'impose aucune contrainte sur l'orientation des droites 3D. De plus, la méthode est intégrée dans un processus de type RANSAC permettant de distinguer les courbes qui sont des projections de segments droits de celles qui correspondent à de vraies courbes 3D. La méthode de correction est ainsi plus robuste et entièrement automatisée. Le chapitre III revient sur l'ajustement de faisceaux ou bundle adjustment (BA). Nous proposons un nouvel algorithme basé sur une erreur de projection dans laquelle l'index de ligne des points projetés varie pendant l'optimisation afin de garder une cohérence géométrique, contrairement aux méthodes existantes qui considèrent un index fixe (celui mesuré dans l'image). Nous montrons que cela permet de lever la dégénérescence dans le cas où les directions de scan des images sont trop proches (cas très commun avec des caméras embarquées sur un véhicule par exemple).
Dans le chapitre IV, nous étendons le concept d'homographie au cas d'images RS en démontrant que la relation point-à-point entre deux images d'un nuage de points coplanaires peut s'exprimer sous la forme de 3 à 7 matrices de taille 3×3 en fonction du modèle de mouvement utilisé. Nous proposons une méthode linéaire pour le calcul de ces matrices. Ces dernières sont ensuite utilisées pour résoudre deux problèmes classiques en vision par ordinateur, à savoir le calcul du mouvement relatif et le « mosaïcing » dans le cas RS. Dans le chapitre V, nous traitons le problème de calcul de pose et de reconstruction multi-vues en établissant une analogie avec les méthodes utilisées pour les surfaces déformables telles que SfT (Structure-from-Template) et NRSfM (Non-Rigid Structure-from-Motion). Nous montrons qu'une image RS d'une scène rigide en mouvement peut être interprétée comme une image Global Shutter (GS) d'une surface virtuellement déformée (par l'effet RS). La solution proposée pour estimer la pose et la structure 3D de la scène est ainsi composée de deux étapes. Les déformations virtuelles sont d'abord calculées grâce à SfT ou NRSfM en supposant un modèle GS classique (relaxation du modèle RS). Ensuite, ces déformations sont réinterprétées comme étant le résultat du mouvement durant l'acquisition (réintroduction du modèle RS). L'approche proposée présente ainsi de meilleures propriétés de convergence que les approches existantes. / Many modern CMOS cameras are equipped with Rolling Shutter (RS) sensors, which are low-cost, low-power and fast. In this acquisition mode, the pixel rows are exposed sequentially from the top to the bottom of the image. Therefore, images captured by moving RS cameras exhibit distortions (e.g. wobble and skew) which make the classic algorithms at best less precise, at worst unusable due to singularities or degeneracies.
The goal of this thesis is to propose a general framework for modelling and solving structure from motion (SfM) with RS cameras. Our approach consists in addressing each sub-task of the SfM pipe-line (namely image correction, absolute and relative pose estimation, and bundle adjustment) and proposing improvements. The first part of this manuscript presents a novel RS correction method which uses line features. Unlike existing methods, which use iterative solutions and make the Manhattan World (MW) assumption, our method, R4C, linearly computes the camera's instantaneous motion from a few image features. Besides, the method is integrated into a RANSAC-like framework which enables us to detect curves that correspond to actual 3D straight lines and to reject outlier curves, making image correction more robust and fully automated. The second part revisits Bundle Adjustment (BA) for RS images. It deals with a limitation of existing RS bundle adjustment methods in the case of close read-out directions among RS views, which is a common configuration in many real-life applications. In contrast, we propose a novel camera-based RS projection algorithm and incorporate it into RSBA to calculate reprojection errors. We find that this new algorithm allows SfM to survive the degenerate configuration mentioned above. The third part proposes a new RS homography matrix based on point correspondences from an RS pair. Linear solvers for the computation of this matrix are also presented. Specifically, a practical solver using 13 point correspondences is proposed. In addition, we present two essential applications in computer vision that use the RS homography: plane-based RS relative pose estimation and RS image stitching. The last part of this thesis studies the absolute camera pose problem (PnP) and SfM, handling RS effects by drawing analogies with non-rigid vision, namely Shape-from-Template (SfT) and Non-rigid SfM (NRSfM) respectively.
Unlike all existing methods, which perform 3D-2D registration after augmenting the Global Shutter (GS) projection model with velocity parameters under various kinematic models, we propose to use local differential constraints. The proposed methods outperform the state of the art and handle configurations that are critical for existing methods.
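The core rolling-shutter idea running through this thesis, that each image row is exposed at its own time and therefore under its own camera pose, can be sketched as a fixed-point iteration on the row index. The pure-translation motion model and every numeric value below are illustrative assumptions, far simpler than the kinematic models actually studied:

```python
import numpy as np

def rs_project(X, K, c0, velocity, read_time, n_rows):
    """Project 3D point X with a rolling shutter: the camera centre
    translates at `velocity` during readout, and row v is exposed at
    time (v / n_rows) * read_time.  The projected row must agree with
    the pose at its own exposure time, hence the fixed-point loop."""
    v = n_rows / 2.0                         # start from the mid-frame row
    for _ in range(10):
        t = (v / n_rows) * read_time
        c = c0 + velocity * t                # camera centre at row time t
        x = K @ (X - c)                      # pinhole model, rotation omitted
        u, v = x[0] / x[2], x[1] / x[2]
    return np.array([u, v])

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
# With zero velocity the model reduces to an ordinary global-shutter
# projection, a useful sanity check.
uv = rs_project(np.array([0.1, 0.2, 5.0]), K, np.zeros(3), np.zeros(3), 0.03, 480)
```

It is exactly this row-time coupling, a projection that depends on its own output row, that breaks the classic GS algorithms and motivates the dedicated RS solvers above.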
|
10 |
Untersuchungen zur Optimierung eines Gammakameradetektors durch die Auswertung seiner verrauschten Antwort auf die Gammaquanten aus einer verfahrbaren Feinnadelstrahlquelle / Investigations for the optimization of a gamma camera detector based on the analysis of its noisy response to gamma quanta originating from a movable pencil-beam source
Engeland, Uwe, 31 January 2001 (has links)
No description available.
|