31 |
High-resolution imaging using a translating coded aperture
Mahalanobis, Abhijit; Shilling, Richard; Muise, Robert; Neifeld, Mark (22 August 2017)
It is well known that a translating mask can optically encode low-resolution measurements from which higher resolution images can be computationally reconstructed. We experimentally demonstrate that this principle can be used to achieve a substantial increase in image resolution compared to the size of the focal plane array (FPA). Specifically, we describe a scalable architecture with a translating mask (also referred to as a coded aperture) that achieves eightfold resolution improvement (or a 64:1 increase in the number of pixels compared to the number of focal plane detector elements). The imaging architecture is described in terms of general design parameters (such as field of view and angular resolution, dimensions of the mask, and the detector and FPA sizes), and some of the underlying design trades are discussed. Experiments conducted with different mask patterns and reconstruction algorithms illustrate how these parameters affect the resolution of the reconstructed image. Initial experimental results also demonstrate that the architecture can directly support task-specific information sensing for detection and tracking, and that moving objects can be reconstructed separately from the stationary background using motion priors. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)
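The principle described above, in which a small detector views the scene through a translating mask, can be written as a stacked linear system and inverted. Below is a minimal 1-D sketch, not the paper's architecture: the sizes (8-pixel scene, 2-element detector), the mask pattern, and the least-squares solver are illustrative assumptions.

```python
import numpy as np

# Toy 1-D coded-aperture model: a 2-element detector observes an 8-pixel
# scene through a translating binary mask; each detector element sums
# 4 adjacent masked scene pixels. All sizes here are illustrative.
N, M, F = 8, 2, 8          # scene pixels, detector elements, mask shifts
B = N // M                 # scene pixels per detector element
mask = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=float)

rng = np.random.default_rng(0)
scene = rng.random(N)      # unknown high-resolution scene

# Stack the linear measurement operator over all mask positions.
rows, meas = [], []
for t in range(F):
    m_t = np.roll(mask, t)                 # translated mask
    for j in range(M):
        row = np.zeros(N)
        row[j * B:(j + 1) * B] = m_t[j * B:(j + 1) * B]
        rows.append(row)
        meas.append(row @ scene)           # one low-resolution measurement
A, y = np.array(rows), np.array(meas)

# Least-squares reconstruction of the high-resolution scene.
recon, *_ = np.linalg.lstsq(A, y, rcond=None)
err = np.max(np.abs(recon - scene))
```

Each mask shift contributes fresh linear equations; once the stacked operator reaches full column rank, the 8-pixel scene is recovered exactly from 2-element measurements, the 1-D analogue of the 64:1 pixel-count gain described above.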
|
32 |
Image Transfer Between Magnetic Resonance Images and Speech Diagrams
Wang, Kang (3 December 2020)
Real-time magnetic resonance imaging (MRI) is a method used for human anatomical study. MRI gives exceptionally detailed information about soft-tissue structures, such as the tongue, that other current imaging techniques cannot match. However, the process requires special equipment and is expensive, so it is not suitable for all patients.
Speech diagrams show the side-view positions of organs such as the tongue, throat, and lips of a speaking or singing person. Making a speech diagram is similar to semantic segmentation of an MRI, focusing on selected edge structures. A clear speech diagram of the tongue and inner mouth structure is easy to understand, but producing one typically requires manual annotation of the MRI by an expert in the field.
Using machine learning methods, we achieved image transfer between MRIs and speech diagrams in both directions. We first matched videos of speech diagrams with tongue MRIs. Then we applied various image processing and data augmentation methods to make the paired images easier to train on. We built our network model inspired by different cross-domain image transfer methods and applied reference-based super-resolution methods to generate high-resolution images. Thus, the transfer can be done by our network instead of manually. In addition, the generated speech diagram can serve as an intermediary to be transferred to other medical images such as computed tomography (CT), since it is simpler in structure than an MRI.
We conducted experiments using both data from our database and other MRI video sources. We evaluated with multiple methods, and comparisons with several related methods show the superiority of our approach.
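Augmenting paired training data, as described above, requires applying the identical random transform to both images of a pair so they stay registered. A minimal sketch (the function name, flip probability, and crop size are illustrative assumptions, not the thesis's actual pipeline):

```python
import numpy as np

def augment_pair(mri, diagram, rng):
    """Apply one identical random flip and crop to both images of a pair."""
    assert mri.shape == diagram.shape
    if rng.random() < 0.5:                       # shared horizontal flip
        mri, diagram = mri[:, ::-1], diagram[:, ::-1]
    h, w = mri.shape
    ch, cw = h - 8, w - 8                        # shared random 8-px crop
    top, left = rng.integers(0, 9, size=2)
    return (mri[top:top + ch, left:left + cw],
            diagram[top:top + ch, left:left + cw])

rng = np.random.default_rng(1)
mri = np.arange(64 * 64, dtype=float).reshape(64, 64)
dia = mri * 2.0                                  # stand-in paired diagram
a, b = augment_pair(mri, dia, rng)
```

Because the flip and crop are shared, any pixel-wise correspondence between the two images survives augmentation, which is what paired training depends on.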
|
33 |
Super-resolution fluorescence imaging of membrane nanoscale architectures of hematopoietic stem cell homing and migration molecules
AbuZineh, Karmen (12 1900)
The recent development of super-resolution (SR) fluorescence microscopy techniques has provided a new tool for direct visualization of subcellular structures and their dynamics in cells. The homing of hematopoietic stem/progenitor cells (HSPCs) to bone marrow is a multistep process that is initiated by the tethering of HSPCs to the endothelium and mediated by spatiotemporally organised interactions of selectins expressed on endothelial cells with their ligands expressed on HSPCs, which occur against the shear stress exerted by blood flow. Although the molecules and biological processes involved in this multistep cellular interaction have been studied extensively, the molecular mechanisms of homing, in particular the nanoscale spatiotemporal behaviour of ligand-receptor interactions and their role in the cellular interaction, remain elusive. Using our new microfluidics-based super-resolution fluorescence imaging platform, we can now characterize the correlation between nanoscale ligand-receptor interactions and the tethering/rolling of cells under external shear stress. We found that cell rolling on E-selectin caused significant reorganization of the nanoscale clustering of CD44 and CD43, from patchy clusters ~200 nm in size to elongated network-like structures. For PSGL-1, the cluster size (~85 nm) did not change significantly, but after cell rolling PSGL-1 aggregated to one side of the cell or even exhibited an increased footprint. Furthermore, I established the use of 3D SR images, which indicated that the patchy clusters of CD44 localize to protruding structures of the cell surface. On the other hand, a significant fraction of the network-like elongated CD44 clusters observed after rolling were located in close proximity to the E-selectin surface. The effect of this nanoscale reorganization of the clusters on HSPC rolling over selectins is still an open question at this stage.
Nevertheless, my results further demonstrate that this mechanical-force-induced reorganisation is accompanied by a large structural reorganisation of the actin cytoskeleton. Our microfluidics-based SR imaging also demonstrates an essential role of the nanoscale clustering of CD44 in the stable rolling behaviour of cells. Our new experimental platform enhances understanding of the relationship between nanoscopic ligand-receptor interactions and macroscopic cellular interactions, providing a foundation for characterizing the complicated HSPC homing process.
|
34 |
Rekonstrukce nekvalitních snímků obličejů / Facial image restoration
Bako, Matúš (January 2020)
In this thesis, I tackle the problem of facial image super-resolution using convolutional neural networks, with a focus on preserving identity. I propose a method consisting of the DPNet architecture and a training algorithm based on state-of-the-art super-resolution solutions. The DPNet model is trained on the Flickr-Faces-HQ dataset, where I achieve an SSIM value of 0.856 while upscaling the image to four times its size. The residual channel attention network, one of the best and latest architectures, achieves an SSIM value of 0.858. While training models with an adversarial loss, I encountered problems with artifacts; I experimented with various methods to remove them, so far without success. To compare quality assessment with human perception, I acquired image sequences sorted by perceived quality. The results show that the quality of the proposed neural network trained with an absolute loss approaches state-of-the-art methods.
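The SSIM values quoted above can be made concrete from the metric's closed form. The sketch below computes a single-window (global) SSIM; real evaluations such as the one in this thesis average SSIM over local windows, so this is only a simplified illustration:

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over whole images (the standard metric averages
    this statistic over local windows; computed globally here for brevity)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
same = global_ssim(img, img)     # identical images score exactly 1.0
noisy = global_ssim(img, np.clip(img + 0.2 * rng.random((32, 32)), 0, 1))
```

Identical images score 1.0, and any distortion of luminance, contrast, or structure pulls the score below 1, which is why small gaps such as 0.856 vs. 0.858 are meaningful.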
|
35 |
Deep Learning for Advanced Microscopy / Apprentissage profond pour la microscopie avancée
Ouyang, Wei (18 October 2018)
Background: Microscopy has played an important role in biology for several centuries, but its resolution has long been limited to ~250 nm by diffraction, leaving many important biological structures (e.g. viruses, vesicles, nuclear pores, synapses) unresolved. Over the last decade, several super-resolution methods have been developed that break this limit. Among the most powerful and popular super-resolution techniques are those based on single-molecule localization (single-molecule localization microscopy, or SMLM), such as PALM and STORM. By precisely localizing the positions of isolated fluorescent molecules in thousands or more sequentially acquired diffraction-limited images, SMLM can achieve resolutions of 20-50 nm or better. However, SMLM is inherently slow because it must accumulate enough localizations to achieve high-resolution sampling of the fluorescent structures. The slow acquisition speed (typically ~30 minutes per super-resolution image) makes it difficult to use SMLM in high-throughput and live-cell imaging. Many methods have been proposed to address this issue, mostly by improving the localization algorithms to localize overlapping spots, but most of them compromise spatial resolution and cause artifacts.
Methods and results: In this work, we applied a deep learning based image-to-image translation framework to improve imaging speed and quality by restoring information from rapidly acquired low-quality SMLM images.
By utilizing recent advances in deep learning, including the U-Net and generative adversarial networks (GANs), we developed Artificial Neural Network Accelerated PALM (ANNA-PALM), which learns structural information from training images and uses the trained model to accelerate SMLM imaging by tens to hundreds of fold. With experimentally acquired images of different cellular structures (microtubules, nuclear pores, and mitochondria), we demonstrated that deep learning can efficiently capture the structural information from fewer than 10 training samples and reconstruct high-quality super-resolution images from sparse, noisy SMLM images obtained with much shorter acquisitions than usual for SMLM. We also showed that ANNA-PALM is robust to possible variations between training and testing conditions, due either to changes in the biological structure or to changes in imaging parameters. Furthermore, we took advantage of the acceleration provided by ANNA-PALM to perform high-throughput experiments, acquiring ~1000 cells at high resolution in ~3 hours. Additionally, we designed a tool to estimate and reduce possible artifacts by measuring the consistency between the reconstructed image and the experimental wide-field image. Our method enables faster and gentler imaging that can be applied to high-throughput experiments, and provides a novel avenue towards live-cell high-resolution imaging. Deep learning methods rely on training data, and their performance can be improved even further with more training data. One inexpensive way to obtain more training data is through data sharing within the microscopy community. However, it is often difficult to exchange or share localization microscopy data, because localization tables alone are typically several gigabytes in size, and there is no dedicated platform for localization microscopy data that provides features such as rendering, visualization, and filtering.
To address these issues, we developed a file format that losslessly compresses localization tables into smaller files, along with a web platform called ShareLoc (https://shareloc.xyz) that makes it easy to visualize and share 2D or 3D SMLM data. We believe this platform can greatly improve the performance of deep learning models, accelerate tool development, facilitate data re-analysis, and further promote reproducible research and open science.
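Rendering a localization table, the kind of data such a platform must store and display, into a super-resolution image is essentially a 2-D histogram of molecule positions at a chosen pixel size. A minimal sketch with synthetic localizations (the field size and 20 nm rendering pixel are illustrative assumptions, not ShareLoc's actual renderer):

```python
import numpy as np

# Hypothetical localization table: one (x, y) position per detected
# fluorophore, in nanometres, over a 2 x 2 um field of view.
rng = np.random.default_rng(0)
locs = rng.uniform(0, 2000, size=(5000, 2))

pixel_nm = 20.0                        # rendering pixel size
bins = int(2000 / pixel_nm)            # 100 x 100 output image
image, _, _ = np.histogram2d(locs[:, 0], locs[:, 1],
                             bins=bins, range=[[0, 2000], [0, 2000]])
```

The table itself (5000 rows of coordinates) is far larger than it needs to be on disk, which is why a compressed format for localization tables is worthwhile.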
|
36 |
Compressive Point Cloud Super Resolution
Smith, Cody S. (1 August 2012)
Automatic target recognition (ATR) is the ability of a computer to discriminate between different objects in a scene. ATR is often performed on point cloud data from a sensor known as a Ladar. Increasing the resolution of this point cloud, to obtain a clearer view of the object in a scene, is of significant interest in ATR applications.
A technique for increasing the resolution of a scene is known as super resolution. This technique traditionally requires many low-resolution images that can be combined. In recent years, however, it has become possible to perform super resolution on a single image. This thesis applies Gabor wavelets and compressive sensing to single-image super resolution of digital images of natural scenes. The technique developed for images is then extended to allow the super resolution of a point cloud.
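Compressive sensing recovers a signal that is sparse in some dictionary from a small number of linear measurements. The sketch below uses orthogonal matching pursuit, one common recovery algorithm; the random Gaussian dictionary, its sizes, and the sparsity level are illustrative assumptions, not the thesis's Gabor-wavelet formulation:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A that
    best explain y, then least-squares fit on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary atoms
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -2.0, 1.5]  # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
```

With only 40 measurements of a 100-dimensional signal, the 3-sparse coefficient vector is recovered, which is the mechanism that lets a single low-resolution observation constrain a higher-resolution reconstruction.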
|
37 |
High-Resolution X-Ray Image Generation from CT Data Using Super-Resolution
Ma, Qing (4 October 2021)
Synthetic X-ray images, or digitally reconstructed radiographs (DRRs), are simulated X-ray images projected from computed tomography (CT) data, commonly used for registering CT and real X-ray images. High-quality synthetic X-ray images can facilitate various applications, such as guidance images for virtual reality (VR) simulation and training data for deep learning methods, for example creating CT data from X-ray images.
It is challenging to generate high-quality synthetic X-ray images from CT slices, especially at various view angles, due to gaps between CT slices, high computational cost, and algorithmic complexity. Most synthetic X-ray generation methods use fast ray tracing in situations where the image-quality demand is low. We aim to improve image quality while maintaining good accuracy, in two steps: 1) generating synthetic X-ray images from CT data, and 2) increasing the resolution of the synthetic X-ray images.
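Step 1 above, projecting CT data into a synthetic X-ray, reduces in the simplest parallel-ray case to a Beer-Lambert line integral through the volume. A minimal orthographic sketch (the thesis uses a matrix-based projection with lookup tables; the random volume and attenuation scaling here are illustrative assumptions):

```python
import numpy as np

# Toy CT volume: (slices, rows, cols) in arbitrary units.
rng = np.random.default_rng(0)
ct = rng.random((32, 64, 64))

# Beer-Lambert along parallel rays through the slice axis: each output
# pixel is exp(-integral of attenuation), giving transmitted intensity.
mu = ct * 0.02                    # toy attenuation coefficients
drr = np.exp(-mu.sum(axis=0))     # one 64 x 64 synthetic radiograph
```

A perspective (cone-beam) DRR replaces the straight axis sum with interpolation along diverging rays, which is where the gaps between CT slices and the computational cost mentioned above come in.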
Our synthetic X-ray image generation method adopts a matrix-based projection method and dynamic multi-segment lookup tables, which show better image quality and efficiency than conventional synthetic X-ray image generation methods. Our method is tested in a real-time VR training system for image-guided intervention procedures.
Then we propose two novel approaches to raise the quality of synthetic X-ray images through deep learning. We use a reference-based super-resolution (RefSR) method as a base model to upsample low-resolution images to higher resolution. Even though RefSR can produce fine details by utilizing the reference image, it inevitably generates some artifacts and noise. We propose texture transformer super-resolution with frequency domain (TTSR-FD), which introduces a frequency-domain loss as a constraint to improve the quality of RefSR results, with fine details and without apparent artifacts. To the best of our knowledge, this is the first work that utilizes the frequency domain as part of the loss function in the field of super-resolution (SR). We observe improved performance when evaluating TTSR-FD on our synthetic X-ray and real X-ray image datasets.
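A frequency-domain loss of the kind TTSR-FD introduces can be sketched as a distance between FFT magnitudes of the prediction and the target; the exact loss used in the thesis may differ from this minimal form:

```python
import numpy as np

def frequency_loss(pred, target):
    """L1 distance between 2-D FFT magnitudes: one plausible form of a
    frequency-domain constraint (illustrative, not the TTSR-FD loss)."""
    return np.mean(np.abs(np.abs(np.fft.fft2(pred)) -
                          np.abs(np.fft.fft2(target))))

rng = np.random.default_rng(0)
target = rng.random((64, 64))
# A vertically blurred copy loses high-frequency content.
blurry = (target + np.roll(target, 1, 0) + np.roll(target, -1, 0)) / 3.0

zero = frequency_loss(target, target)
blur = frequency_loss(blurry, target)
```

Blurring leaves many pixels nearly unchanged but suppresses high spatial frequencies, so a spectral term like this penalizes exactly the missing fine detail that a pixel-wise loss underweights.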
A typical SR network is trained with paired high-resolution (HR) and low-resolution (LR) images, where the LR images are created by downsampling the HR images with a specific kernel. The same downsampling kernel is also used to create test LR images from HR images. As a result, most SR methods only perform well when the test image is produced with the same downsampling kernel used during training. We therefore propose TTSR-DMK, which uses multiple downsampling kernels during training to generalize the model, and adopts a dual model trained together with the main model. The dual model forms a closed loop with the main model to learn the inverse mapping, which further improves performance. Our method works well for test images produced with any of the kernels used during training, and also improves performance when test images are produced with kernels not seen during training. To the best of our knowledge, we are the first to use a closed-loop method in RefSR.
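Generating LR training images with several downsampling kernels, the data-side idea behind TTSR-DMK, can be sketched as blur-then-subsample with a bank of kernels (the specific separable kernels and 2x stride below are illustrative assumptions):

```python
import numpy as np

def downsample(hr, kernel, stride=2):
    """Blur with a separable 1-D kernel along both axes, then subsample."""
    k = np.asarray(kernel, dtype=float)
    k /= k.sum()                                  # normalize to unit gain
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, hr)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return blur[::stride, ::stride]

rng = np.random.default_rng(0)
hr = rng.random((32, 32))
kernels = {"box": [1, 1, 1], "tent": [1, 2, 1], "sharp": [0, 1, 0]}
lr_set = {name: downsample(hr, k) for name, k in kernels.items()}
```

Each kernel yields a different LR rendition of the same HR image; training on the whole set, rather than a single fixed kernel, is what lets the model generalize across degradation types.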
We have achieved: (i) synthetic X-ray image generation from CT data, based on matrix-based projection and lookup tables; (ii) TTSR-FD, synthetic X-ray image super-resolution using a novel frequency-domain loss; (iii) TTSR-DMK, an adaptation network that overcomes the performance drop on test data that do not match the downsampling kernels used in training.
Our TTSR-FD results show improvements (PSNR from 37.953 to 39.009) over the state-of-the-art method TTSR. In our experiments with real X-ray images, TTSR-FD removes visible artifacts in the qualitative study even though the PSNR is similar. Our proposed adaptation network, TTSR-DMK, improves model performance for multiple kernels, even with unknown kernels.
|
38 |
Super-Resolution via Image Recapture and Bayesian Effect Modeling
Toronto, Neil B. (11 March 2009)
The goal of super-resolution is to increase not only the size of an image, but also its apparent resolution, making the result more plausible to human viewers. Many super-resolution methods do well at modest magnification factors, but even the best suffer from boundary and gradient artifacts at high magnification factors. This thesis presents Bayesian edge inference (BEI), a novel method grounded in Bayesian inference that does not suffer from these artifacts and remains competitive in published objective quality measures. BEI works by modeling the image capture process explicitly, including any downsampling, and modeling a fictional recapture process, which together allow principled control over blur. Scene modeling requires noncausal modeling within a causal framework, and an intuitive technique for that is given. Finally, BEI with trivial changes is shown to perform well on two tasks outside of its original domain—CCD demosaicing and inpainting—suggesting that the model generalizes well.
|
39 |
WAVE PROPAGATION THROUGH MULTI-LAYER METALLO-DIELECTRICS: APPLICATION TO SUPER-RESOLUTION
Serushema, Jean Bosco (12 August 2010)
No description available.
|
40 |
Super-resolution and Nonlinear Absorption with Metallodielectric Stacks
Katte, Nkorni (January 2011)
No description available.
|