121 |
FROM SEEING BETTER TO UNDERSTANDING BETTER: DEEP LEARNING FOR MODERN COMPUTER VISION APPLICATIONS. Tianqi Guo (12890459) 17 June 2022
<p>In this dissertation, we document a few of our recent attempts at bridging the gap between fast-evolving deep learning research and the vast industry need for dealing with computer vision challenges. More specifically, we developed novel deep-learning-based techniques for the following application-driven computer vision challenges: image super-resolution with quality restoration, motion estimation by optical flow, object detection for shape reconstruction, and object segmentation for motion tracking. These four topics cover the computer vision hierarchy from the low level, where digital images are processed to restore missing information for better human perception, to the middle level, where certain objects of interest are recognized and their motions analyzed, and finally to the high level, where the scene captured in video footage is interpreted for further analysis. In building these whole-package, ready-to-deploy solutions, we center our efforts on designing and training the most suitable convolutional neural networks for the particular computer vision problem at hand. Complementary procedures for data collection, data annotation, post-processing of network outputs tailored to specific application needs, and deployment details are also discussed where necessary. We hope our work demonstrates the applicability and versatility of convolutional neural networks for real-world computer vision tasks across a broad spectrum, from seeing better to understanding better.</p>
|
122 |
Deep upscaling for video streaming: a case evaluation at SVT. Lundkvist, Fredrik January 2021
While digital displays have continuously increased in resolution, video content produced before these improvements remains stuck at its original resolution, and some form of scaling is needed for a satisfactory viewing experience on high-resolution displays. In recent years, the field of video scaling has taken a leap forward in output quality due to the adoption of deep learning methods in research. In this paper, we describe a study wherein we train a convolutional neural network for super-resolution and conduct a large-scale A/B video quality test to investigate whether SVT video-on-demand viewers prefer video upscaled using a convolutional neural network to video upscaled using the standard bicubic method. Our results show that viewers generally prefer CNN-scaled video, but not necessarily for the types of content this technology would primarily be used to scale. We conclude that deep upscaling shows promise, but also believe that more optimization and flexibility are needed for deep scaling to be viable for mainstream use.
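Large-scale A/B testing measures subjective preference; when comparing bicubic and CNN upscalers, such tests are usually complemented by an objective metric like PSNR. A minimal numpy sketch of that metric (an illustrative aside, not part of the SVT study itself):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 0.1 on a unit-range image gives MSE 0.01 -> 20 dB.
ref = np.zeros((8, 8))
dist = np.full((8, 8), 0.1)
print(round(psnr(ref, dist), 2))  # → 20.0
```

In practice the metric would be computed per frame between each upscaler's output and a high-resolution reference, then averaged over the test clips.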
|
123 |
Automated restoration of archived VHS digitizations by artificial neural networks for the purpose of rebroadcasting (Automatisierte Aufbereitung archivierter VHS-Digitalisate durch künstliche neuronale Netze zum Zweck der Wiederausstrahlung). Müller, Stefanie, Kahl, Stefan, Eibl, Maximilian 16 October 2017
Video recordings from past decades constitute cultural heritage. By today's viewing standards, however, they are not suitable for rebroadcast without major limitations. This is due in part to long-outdated video recording standards, but to a large extent also to storage media that have aged uncontrollably because of inadequate storage. Local television stations often lacked the technical means to store their archive holdings durably under optimal climatic conditions. Manually searching digitized video archive material and preparing it for integration into today's productions is a time-consuming process that local TV stations cannot manage.
In this contribution we present novel methods for the automated restoration of archived VHS digitizations for rebroadcast. These focus in particular on the correction of false colors (recoloring) and on increasing the resolution from the former PAL standard to Full HD and Ultra HD (super-resolution). We employ artificial neural networks, which, unlike classical image processing methods, can capture semantic image components and take them into account during processing. In some cases this yields substantial quality improvements. We discuss the opportunities and current limitations of these technologies and demonstrate how they work on digitized video archive material.
|
124 |
Multi-Resolution Data Fusion for Super Resolution of Microscopy Images. Emma J Reid (11161374) 21 July 2021
<p>Applications in materials and biological imaging are currently limited by the ability to collect high-resolution data over large areas in practical amounts of time. One possible solution to this problem is to collect low-resolution data and apply a super-resolution interpolation algorithm to produce a high-resolution image. However, state-of-the-art super-resolution algorithms are typically designed for natural images, require aligned pairing of high and low-resolution training data for optimal performance, and do not directly incorporate a data-fidelity mechanism.</p><p><br></p><p>We present a Multi-Resolution Data Fusion (MDF) algorithm for accurate interpolation of low-resolution SEM and TEM data by factors of 4x and 8x. This MDF interpolation algorithm achieves these high rates of interpolation by first learning an accurate prior model denoiser for the TEM sample from small quantities of unpaired high-resolution data and then balancing this learned denoiser with a novel mismatched proximal map that maintains fidelity to measured data. The method is based on Multi-Agent Consensus Equilibrium (MACE), a generalization of the Plug-and-Play method, and allows for interpolation at arbitrary resolutions without retraining. We present electron microscopy results at 4x and 8x super resolution that exhibit reduced artifacts relative to existing methods while maintaining fidelity to acquired data and accurately resolving sub-pixel-scale features.</p>
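MACE generalizes the Plug-and-Play framework referenced above, in which a data-fidelity proximal map is balanced against a denoiser that encodes the learned prior. A minimal Plug-and-Play ADMM sketch for a masked (subsampled) observation model — the box-filter denoiser and the diagonal forward model are stand-in assumptions for illustration, not the MDF algorithm itself:

```python
import numpy as np

def box_denoiser(img, k=3):
    """Plug-in prior: a simple moving-average denoiser standing in for a learned one."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pnp_admm(y, mask, rho=1.0, iters=50):
    """Plug-and-Play ADMM for masked observations y = mask * x + noise."""
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        # Data-fidelity proximal map (closed form for a diagonal forward operator).
        x = (mask * y + rho * (v - u)) / (mask + rho)
        # Prior step: any denoiser can be plugged in here.
        v = box_denoiser(x + u)
        u = u + x - v
    return v

# Demo: recover a flat image from observations of every other row.
truth = np.full((16, 16), 0.5)
mask = np.zeros_like(truth)
mask[::2, :] = 1.0
y = mask * truth
recovered = pnp_admm(y, mask)
```

The key design point, shared with MACE, is that the prior enters only through the denoiser call, so a network trained on unpaired high-resolution data can replace `box_denoiser` without changing the rest of the loop.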
|
125 |
Novel Techniques in Quantum Optics: Confocal Super-Resolution Microscopy Based on a Spatial Mode Sorter and Herriott Cell as an Image-Preserving Delay Line. Bearne, Katherine Karla Misaye 18 May 2022
Breaking Rayleigh’s "curse" and resolving arbitrarily small spatial separations is one motivation for developing super-resolution in imaging systems. It has been shown that an arbitrarily small distance between two incoherent point sources can be resolved through the use of a spatial mode sorter, by treating resolution as a parameter estimation problem. However, when extending this method to general objects with many point sources, the added complexity of multi-parameter estimation makes resolving general objects quite challenging. In the first part of this thesis, we propose a new approach to this problem by generalizing the Richardson-Lucy (RL) deconvolution algorithm to accept the outputs of a mode sorter. We simulate the application of this algorithm to an incoherent confocal microscope that uses a Zernike spatial mode sorter rather than the conventional pinhole. Our method can resolve general scenes with arbitrary geometry. For such spatially incoherent objects, we find that the resolution enhancement of sorter-based microscopy using the generalized RL algorithm is over 30% higher than that of the conventional confocal approach using the standard RL algorithm. The method is quite simple and can potentially be used for various applications, including fluorescence microscopy; it could also be combined with other super-resolution techniques for enhanced results. The second part of this thesis explores the potential of the Herriott cell as an image-preserving delay line. In quantum imaging, entangled photons are often utilized to take advantage of their spatial and temporal correlations. One photon ("the signal") interacts with the system to be measured and the other ("the herald") is used to trigger the detection of the signal. However, for a typical high-sensitivity camera, there is a delay on the order of 20 ns between the trigger and the sensor becoming active, during which the signal must be stored before it can be recorded.
An image-preserving delay line (IPDL) stores a photon without distorting its spatial structure or losing the spatial and temporal correlations. It is commonly built from a series of 4f systems that repeatedly image the light field. We propose to use the Herriott cell as an image-preserving delay line. We tested 10 of the lower-order HG modes and found that the cell preserved almost all of them with high fidelities (>90%), the only exceptions being the largest modes (HG03 and HG30) at the longest delay (7.9 m), where the fidelity was still >86%. In addition to these modes, the cell was also able to store general images. This application of the Herriott cell affords insight into miniaturizing IPDLs, which tend to occupy a significant amount of space. Overall, these two projects offer novel insights and applications for quantum imaging.
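The standard Richardson-Lucy algorithm that the first project generalizes iteratively refines an estimate by comparing the observed image with the current estimate blurred by the point-spread function. A minimal 1-D numpy sketch of the conventional algorithm (the PSF and signal are toy assumptions; the thesis's sorter-based variant replaces the pinhole measurement model entirely):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iters=100):
    """Conventional Richardson-Lucy deconvolution for a 1-D signal."""
    est = np.full_like(observed, observed.mean(), dtype=np.float64)
    psf_flip = psf[::-1]  # correlation kernel = flipped PSF
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Demo: a point source blurred by a 3-tap PSF is sharpened back toward a spike.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[10] = 1.0
observed = np.convolve(truth, psf, mode="same")
est = richardson_lucy_1d(observed, psf, iters=100)
```

The multiplicative update conserves total flux and keeps the estimate non-negative, which is why RL is a natural starting point for extension to photon-counting mode-sorter outputs.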
|
126 |
Compressive Transient Imaging. Sun, Qilin 04 1900
High-resolution transient/3D imaging technology is of great interest in both scientific research and commercial applications. Current transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing that achieves high-resolution transient imaging with a capture process of only a few seconds. A picosecond laser emits a series of equally spaced pulses, while the detection gate window of a synchronized SPAD camera is given a precise phase delay in each cycle. After capturing enough points, we can assemble the complete signal. By inserting a DMD into the system, we can modulate all frames of data with binary random patterns and later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing problem ill-conditioned, we designed and fabricated a diffractive microlens array. We also propose a new CS reconstruction algorithm that simultaneously denoises measurements corrupted by Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array, because it drastically reduces both the required number of measurements and the reconstruction time. Furthermore, it is difficult to reconstruct a high-resolution image with a single sensor, whereas an array only needs to reconstruct small patches from a few measurements.
In this thesis, we evaluate the reconstruction methods using both clean measurements and measurements corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm performs well even when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in both transient imaging and compressive sensing.
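The measurement model described above — binary DMD patterns multiplexing a sparse scene onto a detector — is the standard compressive sensing setup. A hedged sketch of generic sparse recovery via ISTA (proximal gradient with soft thresholding); this illustrates the CS formulation only, not the thesis's Poisson-aware reconstruction algorithm, and all dimensions are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scene with a few bright points (sparse in the pixel basis).
n, m, k = 64, 32, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0

# Binary random patterns, as a DMD would display; each row is one measurement.
A = rng.integers(0, 2, size=(m, n)).astype(np.float64)
y = A @ x_true

# ISTA: proximal gradient descent on 0.5*||y - Ax||^2 + lam*||x||_1.
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
lam = 0.05
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)              # gradient of the data term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
```

Under Poisson noise the quadratic data term above is the wrong likelihood, which is exactly the gap the thesis's joint denoise-and-reconstruct algorithm targets.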
|
127 |
FUZZY MARKOV RANDOM FIELDS FOR OPTICAL AND MICROWAVE REMOTE SENSING IMAGE ANALYSIS: SUPER RESOLUTION MAPPING (SRM) AND MULTISOURCE IMAGE CLASSIFICATION (MIC). Duminda Ranganath Welikanna 24 September 2014
Kyoto University / 0048 / New doctoral program / Doctor of Philosophy (Engineering) / Dissertation No. 18561 / Engineering Doctorate No. 3922 / 新制||工||1603 (University Library) / 31461 / Department of Civil and Earth Resources Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Masayuki Tamura; Associate Professor Junichi Susaki; Associate Professor Kenji Tanaka / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
128 |
YOYO and POPO Dye Photophysics for Super-Resolution Optical Nanoscopy. Pyle, Joseph R. 23 September 2019
No description available.
|
129 |
THE APPLICATION OF SINGLE-POINT EDGE-EXCITATION SUB-DIFFRACTION MICROSCOPY FOR THE STUDY OF MACROMOLECULAR TRANSPORT. Tingey, Mark, 0000-0002-0365-5585 January 2023
The development of super-resolution microscopy made it possible to surpass the diffraction limit of optical microscopy, enabling researchers to gain a nanometer-scale understanding of cellular structures. While many applications have benefited from standard super-resolution microscopy, gaps remained that made high-speed dynamic imaging in live cells impossible. To address this problem, single-point edge-excitation sub-diffraction (SPEED) microscopy was developed. This methodology enables nanometer-scale imaging of dynamic processes within live cells, the evaluation of subcellular structural information, the capacity to derive three-dimensional information from two-dimensional images within rotationally symmetric structures, and the interrogation of novel questions regarding the transport dynamics of macromolecules in a variety of cellular structures. Here, I describe the theory and method behind the current iteration of SPEED microscopy, which we developed and validated via Monte Carlo simulation. I further describe how we have extended SPEED microscopy to derive structural information within the nuclear pore complex, and how SPEED has been applied to evaluate the export kinetics of mRNA. / Biology
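The 2D-to-3D capability mentioned above rests on rotational symmetry: for a structure like the nuclear pore, radial position can be inferred from projected 2D coordinates alone. A toy Monte Carlo sketch of that idea — the radius, noise level, and moment-based estimator are illustrative assumptions, not the thesis's actual transformation algorithm or parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: molecules uniformly distributed on the wall of a rotationally
# symmetric channel (radius in nm, chosen arbitrarily for the sketch).
R_true, sigma, n = 25.0, 2.0, 200_000
theta = rng.uniform(0.0, 2.0 * np.pi, n)
# 2D projection onto one axis, plus Gaussian localization noise.
x = R_true * np.cos(theta) + rng.normal(0.0, sigma, n)

# For a uniform ring, E[x^2] = R^2/2 + sigma^2, so the 3D radius is
# recoverable from the 2D projected coordinates alone.
R_hat = np.sqrt(2.0 * (np.mean(x ** 2) - sigma ** 2))
```

The point of the sketch is only that symmetry turns a 2D histogram into a constraint on the 3D radial density, which is the principle behind the 2D-to-3D transformation.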
|
130 |
Generalized super-resolution of 4D Flow MRI: extending capabilities using ensemble learning. Hjalmarsson, Adam, Ericsson, Leon January 2023
4D Flow Magnetic Resonance Imaging (4D Flow MRI) is a novel non-invasive technique for imaging cardiovascular blood flow. However, when utilized as a stand-alone analysis method, 4D Flow MRI has certain limitations, including limited spatial resolution and noise artefacts, motivating the application of dedicated post-processing tools. Learning-based super-resolution (SR) has emerged as a promising utility for such work; more often than not, however, these efforts have been restricted to a narrowly defined cardiovascular domain. There has been limited exploration of how learned super-resolution models perform across multiple cardiovascular compartments, with the wide range of hemodynamic conditions covered by the cardiovasculature posing an apparent challenge. To address this, we investigate the generalizability of 4D Flow MRI super-resolution using ensemble learning. Our investigation employs ensemble learning techniques, specifically bagging and stacking, with a convolutional neural network (4DFlowNet) serving as the framework for all base learners. To assist in training, synthetic training data were extracted from patient-specific, physics-based velocity fields derived from computational fluid dynamics (CFD) simulations conducted in three key compartments: the aorta, the brain, and the heart. Varying base and ensemble networks were then trained on pairs of high-resolution and low-resolution synthetic data, with performance quantitatively assessed as a function of cardiovascular domain and specific architecture. To ensure clinical relevance, we also evaluated model performance on clinically acquired MRI data from the same three compartments. We find that ensemble models improve performance compared to isolated equivalents. Our ensemble model Stacking Block-3 improves the average in-silico error rate across domains by 16.22 points.
Additionally, performance on the aorta, brain, and heart improves by 2.66, 5.81, and 2.00 points, respectively. Employing both qualitative and quantitative evaluation methods on the in-vivo data, we find that ensemble models produce super-resolved velocity fields that are quantitatively coherent with ground-truth reference data and visually pleasing. To conclude, ensemble learning shows potential for generalizing 4D Flow MRI super-resolution across multiple cardiovascular compartments.
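Stacking, one of the two ensemble techniques used in the study above, trains a meta-learner on the outputs of base learners. A minimal numpy sketch with polynomial regressors standing in for the 4DFlowNet base learners (an illustrative assumption, not the thesis architecture) and a least-squares linear combination as the meta-learner:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D regression task standing in for per-voxel velocity prediction.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=x.size)

# Two base learners: low- and high-order polynomial fits to the same data.
p1 = np.polyval(np.polyfit(x, y, 1), x)
p2 = np.polyval(np.polyfit(x, y, 5), x)

# Stacking: a linear meta-learner combines base predictions by least squares.
P = np.column_stack([p1, p2])
w, *_ = np.linalg.lstsq(P, y, rcond=None)
stacked = P @ w

mse = lambda p: float(np.mean((y - p) ** 2))
# On the data used to fit the combination weights, the meta-fit can by
# construction do no worse than either base learner alone (weights (1, 0)
# and (0, 1) are always available to it).
```

In a real setting the meta-weights would be fit on held-out data, and the base learners would each be trained on a different cardiovascular compartment.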
|