71

Adaptive optics stimulated emission depletion microscope for thick sample imaging

Zdankowski, Piotr January 2018 (has links)
Over the past few decades, fluorescence microscopy has become the most widely used imaging technique in the life sciences. Unfortunately, all classical optical microscopy techniques share one limitation: their resolution is bounded by diffraction. Thanks to strong interest in the field, fluorescence microscopy is developing rapidly, with novel solutions surfacing regularly. The major breakthrough came with the advent of super-resolution microscopy techniques, which enable imaging well below the diffraction barrier and opened the era of nanoscopy. Among fluorescence super-resolution techniques, Stimulated Emission Depletion (STED) microscopy is particularly interesting, as it is a purely optical technique that requires no post-acquisition image processing. STED microscopy has been shown to resolve structures down to the molecular scale. However, super-resolution microscopy is not a cure-all; it too has limits. Super-resolution imaging of thick samples has proven particularly challenging: as the thickness of a biological structure increases, aberrations grow and the signal-to-noise ratio (SNR) decreases. This is even more evident in super-resolution imaging, as nanoscopic techniques are especially sensitive to aberrations and low SNR. The aim of this work is to propose and develop a 3D STED microscope that can image thick biological samples at nanoscopic resolution. To achieve this, adaptive optics (AO) is employed to correct aberrations, using an indirect wavefront-sensing approach. This thesis presents a custom-built 3D STED microscope with AO correction and the resulting images of thick samples at resolutions beyond the diffraction barrier. The developed STED microscope achieved a resolution of 60 nm laterally and 160 nm axially. Moreover, it enabled super-resolution imaging of thick, aberrating samples. HeLa cells, RPE-1 cells, and dopaminergic neurons differentiated from human iPS cells were imaged with the microscope. The results presented in this thesis include 3D STED imaging of thick biological samples and, particularly worth highlighting, 3D STED imaging at a depth of 80 μm, where the excitation and depletion beams must propagate through a thick layer of tissue. 3D STED images at such depth had not previously been reported.
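The diffraction barrier and its breaking by STED can be illustrated with the widely used square-root resolution law d ≈ λ / (2·NA·√(1 + I/Is)). A minimal sketch; the wavelength, numerical aperture, and saturation factor below are illustrative values, not necessarily those of the thesis's microscope:

```python
import math

def sted_resolution(wavelength_nm, na, saturation_factor):
    """Approximate STED lateral resolution via the square-root law:
    d = lambda / (2 * NA * sqrt(1 + I/Is)).
    saturation_factor is the ratio I/Is of depletion to saturation intensity."""
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + saturation_factor))

# Confocal limit (no depletion, I/Is = 0) vs. strong depletion
confocal = sted_resolution(592, 1.4, 0)    # diffraction-limited, ~211 nm
sted = sted_resolution(592, 1.4, 30)       # ~38 nm with I/Is = 30
print(f"confocal ≈ {confocal:.0f} nm, STED ≈ {sted:.0f} nm")
```

Increasing the depletion intensity shrinks the effective fluorescent spot without any post-processing, which is why STED is a purely optical super-resolution technique.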
72

Convex and non-convex optimizations for recovering structured data: algorithms and analysis

Cho, Myung 15 December 2017 (has links)
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of "Big Data", the amount of data is skyrocketing, and this overwhelms conventional techniques for solving large-scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large-scale optimization problems in super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy. Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions lie in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information on the locations of the delta functions and provided novel semidefinite programming formulations incorporating that information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optical microscopy and X-ray crystallography. By using the lifting method and introducing squared atomic norm minimization, we can achieve super-resolution using only low-frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion. Sparse signal processing: L1 minimization is well known for promoting sparse structures in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem known to be NP-hard. We proposed enumeration-based polynomial-time algorithms to provide performance bounds on NSC, and efficient algorithms to verify NSC precisely using the branch-and-bound method. Hypothesis testing: Recovering the statistical structure of random variables is important in applications such as cognitive radio. Our goal is to distinguish two different types of random variables among n >> 1 random variables. Testing each random variable one by one takes considerable time and effort, so we proposed hypothesis testing with mixed measurements to reduce sample complexity, and designed efficient algorithms to solve large-scale problems. Machine learning: When feature data are stored in a tree-structured network with communication delays, quickly finding an optimal solution to regularized loss minimization is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent method and its convergence analysis. Treatment planning: In Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to obtain optimal treatment plans quickly to enable clinical use. However, due to the degrees of freedom in RSBT, finding an optimal treatment plan is difficult. For this, we designed a first-order dose optimization method based on the alternating direction method of multipliers, reducing execution time by a factor of about 18 compared to previous work.
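The L1-minimization recovery the abstract refers to can be sketched as the standard basis-pursuit linear program, splitting x into nonnegative parts u and v with x = u − v. A toy illustration with a random Gaussian sensing matrix, not one of the thesis's algorithms; dimensions and the sparse support are made up:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Solve min ||x||_1 s.t. Ax = b as an LP: with x = u - v and
    u, v >= 0, minimize sum(u) + sum(v) subject to [A, -A][u; v] = b."""
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))       # 40 measurements of a length-100 signal
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]   # a 3-sparse ground truth
x_hat = l1_recover(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))    # max deviation from the ground truth
```

With far fewer measurements than unknowns, the L1 objective recovers the sparse signal exactly when the sensing matrix satisfies conditions such as the NSC discussed above.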
73

Cargo Transport By Myosin Va Molecular Motors Within Three-Dimensional In Vitro Models Of The Intracellular Actin Cytoskeletal Network

Lombardo, Andrew Thomas 01 January 2018 (has links)
Intracellular cargo transport involves the movement of critical cellular components (e.g. vesicles, organelles, mRNA, chromosomes) along cytoskeletal tracks by tiny molecular motors. Myosin Va motors have been demonstrated to play a vital role in the transport of cargos destined for the cell membrane by navigating their cargos through the three-dimensional actin networks of the cell. Transport of cargo through these networks presents many challenges, including directional and physical obstacles that teams of myosin Va motors bound to a single cargo must overcome. Specifically, myosin Va motors are presented with numerous actin-actin intersections and dense networks of filaments that can act as physical barriers to transport. Due to the complexity of studying myosin Va cargo transport in cells, much effort has been focused on the in vitro observation and analysis of myosin Va transport along single actin filaments or simple actin cytoskeletal models. However, these model systems often rely on non-physiological cargos (e.g. beads, quantum dots) and two-dimensional arrangements of actin attached to glass surfaces. Interestingly, a disconnect exists between the transport of cargo in these simple model systems and studies of myosin Va transport on suspended 3D actin arrangements or cellular networks, which show longer run lengths, increased velocities, and straighter, more directed trajectories. One explanation for this discrepancy is that the cell may use the fluidity of the cargo surface, the recruitment of myosin Va motor teams, and the 3D geometry of the actin to finely tune the transport of intracellular cargo depending on cellular need. To understand how myosin Va motors transport their cargo through 3D networks of actin, we investigated myosin Va motor ensembles transporting fluorescent 350 nm lipid-bilayer cargo through arrangements of suspended 3D actin filaments. This was accomplished using single-molecule fluorescence imaging, three-dimensional super-resolution Stochastic Optical Reconstruction Microscopy (STORM), optical tweezers, and in silico modeling. We found that when moving along 3D actin filaments, myosin motors could be recruited from across the fluid lipid cargo's surface to the filaments, which enabled dynamic teams to form and explore the filament's full binding landscape. When navigating 3D actin-actin intersections, these teams were capable of maneuvering their cargo through the intersection in a way that encouraged the vesicles to continue straight rather than switch filaments and turn at the intersection. We hypothesized that this finding may be the source of the relatively straight, directed runs of myosin Va-bound cargo observed in living cells. To test this, we designed 3D actin networks in which the vesicles interacted with 2-6 actin filaments simultaneously. Actin forms polarized filaments which, in cells, generally have their plus-ends facing the exterior of the cell, the same direction in which myosin Va walks. We found that to maintain straight, directed trajectories and not become stationary within the network, vesicles needed to move along filaments with a bias in their polarity. This allowed cargo-bound motors to align their motion along the polarized network, producing productive motion despite physical and directional obstacles. Together, this work demonstrates that the physical properties of the cargo, the geometric arrangement of the actin, and the mechanical properties of the motor are all critical aspects of a robust myosin Va transport system.
74

Off-the-grid compressive imaging

Ongie, Gregory John 01 August 2016 (has links)
In many practical imaging scenarios, including computed tomography and magnetic resonance imaging (MRI), the goal is to reconstruct an image from a few of its Fourier-domain samples. Many state-of-the-art reconstruction techniques, such as total variation minimization, focus on discrete "on-the-grid" modelling of the problem in both the spatial and Fourier domains. While such discrete-to-discrete models allow for fast algorithms, they can also result in sub-optimal sampling rates and reconstruction artifacts due to model mismatch. Instead, this thesis presents a framework for "off-the-grid", i.e. continuous-domain, recovery of piecewise smooth signals from an optimal number of Fourier samples. The main idea is to model the edge set of the image as the level-set curve of a continuous-domain band-limited function. Sampling guarantees can be derived for this framework by investigating the algebraic geometry of these curves. This model is put into a robust and efficient optimization framework by posing signal recovery entirely in the Fourier domain as a structured low-rank (SLR) matrix completion problem. An efficient algorithm for this problem is derived, which is an order of magnitude faster than previous approaches for SLR matrix completion. This SLR approach based on off-the-grid modelling shows significant improvement over standard discrete methods in the context of undersampled MRI reconstruction.
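Off-the-grid recovery of a stream of Diracs from uniform Fourier samples can be illustrated with the classical annihilating-filter (Prony) method, a simpler relative of the structured low-rank approach described above. A noiseless toy sketch, not the thesis's SLR algorithm; the Dirac locations and amplitudes are made up for illustration:

```python
import numpy as np

def prony_locations(samples, k):
    """Estimate k Dirac locations t_j in [0,1) from uniform Fourier samples
    y[m] = sum_j a_j * exp(-2j*pi*m*t_j), via the annihilating filter."""
    y = np.asarray(samples, dtype=complex)
    m = len(y)
    # Convolution system T h = 0: a length-(k+1) filter h annihilates y
    T = np.array([[y[k + i - l] for l in range(k + 1)]
                  for i in range(m - k)])
    h = np.linalg.svd(T)[2][-1].conj()   # null vector of T
    roots = np.roots(h)                  # polynomial roots are exp(-2j*pi*t_j)
    return np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Three Diracs on the continuous interval [0,1), observed via 8 Fourier samples
t_true = np.array([0.1, 0.4, 0.75])
amps = np.array([1.0, 2.0, 1.5])
ms = np.arange(8)
y = (amps * np.exp(-2j * np.pi * np.outer(ms, t_true))).sum(axis=1)
print(prony_locations(y, 3))             # ≈ [0.1, 0.4, 0.75]
```

The key point mirrors the thesis's framework: the unknown locations live in the continuum, yet a finite number of Fourier samples suffices because the samples satisfy a low-rank/annihilation structure.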
75

  • New approaches in super-resolution microscopy

Yang, Bin 13 April 2015 (has links)
The first technique aims at improving the imaging speed of super-resolution microscopy at room temperature for biological applications. As a scanning technique, STED (Stimulated Emission Depletion) microscopy needs parallelization for fast wide-field imaging. Using well-designed optical lattices for depletion together with wide-field excitation and a fast camera for detection, we achieve large parallelization of STED microscopy. Wide-field super-resolved images are acquired by scanning over a single unit cell of the optical lattice, which can be as small as 290 nm × 290 nm. Lattice-STED imaging is demonstrated with a resolution down to 70 nm at 12.5 frames per second. The second technique extends super-resolution microscopy to liquid-helium temperature for applications in quantum technologies. Optical resolution of solid-state single quantum emitters at the nanometer scale is a challenging step towards the control of delocalized states formed by strongly and coherently interacting emitters. ESSat (Excited State Saturation) microscopy, operating at cryogenic temperatures, is based on optical saturation of the excited state of single fluorescent molecules with a doughnut-shaped beam. Sub-10 nm resolution is achieved with extremely low excitation intensities, more than a million times lower than those used in room-temperature STED microscopy. Compared to super-localisation approaches, our technique offers a unique opportunity to super-resolve single molecules having overlapping optical resonance frequencies, paving the way to the study of coherent interactions between single emitters and to the manipulation of their degree of entanglement.
76

  • Restoration super-resolution of image sequences: application to TV archive documents

Abboud, Feriel 15 December 2017 (has links)
The last century has witnessed an explosion in the amount of video data held by organizations such as the French National Audiovisual Institute, whose mission is to preserve and promote the content of French broadcast programs. Beyond their cultural importance, the value of these records is increased by commercial re-exploitation through recent visual media. However, the perceived quality of the old data fails to satisfy current public demand. The purpose of this thesis is to propose new methods for restoring video sequences supplied from television archive documents, using modern optimization techniques with proven convergence properties. In a large number of restoration problems, the underlying optimization problem involves several functions that may be convex but not necessarily smooth. In such instances, the proximity operator, a fundamental concept in convex analysis, appears as the most appropriate tool. These functions may also involve arbitrary linear operators that need to be inverted in a number of optimization algorithms. In this spirit, we developed a new primal-dual algorithm for computing non-explicit proximity operators based on forward-backward iterations. The proposed algorithm is accelerated thanks to the introduction of a preconditioning strategy and a block-coordinate approach in which, at each iteration, only a "block" of data is selected and processed according to a quasi-cyclic rule. This approach is well suited to large-scale problems since it reduces memory requirements and accelerates convergence, as illustrated by experiments in deconvolution and deinterlacing of video sequences. Afterwards, close attention is paid to the study of distributed algorithms from both theoretical and practical viewpoints. We proposed an asynchronous extension of the dual forward-backward algorithm that can be efficiently implemented on a multi-core architecture. In our distributed scheme, the primal and dual variables are considered private and are spread over multiple computing units that operate independently of one another. Nevertheless, communication between these units following a predefined strategy is required in order to ensure convergence toward a consensus solution. We also address the problem of blind video deconvolution, which consists in inferring, from an input degraded video sequence, both the blur filter and a sharp video sequence. A solution can be reached by resorting to nonconvex optimization methods that alternately estimate the unknown video and the unknown kernel. In this context, we proposed a new blind deconvolution method that allows us to implement numerous convex and nonconvex regularization strategies, which are widely employed in signal and image processing.
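The proximity operator at the heart of these methods has a closed form for simple functions; for the L1 norm it is soft-thresholding. Below is a minimal sketch of forward-backward iterations on a toy sparse recovery problem. This is illustrative only, not the block-coordinate primal-dual algorithm of the thesis; the problem sizes and regularization weight are arbitrary:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(A, b, lam, n_iter=1000):
    """Forward-backward splitting for min_x 0.5||Ax - b||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth term, then a prox (backward)
    step on the nonsmooth term."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_l1(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[[4, 20, 41]] = [1.5, -2.0, 1.0]           # sparse ground truth
x_hat = forward_backward(A, A @ x0, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # support of the estimate
```

When the nonsmooth term is composed with a linear operator, its prox is no longer explicit, which is exactly the situation the thesis's dual forward-backward iterations are designed to handle.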
77

Developing A Model To Increase Quality Of Dem

Pasaogullari, Onur 01 February 2013 (has links)
Low-resolution (LR) grid Digital Elevation Models (DEMs) are the inputs to a multi-frame super-resolution (MFSR) algorithm that produces a high-resolution (HR) grid DEM. In digital-image MFSR, LR image pairs carrying non-redundant information are a necessity. Using the analogy between digital images and grid DEMs, it is shown that, although the LR grid DEMs have a single source, they carry non-redundant information and can serve as inputs to MFSR. The quality of a grid DEM can thus be increased using MFSR techniques. The level of spatial enhancement is directly related to the amount of non-redundant information that the LR grid DEM pairs carry. Super-resolution techniques are seen to have the potential to increase the accuracy of grid DEMs from limited sampling.
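The idea that several LR grids with sub-pixel offsets carry non-redundant information can be illustrated with a 1-D shift-and-add sketch. This is a toy illustration assuming known integer offsets on the HR grid, not the thesis's MFSR algorithm:

```python
import numpy as np

def shift_and_add(lr_frames, shifts, factor):
    """Interleave LR frames with known integer sub-pixel shifts (expressed
    in HR-grid units) onto a grid `factor` times finer. 1-D toy sketch."""
    n = len(lr_frames[0]) * factor
    hr = np.zeros(n)
    count = np.zeros(n)
    for frame, s in zip(lr_frames, shifts):
        idx = np.arange(len(frame)) * factor + s   # HR positions of this frame
        hr[idx] += frame
        count[idx] += 1
    mask = count > 0
    hr[mask] /= count[mask]        # average where several frames overlap
    return hr

# Two LR samplings of the same profile, offset by one HR pixel pitch
hr_true = np.sin(np.linspace(0, np.pi, 8))
lr0, lr1 = hr_true[0::2], hr_true[1::2]
print(shift_and_add([lr0, lr1], shifts=[0, 1], factor=2))  # recovers hr_true
```

Each LR frame alone is ambiguous at the fine scale; together, their complementary (non-redundant) samples fill in the HR grid, which is the same principle the thesis exploits for grid DEMs.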
78

Valid motion estimation for super-resolution image reconstruction

Santoro, Michael 14 August 2012 (has links)
In this thesis, a block-based motion estimation algorithm suitable for Super-Resolution (SR) image reconstruction is introduced. The motion estimation problem is formulated as an energy minimization problem that consists of both a data and regularization term. To handle cases when motion estimation fails, a block-based validity method is introduced, and is shown to outperform all other validity methods in the literature in terms of hybrid de-interlacing. By combining the validity metric into the energy minimization framework, it is shown that 1) the motion vector error is made less sensitive to block size, 2) a more uniform distribution of motion-compensated blocks results, and 3) the overall motion vector error is reduced. The final motion estimation algorithm is shown to outperform several state-of-the-art motion estimation algorithms in terms of both endpoint error and interpolation error, and is one of the fastest algorithms in the Middlebury benchmark. With the new motion estimation algorithm and validity metric, it is shown that artifacts are virtually eliminated from the POCS-based reconstruction of the high-resolution image.
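The data term of such a block-based estimator can be sketched with exhaustive block matching under a sum-of-absolute-differences (SAD) cost. This minimal illustration covers only the data term; the thesis's energy also includes a regularization term and the validity metric, which are omitted here:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block of `cur`, find the
    displacement into `ref` (within +/-search pixels) minimizing the SAD."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - tgt).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

rng = np.random.default_rng(2)
ref = rng.standard_normal((24, 24))
cur = np.roll(ref, (2, 3), axis=(0, 1))    # globally shift the frame by (2, 3)
print(block_match(ref, cur)[(8, 8)])       # interior block: (-2, -3)
```

In the thesis, this data cost is embedded in an energy minimization with a regularization term, and a validity metric flags blocks where the minimization still fails (occlusions, low texture) before the vectors are used for super-resolution reconstruction.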
79

An Iterative MPEG Super-Resolution with an Outer Approximation of Framewise Quantization Constraint

SAKANIWA, Kohichi, YAMADA, Isao, ONO, Toshiyuki, HASEGAWA, Hiroshi 01 September 2005 (has links)
No description available.
80

An Edge-Preserving Super-Precision for Simultaneous Enhancement of Spacial and Grayscale Resolutions

SAKANIWA, Kohichi, YAMADA, Isao, OHTSUKA, Toshinori, HASEGAWA, Hiroshi 01 February 2008 (has links)
No description available.
