
Appearance Modelling for 4D Representations / Modélisation de l'apparence des représentations 4D

Tsiminaki, Vagia 14 December 2016 (has links)
Capturing spatio-temporal models (4D modelling) from real-world imagery has received growing interest in recent years, urged by the increasing demands of real-world applications, from film post-production and sports science to social studies, entertainment and advertising, and by the tremendous amount of easily accessible image data. The general objective is to produce realistic representations of the world from captured video sequences. Although geometric modelling has already reached a high level of maturity, the appearance aspect has not been fully explored. This thesis addresses the problem of appearance modelling for realistic spatio-temporal representations. We propose a view-independent, high-resolution appearance representation that successfully encodes the high visual variability of objects under various movements.

First, we introduce a common appearance space to express all the available visual information from the captured images. In this space we define the representation of the global appearance of the subject. We then introduce a linear image formation model to simulate the capturing process and to express the multi-camera observations as different realizations of the common appearance. Identifying that the principle of super-resolution also governs our multi-view scenario, we extend the image generative model to accommodate it, and use Bayesian inference to solve for the super-resolved common appearance.

Second, we propose a temporally coherent appearance representation. We extend the image formation model to generate images of the subject captured in a small time interval. Our starting point is the observation that the appearance of the subject does not change dramatically within a predefined small time interval, so the visual information from each view and each frame corresponds to the same appearance representation. We use Bayesian inference to exploit the redundant as well as the hidden non-redundant visual information across time, in order to obtain an appearance representation with fine details. The resulting appearance enables accurate renderings from novel viewpoints and at different time instants within the interval.

Third, we leverage the interdependency of geometry and photometry and use it to estimate appearance and geometry jointly. We show that joint estimation enhances the geometry globally, which in turn leads to a significant appearance improvement.

Finally, to further encode the dynamic appearance variability of objects that undergo several movements, we cast appearance modelling as a dimensionality-reduction problem. We propose a view-independent representation that builds on PCA and decomposes the underlying appearance variability into eigen textures and eigen warps. The proposed framework is shown to accurately reproduce appearances with compact representations and to resolve appearance interpolation and completion tasks.
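The PCA-based decomposition of appearance variability into eigen textures can be sketched numerically. This is an illustrative sketch, not code from the thesis: the texture stack, array shapes and function names are all assumptions, and the eigen-warp part of the representation is omitted for brevity.

```python
import numpy as np

def eigen_textures(textures, n_components):
    """Decompose a stack of flattened texture maps into a mean texture
    plus leading principal components ("eigen textures").

    textures: (n_frames, n_pixels) array, one flattened texture per frame.
    """
    mean = textures.mean(axis=0)
    centered = textures - mean
    # SVD of the centered stack: the rows of Vt are the eigen textures
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]            # (n_components, n_pixels)
    coeffs = centered @ basis.T          # per-frame coefficients
    return mean, basis, coeffs

def reconstruct(mean, basis, coeffs):
    """Rebuild approximate textures from the compact representation."""
    return mean + coeffs @ basis

# Toy data standing in for per-frame appearance maps
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 64))
mean, basis, coeffs = eigen_textures(frames, n_components=5)
approx = reconstruct(mean, basis, coeffs)
```

Keeping only a few components gives the compact, view-independent representation the abstract describes; interpolating or in-painting the low-dimensional coefficients is then one natural way to pose appearance interpolation and completion.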

Mécanismes moléculaires d’activation des intégrines par la kindline-2 lors de l’adhésion cellulaire / Molecular mechanisms of integrin activation by kindlin-2 during cell adhesion

Orré, Thomas 29 November 2017 (has links)
Focal adhesions (FAs) are adhesive structures linking the cell to the extracellular matrix (ECM) and constitute molecular platforms for biochemical and mechanical signals controlling cell adhesion, migration, differentiation and survival. Integrin transmembrane receptors are core components of FAs, connecting the ECM to the actin cytoskeleton. During the early 2000s, the intracellular protein talin, which directly binds to the cytoplasmic tail of β-integrins, was considered the main integrin activator. Nevertheless, it has since been shown that kindlin, another intracellular protein that binds to β-integrins, is also a critical integrin activator. In fact, several studies have shown that kindlin and talin play complementary and synergistic roles during integrin activation, but the molecular basis of these phenomena remains to be determined. Moreover, most studies on the role of kindlin during integrin activation and cell adhesion have been performed with suspended cells and/or with the platelet integrin αIIbβ3, so the regulation of these processes by kindlin in adherent cells is still poorly understood. Here we combined PALM microscopy with single-protein tracking to decipher the role and behavior of kindlin during key molecular events occurring outside and inside FAs at the plasma membrane and leading to integrin activation, as we have done previously for talin (Rossier et al., 2012). We found that β1- and β3-integrins carrying a point mutation inhibiting binding to kindlin show reduced immobilization inside FAs. We also found that kindlin-2, which is enriched inside FAs, displayed free diffusion at the plasma membrane both outside and inside FAs. This constitutes a major difference with talin, which, at the plasma membrane, is observed almost exclusively in FAs, where it is immobile, showing that talin is recruited into FAs directly from the cytosol without lateral diffusion along the plasma membrane (Rossier et al., 2012). To determine the molecular basis of kindlin membrane recruitment and diffusion, we used a kindlin variant known to decrease binding to integrins (kindlin-2-QW614/615AA). This mutant displayed increased membrane diffusion, suggesting that kindlin-2 can diffuse freely at the plasma membrane without interacting with integrins. Moreover, the kindlin-2-QW mutant showed decreased immobilization inside FAs, showing that part of kindlin immobilization depends on interaction with integrins and suggesting that kindlin can form an immobile complex with integrins inside FAs. Deletion of the kindlin pleckstrin homology (PH) domain strongly reduced the membrane recruitment and diffusion of kindlin. We assessed the functional role of kindlin membrane recruitment and diffusion by re-expressing different kindlin-2 mutants in kindlin-1/kindlin-2 double-knockout cells. Those experiments demonstrated that kindlin-2 membrane recruitment and diffusion are crucial for integrin activation during cell spreading and favor adhesion formation. This suggests that kindlin uses a different route from talin to reach integrins and trigger their activation, providing a possible molecular basis for their complementarity during integrin activation.

Sub-Nyquist Sampling and Super-Resolution Imaging

Mulleti, Satish January 2017 (has links) (PDF)
The Shannon sampling framework is widely used for the discrete representation of analog bandlimited signals, starting from samples taken at the Nyquist rate. In many practical applications, signals are not bandlimited. In order to accommodate such signals within the Shannon-Nyquist framework, one typically passes the signal through an anti-aliasing filter, which essentially performs bandlimiting. In applications such as radar, sonar, ultrasound imaging, optical coherence tomography, multiband signal communication, wideband spectrum sensing, etc., the signals to be sampled have a certain structure, which could manifest in one of the following forms: (i) sparsity or parsimony in a certain basis; (ii) shift-invariant representation; (iii) multiband spectrum; or (iv) the finite-rate-of-innovation property. By using such structure as a prior, one could devise efficient sampling strategies that operate at sub-Nyquist rates.

In this Ph.D. thesis, we consider the problem of sampling and reconstruction of finite-rate-of-innovation (FRI) signals, which fall into one of two classes: (i) sums of weighted and time-shifted (SWTS) pulses; and (ii) sums of weighted exponentials (SWE). FRI signals are not necessarily bandlimited, but they are specified by a finite number of free parameters per unit time interval. Hence, the FRI reconstruction problem can be solved by estimating the parameters from measurements of the signal. Typically, parameter estimation is done using high-resolution spectral estimation (HRSE) techniques such as the annihilating filter, the matrix pencil method, estimation of signal parameters via rotational invariance techniques (ESPRIT), etc. The sampling issues include the design of the sampling kernel and the choice of the sampling-grid structure. Following a frequency-domain reconstruction approach, we propose a novel technique to design compactly supported sampling kernels. The key idea is to cancel aliasing at a certain set of uniformly spaced frequencies and to specify the rest of the frequency response such that the kernel satisfies the Paley-Wiener criterion for compactly supported functions. To assess robustness in the presence of noise, we consider a particular class of the proposed kernels whose impulse response has the form of a sum of modulated splines (SMS). For both continuous-time and digital noise, we show that the reconstruction accuracy is improved by 5 to 25 dB by using the SMS kernel compared with state-of-the-art compactly supported kernels. Apart from noise robustness, the SMS kernel also has a polynomial-exponential reproducing property, where the exponents are harmonically related. An interesting feature of the SMS kernel, in contrast with E-splines, is that its support is independent of the number of exponentials.

In a typical SWTS signal reconstruction mechanism, the SWTS signal is first transformed to an SWE signal, followed by uniform sampling, and then discrete-domain annihilation is applied for parameter estimation. In this thesis, we develop a continuous-time annihilation approach using the shift operator for estimating the parameters of SWE signals. Instead of using uniform-sampling-based HRSE techniques, operator-based annihilation allows us to estimate parameters from structured non-uniform samples (SNS), and gives more accurate parameter estimates.

On the application front, we first consider the problem of curve fitting and curve completion, specifically, fitting ellipses to uniform or non-uniform samples. In general, the ellipse-fitting problem is solved by minimizing distance metrics such as the algebraic distance or the geometric distance. It is known that when the samples are measured from an incomplete ellipse, such fitting techniques tend to estimate biased ellipse parameters, and the estimated ellipses are smaller than the ground truth. By taking into account the FRI property of an ellipse, we show how accurate ellipse fitting can be performed even on data measured from a partial ellipse. Our fitting technique first estimates the underlying sampling rate using the annihilating filter and then carries out least-squares regression to estimate the ellipse parameters. The estimated ellipses have less bias compared with the state-of-the-art methods, and the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and find that the proposed method is able to localize the iris/pupil more accurately than conventional methods. In a related application, we demonstrate curve completion for partial ellipses drawn on a touch-screen tablet.

We also applied the FRI principle to imaging applications such as frequency-domain optical coherence tomography (FDOCT) and nuclear magnetic resonance (NMR) spectroscopy. In these applications, the resolution is limited by the uncertainty principle, which, in turn, is limited by the number of measurements. By establishing the FRI property of the measurements, we show that one could attain super-resolved tomograms and NMR spectra using the same or a smaller number of samples compared with classical Fourier-based techniques. In the case of FDOCT, by assuming a piecewise-constant refractive index of the specimen, we show that the measurements have the SWE form and that super-resolved tomograms can be achieved using an SNS-based reconstruction technique. To demonstrate clinical relevance, we consider FDOCT measurements obtained from the retinal pigment epithelium (RPE) and photoreceptor inner/outer segments (IS/OS) of the retina, and show that the proposed method is able to resolve the RPE and IS/OS layers using only 40% of the available samples. In the context of NMR spectroscopy, the measured signal, or free induction decay (FID), can be modelled as an SWE signal. Due to the exponential decay, FIDs are non-stationary; hence, one cannot directly apply autocorrelation-based methods such as ESPRIT. We develop DEESPRIT, a counterpart of ESPRIT for decaying exponentials. We consider FID measurements taken from an amino-acid mixture and show that the proposed method is able to resolve two closely spaced frequencies using only 40% of the measurements.

In summary, this thesis focuses on various aspects of sub-Nyquist sampling and demonstrates concrete applications to super-resolution imaging.
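The annihilating-filter step at the heart of SWE parameter estimation can be illustrated with a minimal numerical sketch. The annihilating filter is the standard FRI tool the abstract names, but this particular implementation (an SVD null-space solve on a noiseless two-exponential toy signal, with made-up function names) is our illustration, not the thesis code; noisy data would call for more samples and a total-least-squares variant.

```python
import numpy as np

def annihilating_filter(x, K):
    """Recover the K exponential bases u_k of x[n] = sum_k a_k u_k^n.

    Builds the Toeplitz annihilation system and takes its null vector:
    sum_{i=0}^{K} h[i] x[n-i] = 0 for all n >= K, and the roots of the
    polynomial with coefficients h are the bases u_k.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Row m encodes the annihilation equation at n = K + m
    A = np.array([x[K + m - np.arange(K + 1)] for m in range(N - K)])
    # The filter h spans the (1-dimensional) null space of A; take the
    # right singular vector of the smallest singular value
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()
    # Roots of h[0] z^K + ... + h[K] are the exponential bases
    return np.roots(h)

# Toy signal: two unit-modulus exponentials with angular frequencies
# 0.3 and 1.1 rad/sample (values chosen arbitrarily for illustration)
u_true = np.array([np.exp(1j * 0.3), np.exp(1j * 1.1)])
n = np.arange(8)
x = 2.0 * u_true[0] ** n + 0.5 * u_true[1] ** n
u_est = annihilating_filter(x, K=2)
```

Once the bases u_k are known, the weights a_k follow from a linear least-squares fit, which is the generic two-step structure underlying the HRSE methods listed above.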

2D and 3D optical nanoscopy of single molecules at cryogenic temperatures / Nanoscopie optique 2D et 3D de molécules uniques à températures cryogéniques

Baby, Reenu 17 July 2018 (has links)
In this thesis, we present the development of a cryogenic super-resolution optical nanoscopy that can resolve molecules at nanometric distances, called Excited State Saturation (ESSat) microscopy. ESSat microscopy is a scanning confocal imaging technique based on the optical saturation of the zero-phonon line of a single fluorescent molecule. It uses a patterned illumination beam that contains a zero-intensity region at the focus of the microscope objective, with a large intensity gradient around it. We achieved a sub-10 nm resolution in the lateral direction and 22 nm resolution in the axial direction with extremely low excitation intensities of a few tens of kW cm-2, five orders of magnitude lower than those used at room temperature in nanoscopy methods based on stimulated-emission depletion. Compared to other super-resolution imaging techniques, such as STED and RESOLFT, our technique offers a unique opportunity to super-resolve single molecules with overlapping optical resonances that are much closer than the diffraction limit. In addition, it is possible to determine the orientation of molecular dipoles directly from the fluorescence ESSat images. Since coherent dipole-dipole coupling interactions between single quantum emitters have a very high coupling efficiency at short distances much smaller than the diffraction limit, it is important to resolve emitters well below it. ESSat microscopy thus paves the way to disclosing the rich spatial and frequential signatures of such coupled systems and to manipulating their degree of entanglement.

Organisation spatiale de LFA-1 à la synapse immunologique des lymphocytes T cytotoxiques : approches de microscopie de super-résolution / Spatial organization of LFA-1 at the immunological synapse of cytotoxic T lymphocytes: super-resolution microscopy approaches

Houmadi, Raïssa 04 October 2017 (has links)
LFA-1 (Lymphocyte Function-Associated Antigen-1) is a central integrin in the function of cytotoxic CD8+ T lymphocytes, since it allows the formation of the immunological synapse with target cells. The regulation of this cellular interaction is controlled by the quality of the engagement of LFA-1 with its ligand, ICAM-1 (Intercellular Adhesion Molecule-1). A key support for the spatio-temporal control of LFA-1 activation is the cortical actin cytoskeleton, in which LFA-1 is anchored by its intracellular domain. How LFA-1 is organized at the immunological synapse, and how the coordination between LFA-1 and the actin cytoskeleton operates precisely within cytotoxic CD8+ T lymphocytes, are unresolved questions. The aim of this thesis project was to study the precise organization of the LFA-1 distribution at the immunological synapse in relation to the cortical actin underlying the contact between cytotoxic T lymphocytes and target cells. For this purpose, super-resolution microscopy approaches, including SIM (Structured Illumination Microscopy), dSTORM (direct STochastic Optical Reconstruction Microscopy) and TIRF (Total Internal Reflection Fluorescence) microscopy, were developed. They were applied to untransformed human T lymphocytes derived from healthy donors and from patients with a congenital immunodeficiency, the Wiskott-Aldrich Syndrome (WAS), characterized by actin cytoskeleton remodeling defects at the immunological synapse. The dSTORM approach, in TIRF mode, revealed that activated LFA-1 forms a radial belt composed of hundreds of nanoclusters at the synapse. The assembly of this belt depends on the integrity of the actin cytoskeleton, and notably on the WASP protein, as shown by the impairment of this structure in T lymphocytes derived from WAS patients. The multi-color SIM approach allowed us to investigate the role of the LFA-1 belt in the confinement of lytic granules. Furthermore, combining staining with antibodies specific to distinct LFA-1 conformation states shows that LFA-1 activation is a digital process, whereby nanoclusters operate as units in which LFA-1 activation follows an on/off rule. In conclusion, this PhD work exemplifies the great asset of super-resolution microscopy approaches to reveal key activation mechanisms in T lymphocytes and to explore the nature of the defects causing pathological dysregulation of the function of these cells.

The enigma of imaging in the Maxwell fisheye medium

Sahebdivan, Sahar January 2016 (has links)
The resolution of optical instruments is normally limited by the wave nature of light. Circumventing this limit, known as the diffraction limit of imaging, is of tremendous practical importance for modern science and technology. One method, super-resolved fluorescence microscopy was distinguished with the Nobel Prize in Chemistry in 2014, but there is plenty of room for alternatives and complementary methods such as the pioneering work of Prof. J. Pendry on the perfect lens based on negative refraction that started the entire research area of metamaterials. In this thesis, we have used analytical techniques to solve several important challenges that have risen in the discussion of the microwave experimental demonstration of absolute optical instruments and the controversy surrounding perfect imaging. Attempts to overcome or circumvent Abbe's diffraction limit of optical imaging, have traditionally been greeted with controversy. In this thesis, we have investigated the role of interacting sources and detectors in perfect imaging. We have established limitations and prospects that arise from interactions and resonances inside the lens. The crucial role of detection becomes clear in Feynman's argument against the diffraction limit: “as Maxwell's electromagnetism is invariant upon time reversal, the electromagnetic wave emitted from a point source may be reversed and focused into a point with point-like precision, not limited by diffraction.” However, for this, the entire emission process must be reversed, including the source: A point drain must sit at the focal position, in place of the point source, otherwise, without getting absorbed at the detector, the focused wave will rebound and the superposition of the focusing and the rebounding wave will produce a diffraction-limited spot. The time-reversed source, the drain, is the detector which taking the image of the source. 
In 2011-2012, experiments with microwaves have confirmed the role of detection in perfect focusing. The emitted radiation was actively time-reversed and focused back at the point of emission, where, the time-reversed of the source sits. Absorption in the drain localizes the radiation with a precision much better than the diffraction limit. Absolute optical instruments may perform the time reversal of the field with perfectly passive materials and send the reversed wave to a different spatial position than the source. Perfect imaging with absolute optical instruments is defected by a restriction: so far it has only worked for a single–source single–drain configuration and near the resonance frequencies of the device. In chapters 6 and 7 of the thesis, we have investigated the imaging properties of mutually interacting detectors. We found that an array of detectors can image a point source with arbitrary precision. However, for this, the radiation has to be at resonance. Our analysis has become possible thanks to a theoretical model for mutually interacting sources and drains we developed after considerable work and several failed attempts. Modelling such sources and drains analytically had been a major unsolved problem, full numerical simulations have been difficult due to the large difference in the scales involved (the field localization near the sources and drains versus the wave propagation in the device). In our opinion, nobody was able to reproduce reliably the experiments, because of the numerical complexity involved. Our analytic theory draws from a simple, 1–dimensional model we developed in collaboration with Tomas Tyc (Masaryk University) and Alex Kogan (Weizmann Institute). This model was the first to explain the data of experiment, characteristic dips of the transmission of displaced drains, which establishes the grounds for the realistic super-resolution of absolute optical instruments. 
As the next step, in Chapter 7 we developed a Lagrangian theory that agrees with this simple and successful one-dimensional model. Inspired by the Lagrangian of the electromagnetic field interacting with a current, we constructed a Lagrangian with the advantage of being extendable to higher dimensions, in our case two, where imaging takes place. Our Lagrangian theory represents an idealized, device-independent model that does not rely on numerical simulations.

To conclude, Feynman objected to Abbe's diffraction limit, arguing that since Maxwell's electromagnetism is time-reversal invariant, the radiation from a point source may very well become focused in a point drain. Absolute optical instruments such as the Maxwell Fisheye can perform this time reversal and may image with perfect resolution. However, the sources and drains in previous experiments were interacting with each other, as if Feynman's drain would act back on the source in the past. Different ways of detection might circumvent this feature. The mutual interaction of sources and drains does ruin some of the promising features of perfect imaging: arrays of sources are not necessarily resolved with arrays of detectors. But it also opens interesting new prospects for scanning near fields from far-field distances.

To summarise the novel ideas of the thesis:
• We have discovered and understood the problems with the initial experimental demonstration of the Maxwell Fisheye.
• We have solved the long-standing challenge of modelling mutually interacting sources and drains.
• We now understand the imaging properties of the Maxwell Fisheye in the wave regime.

Let us add one final thought. It has taken the scientific community a long time of investigation and discussion to understand the different ingredients of the diffraction limit. Abbe's limit was initially attributed to the optical device only.
Rather, all three processes of imaging, namely illumination, transfer and detection, make an equal contribution to the total diffraction limit. Therefore, we think that violating the diffraction limit requires considering all three factors together. Of course, one might circumvent the limit and achieve a better resolution by focusing on one factor, but that does not necessarily imply the violation of a fundamental limit. One example is STED microscopy, which focuses on the illumination; another is near-field scanning microscopy, which circumvents the diffraction limit by focusing on detection. Other methods and strategies in sub-wavelength imaging, such as negative refraction, time-reversal imaging and absolute optical instruments, concentrate on the faithful transfer of the optical information. In our opinion, the most significant, and naturally the most controversial, part of our findings in the course of this study was elucidating the role of detection. The Maxwell Fisheye transmits the optical information faithfully, but this is not enough: to have a faithful image, it is also necessary to extract the information at the destination. In our last two papers, we report our new findings on the contribution of detection. We find that in absolute optical instruments, such as the Maxwell Fisheye, embedded sources and detectors are not independent. They are mutually interacting, and this interaction influences the imaging properties of the system.
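The time-reversal argument at the heart of this abstract can be made concrete with a small numerical experiment. The sketch below is my illustration, not the thesis's source-drain model: it propagates a pulse from a point with a 1D leapfrog wave solver, then swaps the two time levels (exploiting the time-reversal invariance of the wave equation) and checks that the field refocuses exactly on the source. The grid size, pulse shape and positions are arbitrary choices.

```python
import numpy as np

# Toy illustration of time-reversal refocusing for the 1D wave equation.
N, T = 400, 500                 # grid points, steps each way
c, dx = 1.0, 1.0
dt = 0.5 * dx / c               # CFL-stable time step
src = 150                       # "point source" location (grid index)

def step(u, u_prev):
    """One leapfrog update of u_tt = c^2 u_xx with fixed (Dirichlet) ends."""
    u_next = np.zeros_like(u)
    lap = u[2:] - 2.0 * u[1:-1] + u[:-2]
    u_next[1:-1] = 2.0 * u[1:-1] - u_prev[1:-1] + (c * dt / dx) ** 2 * lap
    return u_next

# Initial condition: a narrow Gaussian "flash" at the source, zero velocity.
x = np.arange(N)
u0 = np.exp(-0.5 * ((x - src) / 3.0) ** 2)
u_prev, u = u0.copy(), u0.copy()

for _ in range(T):              # forward propagation: the pulse spreads out
    u, u_prev = step(u, u_prev), u

u, u_prev = u_prev, u           # time reversal: swap the two time levels

for _ in range(T):              # backward propagation: the pulse refocuses
    u, u_prev = step(u, u_prev), u

# The leapfrog scheme is exactly time-symmetric, so the field returns to
# the initial flash (up to rounding): a perfect focus at the source point.
print(int(np.argmax(u)), float(np.max(np.abs(u - u0))))
```

Note what the sketch does and does not show: the refocusing here is obtained by reversing the entire field by fiat, which is precisely the step that, in a passive device, requires a drain absorbing the wave at the focus.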
237

Modélisation de textures anisotropes par la transformée en ondelettes monogéniques / Modelisation of anisotropic textures by the monogenic wavelet transform

Polisano, Kévin 12 December 2017 (has links)
Texture analysis is a component of image processing that attracts considerable interest because of the diversity of its applications. In medical imaging, recorded signals such as bone X-rays or mammograms exhibit a highly irregular micro-architecture, which invites us to model the formation of these textures as the realization of a random field. Following Benoit Mandelbrot's pioneering work, many models derived from the fractional Brownian field have been proposed to characterize the fractal behavior of images and to synthesize textures with prescribed roughness. Estimating the parameters of these models has, in particular, made it possible to link the fractal dimension of images to the detection of alterations in bone structure, as observed in the case of osteoporosis. More recently, other random-field models, called anisotropic, have been used to describe phenomena with preferred directions, for example to detect anomalies in mammary tissue.

This thesis deals with the development of new models of anisotropic fields that allow the anisotropy of textures to be controlled locally. A first contribution was to define a generalized anisotropic fractional Brownian field (GAFBF), and a second model based on the deformation of elementary fields (WAFBF), both allowing the local orientation of the texture to be prescribed. The local structure of these fields is studied using the formalism of tangent fields. Simulation procedures are implemented to observe their behavior concretely and to serve as a benchmark for validating anisotropy detection tools. Indeed, the investigation of local orientation and anisotropy in the context of textures still raises many mathematical problems, starting with a rigorous definition of this orientation.

Our second contribution follows this perspective. By transposing orientation detection methods based on the monogenic wavelet transform, we were able to define an intrinsic notion of orientation for a wide class of random fields. In particular, the study of the two new anisotropic field models introduced previously made it possible to formally link this notion of orientation to the anisotropy parameters of these models. Connections with directional statistics are also established, in order to characterize the probability distribution of orientation estimators.

Finally, the third part of this thesis is devoted to the problem of detecting lines in images. The underlying model is a superposition of diffracted lines (i.e., lines convolved with a blur kernel) in the presence of noise, whose position and intensity parameters must be recovered with sub-pixel precision. To this end, we developed a method based on the super-resolution paradigm. Reformulating the problem in terms of 1-D atoms leads to a constrained optimization problem and enables these lines to be reconstructed with the desired precision. The algorithms used to perform the minimization belong to the family known as proximal algorithms. The formalization and resolution of this inverse problem provide a proof of concept, opening perspectives toward a revisited Hough transform for the continuous detection of lines in images.
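The last part of this abstract describes recovering blurred, noisy lines as a constrained optimization problem solved with proximal algorithms. Below is a minimal 1D sketch in that spirit, using ISTA (proximal gradient descent with soft thresholding) on an l1-penalized least-squares problem; the Gaussian blur width, spike positions and penalty weight are illustrative assumptions, not the thesis's actual atoms or solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a sparse "spike" signal on a fine grid (a 1D stand-in for
# line positions and intensities; positions are invented for the example).
n = 128
x_true = np.zeros(n)
x_true[[30, 70, 71, 100]] = [1.0, 0.8, 0.6, 1.2]

# Measurement model: convolution with a Gaussian blur kernel, plus noise.
t = np.arange(-10, 11)
kernel = np.exp(-0.5 * (t / 2.0) ** 2)
kernel /= kernel.sum()
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same")
              for i in range(n)]).T          # columns = blurred impulses
y = A @ x_true + 0.01 * rng.standard_normal(n)

# ISTA: proximal gradient on 0.5*||Ax - y||^2 + lam*||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.flatnonzero(x > 0.1))               # recovered support, near truth
```

The soft-thresholding step is the proximal operator of the l1 norm; swapping it for other proximal operators handles the positivity or grouping constraints that a full line-detection formulation would add.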
238

Výpočetní metody v jednomolekulové lokalizační mikroskopii / Computational methods in single molecule localization microscopy

Ovesný, Martin January 2016 (has links)
Fluorescence microscopy is one of the chief tools used in biomedical research, as it is a non-invasive, non-destructive, and highly specific imaging method. Unfortunately, an optical microscope is a diffraction-limited system. The maximum achievable spatial resolution is approximately 250 nm laterally and 500 nm axially. Since most of the cellular structures researchers are interested in are smaller than that, increasing the resolution is of prime importance. In recent years, several methods for imaging beyond the diffraction barrier have been developed. One of them is single molecule localization microscopy, a powerful method reported to resolve details as small as 5 nm. This approach to fluorescence microscopy is very computationally intensive. Developing methods to analyze single molecule data and to obtain super-resolution images is the topic of this thesis. In localization microscopy, a super-resolution image is reconstructed from a long sequence of conventional images of sparsely distributed single photoswitchable molecules that need to be systematically localized with sub-diffraction precision. We designed, implemented, and experimentally verified a set of methods for automated processing, analysis and visualization of data acquired...
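The core computational step the abstract describes is localizing each molecule with sub-diffraction precision from its pixelated, noisy spot. Here is a toy sketch of that step using a background-subtracted centroid as the estimator (real pipelines typically fit a Gaussian PSF by least squares or maximum likelihood); all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one diffraction-limited spot: a pixelated 2D Gaussian PSF at a
# sub-pixel position, plus camera noise and a constant background.
size, sigma = 15, 1.3                      # window size and PSF width (px)
true_x, true_y = 7.3, 6.8                  # sub-pixel ground-truth position
yy, xx = np.mgrid[0:size, 0:size]
psf = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * sigma ** 2))
img = 1000 * psf + rng.normal(0, 5, (size, size)) + 100

# Localization by background-subtracted centroid: a crude but standard
# baseline; the estimate lands well inside a single camera pixel.
w = np.clip(img - np.median(img), 0, None) # remove background offset
est_x = (w * xx).sum() / w.sum()
est_y = (w * yy).sum() / w.sum()

print(round(est_x, 2), round(est_y, 2))    # close to (7.3, 6.8)
```

Repeating this over thousands of frames, each with many sparse spots, is what makes the reconstruction computationally intensive and what the thesis's automated processing pipeline addresses.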
239

Color Fusion and Super-Resolution for Time-of-Flight Cameras

Zins, Matthieu January 2017 (has links)
The recent emergence of time-of-flight cameras has opened up new possibilities in the world of computer vision. These compact sensors, capable of recording the depth of a scene in real time, are very advantageous in many applications, such as scene or object reconstruction. This thesis first addresses the problem of fusing depth data with color images. A complete process to combine a time-of-flight camera with a color camera is described and its accuracy is evaluated. The results show that a satisfying precision is reached and that the calibration step is very important. The second part of the work consists of applying super-resolution techniques to the time-of-flight camera in order to improve its low resolution. Different types of super-resolution algorithms exist, but this thesis focuses on the combination of multiple shifted depth maps. The proposed framework consists of two steps: registration and reconstruction. Different methods for each step are tested and compared according to the improvement achieved in terms of level of detail, sharpness and noise reduction. The results show that Lucas-Kanade performs best for the registration and that non-uniform interpolation gives the best results for the reconstruction. Finally, a few suggestions are made about future work and extensions of our solutions.
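The two-step framework above (registration, then reconstruction) can be illustrated in 1D. The sketch below assumes the registration step is already solved (the thesis compares Lucas-Kanade and others for that) and shows the simplest reconstruction: interleaving several decimated, shifted low-resolution frames back onto a fine grid. The signal and sampling model are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth high-resolution depth profile (1D for brevity) and four
# low-resolution frames, each a 4x-decimated view at a different offset.
hr = np.sin(np.linspace(0, 8 * np.pi, 256)) + np.linspace(0, 2, 256)
factor, shifts = 4, [0, 1, 2, 3]           # offsets in high-res units
frames = [hr[s::factor] + 0.01 * rng.standard_normal(64) for s in shifts]

# Reconstruction step only: with the shifts known (registered), interleave
# every frame's samples back onto the fine grid, the simplest non-uniform
# fusion of shifted depth maps.
recon = np.zeros_like(hr)
for f, s in zip(frames, shifts):
    recon[s::factor] = f

single = np.repeat(frames[0], factor)      # naive 4x upsampling, one frame
print(float(np.abs(recon - hr).max()),     # down at the noise level
      float(np.abs(single - hr).max()))    # much larger staircase error
```

With real depth maps the sub-pixel shifts are fractional and unknown, which is why registration accuracy (and hence Lucas-Kanade versus the alternatives) dominates the quality of the fused result.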
240

Development of experimental and analysis methods to calibrate and validate super-resolution microscopy technologies / Développement de méthodes expérimentales et d'analyse pour calibrer et valider les technologies de microscopie de super-résolution

Salas, Desireé 27 November 2015 (has links)
Super-resolution microscopy (SRM) methods such as photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), binding-activated localization microscopy (BALM) and DNA-PAINT represent a new collection of light microscopy techniques that make it possible to overcome the diffraction limit (> 200 nm in the visible spectrum). These methods are based on the localization of bursts of fluorescence from single fluorophores and can reach nanometer resolutions (~20 nm laterally and ~50 nm axially). SRM techniques have a broad spectrum of applications in biology and biophysics, giving access to structural and dynamical information on known and unknown biological structures, in vivo and in vitro.

Many efforts have been made over the last decade to increase the potential of these methods: developing more precise and faster localization techniques, improving fluorophore photophysics, designing algorithms to obtain quantitative information and increase localization precision, and so on. However, very few methods have been developed to dissect image heterogeneity and to extract statistically relevant information from thousands of individual super-resolved images. In my thesis, I specifically tackled these limitations by: (1) constructing objects with nanometer dimensions and well-defined structures, with the possibility of being tailored to any need; these objects are based on DNA origami; (2) developing labeling approaches to image these objects homogeneously, based on adaptations of BALM and DNA-PAINT microscopy; (3) implementing statistical tools to improve image analysis and validation, based on single-particle reconstruction methods commonly applied to image reconstruction in electron microscopy.

I specifically applied these developments to reconstruct the 3D shape of two model DNA origami (in one and three dimensions). I show how this method permits the dissection of sample heterogeneity and the combination of similar images in order to improve the signal-to-noise ratio. Combining different class averages permitted the reconstruction of the three-dimensional shape of the DNA origami. Notably, because this method uses 2D projections of different views of the same structure, it permits the recovery of isotropic resolutions in three dimensions. Specific functions were adapted from previous methodologies to quantify the reliability of the reconstructions and their resolution. In the future, these developments will be helpful for the 3D reconstruction of any biological object that can be imaged at super-resolution by PALM-, STORM- or PAINT-derived methodologies.
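The single-particle averaging idea borrowed from electron microscopy (align many noisy copies of the same structure, then average them to raise the signal-to-noise ratio) can be sketched in 1D. The profile, shifts and noise level below are invented, and alignment is done by circular cross-correlation against one noisy observation taken as reference, a crude stand-in for the full reconstruction pipeline described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# A "particle" profile (a 1D stand-in for one view of a DNA origami) and
# many noisy, randomly shifted observations of it.
n = 128
x = np.arange(n)
template = (np.exp(-0.5 * ((x - 54) / 4.0) ** 2)
            + np.exp(-0.5 * ((x - 74) / 4.0) ** 2))
true_shifts = rng.integers(-10, 11, size=200)
obs = [np.roll(template, s) + 0.5 * rng.standard_normal(n)
       for s in true_shifts]

# Single-particle averaging: register each observation to a reference via
# FFT-based circular cross-correlation, undo the shift, then average.
ref = obs[0]
aligned = []
for o in obs:
    xc = np.fft.ifft(np.fft.fft(o) * np.conj(np.fft.fft(ref))).real
    shift = int(np.argmax(xc))            # best circular alignment to ref
    aligned.append(np.roll(o, -shift))
avg = np.mean(aligned, axis=0)

# The average is far less noisy than any single observation.
err_single = np.linalg.norm(obs[0] - np.roll(template, true_shifts[0]))
err_avg = min(np.linalg.norm(avg - np.roll(template, k)) for k in range(n))
print(err_avg < err_single)
```

Class averaging in the thesis goes further, clustering heterogeneous images before averaging and combining 2D class averages of different views into a 3D shape, but the alignment-then-average step above is the basic SNR-raising ingredient.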
