11

Stimulated emission depletion microscopy with optical fibers

Yan, Lu 10 March 2017 (has links)
Imaging at the nanoscale and/or at remote locations holds great promise for studies in fields as disparate as the life sciences and the materials sciences. Stimulated emission depletion (STED) microscopy is one of several fluorescence-based imaging techniques that offer resolution beyond the diffraction limit. All current implementations of STED microscopy, however, rely on free-space beam-shaping devices to produce the Gaussian- and donut-shaped orbital angular momentum (OAM) carrying beams at the desired colors, a challenging prospect from the standpoint of device assembly and mechanical stability during operation. A fiber-based solution could address these engineering challenges and, perhaps more interestingly, may facilitate endoscopic implementation of in vivo STED imaging, a prospect that has thus far not been realized because optical fibers were previously considered incapable of transmitting the OAM beams that STED requires. In this thesis, we investigate fiber-based STED systems to enable endoscopic nanoscale imaging. We discuss the design and characteristics of a novel class of fibers that support and stably propagate Gaussian and OAM modes. Optimization of the design parameters leads, for the first time, to stable excitation and depletion beams propagating in the same fiber across the visible spectral range with high efficiency (>99%) and mode purity (>98%). Using the fabricated vortex fiber, we demonstrate an all-fiber STED system with modes that are tolerant to perturbations, and we obtain naturally self-aligned PSFs for the excitation and depletion beams. Initial STED imaging experiments with our device yield a 4-fold improvement in lateral resolution compared to confocal imaging. In a parallel experiment, we show how q-plates can be used as free-space mode converters that yield alignment-tolerant STED microscopy systems at wavelengths covering the entire visible spectrum, and hence the dyes of interest in such imaging schemes. Our study indicates that the vortex fiber can provide an all-fiber platform for STED systems and for other imaging systems that require spatio-spectral beam shaping.
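As an illustration of the resolution mechanism this abstract refers to, the short Python sketch below models an effective STED point-spread function as a Gaussian excitation spot suppressed by a donut-shaped depletion beam. The spot size and saturation factor are assumed values chosen only to show a severalfold lateral resolution gain; they are not parameters from the thesis.

```python
# Illustrative sketch (not from the thesis): effective STED point-spread
# function from a Gaussian excitation spot and a donut-shaped depletion beam.
# The spot sigma and saturation factor are assumptions for demonstration only.
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1-D profile sampled on x."""
    half = profile.max() / 2.0
    above = x[profile >= half]
    return above[-1] - above[0]

x = np.linspace(-400e-9, 400e-9, 4001)                    # lateral coordinate (m)
sigma = 100e-9                                            # assumed confocal spot sigma
excitation = np.exp(-x**2 / (2 * sigma**2))               # Gaussian excitation PSF
donut = (x**2 / sigma**2) * np.exp(-x**2 / (2 * sigma**2))  # depletion beam with a central zero
donut /= donut.max()

saturation = 15.0                                         # assumed peak depletion / saturation intensity
# Fluorescence survives only where the depletion intensity is low:
effective = excitation / (1.0 + saturation * donut)

print(f"confocal FWHM : {fwhm(x, excitation) * 1e9:6.1f} nm")
print(f"STED FWHM     : {fwhm(x, effective) * 1e9:6.1f} nm")
```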
12

Quantum Communication: Through the Elements: Earth, Air, Water

Sit, Alicia 24 September 2019 (has links)
This thesis encompasses a body of experimental work on the use of structured light in quantum cryptographic protocols. In particular, we investigate the ability to perform quantum key distribution through various quantum channels (fibre, free-space, underwater) under both laboratory and realistic conditions. We first demonstrate that a special type of optical fibre (the vortex fibre), capable of coherently transmitting vector vortex modes, is a viable quantum channel. Next, we describe the first demonstration of high-dimensional quantum cryptography using structured photons in an urban setting. Although atmospheric turbulence can introduce many errors into the transmitted key, we are still able to transmit more information per carrier with a 4-dimensional scheme than with a 2-dimensional one. Lastly, we investigate the possibility of performing secure quantum communication with twisted photons in an uncontrolled underwater channel. We find that although this is possible for low-dimensional schemes, high-dimensional schemes suffer from underwater turbulence unless corrective wavefront techniques are used.
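For context on why dimensionality matters here, the following sketch evaluates a commonly quoted asymptotic secret-key-rate bound for d-dimensional BB84-style protocols with two mutually unbiased bases, R = log2(d) - 2*H_d(Q). It is a generic illustration, not the thesis's security analysis, and the error-rate grid is an arbitrary choice.

```python
# Hedged sketch: asymptotic secret-key rate per sifted photon for a
# d-dimensional BB84-style protocol with two mutually unbiased bases,
# using the commonly quoted bound R = log2(d) - 2*H_d(Q).
import numpy as np

def key_rate(d, Q):
    """Secret bits per sifted photon at symbol error rate Q."""
    Q = np.asarray(Q, dtype=float)
    H = np.zeros_like(Q)
    nz = Q > 0
    H[nz] = -(1 - Q[nz]) * np.log2(1 - Q[nz]) - Q[nz] * np.log2(Q[nz] / (d - 1))
    return np.log2(d) - 2 * H

for d in (2, 4):
    Q = np.linspace(0.0, 0.3, 3001)
    R = key_rate(d, Q)
    threshold = Q[R > 0][-1]            # largest error rate with a positive rate
    print(f"d = {d}: {np.log2(d):.0f} bit(s) per photon at Q = 0, "
          f"positive key rate up to Q ~ {threshold:.3f}")
```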
13

Spatial Augmented Reality Using Structured Light Illumination

Yu, Ying 01 January 2019 (has links)
Spatial augmented reality is a particular kind of augmented reality technique that uses a projector to blend real objects with virtual content. Coincidentally, structured light illumination, a means of 3D shape measurement, also uses a projector as part of its system: the projector generates the cues needed to establish correspondence between the 2D image coordinate system and the 3D world coordinate system. It is therefore appealing to build a system that can carry out the functionality of both spatial augmented reality and structured light illumination. In this dissertation, we present the hardware platforms we developed and their related applications in spatial augmented reality and structured light illumination. The first is a dual-projector structured light 3D scanning system in which two synchronized projectors operate simultaneously; as a result, it outperforms traditional structured light 3D scanning systems, which include only one projector, in terms of the quality of the 3D reconstructions. The second is a modified dual-projector structured light 3D scanning system aimed at detecting and resolving multi-path interference. The third is an augmented reality face-paint system that detects a human face in a scene and paints the face with any desired colors by projection; the system incorporates a second camera to track 3D position by exploiting the principle of structured light illumination. Finally, a structured light 3D scanning system with its own built-in machine-vision camera is presented as future work. So far the standalone camera has been built up from a bare CMOS sensor; with this customized camera, we can achieve high-dynamic-range imaging and better synchronization between the camera and the projector. The full system, which includes an HDMI transmitter, a structured light pattern generator, and synchronization logic, has yet to be completed because it requires a carefully designed high-speed PCB.
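The correspondence idea described above reduces, once calibration is known, to intersecting a camera-pixel ray with the plane defined by a decoded projector column. The sketch below shows that triangulation step with made-up intrinsics and an arbitrary plane; none of the numbers come from the dissertation's hardware.

```python
# Minimal sketch of the triangulation step behind structured light illumination:
# a decoded projector column defines a plane in space, a camera pixel defines a
# ray, and their intersection gives the 3-D point.  All numbers are placeholders.
import numpy as np

def pixel_ray(K, x, y):
    """Back-project camera pixel (x, y) to a unit ray in camera coordinates."""
    d = np.linalg.solve(K, np.array([x, y, 1.0]))
    return d / np.linalg.norm(d)

def intersect_ray_plane(origin, direction, plane):
    """Intersect a ray with a plane given as (n, d) with n.x + d = 0."""
    n, d = plane[:3], plane[3]
    t = -(n @ origin + d) / (n @ direction)
    return origin + t * direction

# Assumed camera intrinsics (placeholder values).
K_cam = np.array([[1400.0, 0.0, 640.0],
                  [0.0, 1400.0, 360.0],
                  [0.0,    0.0,   1.0]])

# Example: the camera sees a decoded projector column at pixel (700, 380).
# In a real system the column's plane comes from projector calibration; here
# we simply pick a slightly tilted plane about 0.8 m in front of the camera.
ray = pixel_ray(K_cam, 700.0, 380.0)
column_plane = np.array([0.05, 0.0, 1.0, -0.8])   # n = (0.05, 0, 1), d = -0.8
point = intersect_ray_plane(np.zeros(3), ray, column_plane)
print("reconstructed 3-D point (m):", np.round(point, 4))
```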
14

3D Face Reconstruction Using Stereo Vision

Dikmen, Mehmet 01 September 2006 (has links) (PDF)
3D face modeling is currently a popular area in computer graphics and computer vision. Many techniques have been introduced for this purpose, such as using one or more cameras, 3D scanners, and other sophisticated hardware with related software, but the main goal is to find a good balance between visual realism and the cost of the system. In this thesis, reconstruction of a 3D human face from a pair of stereo cameras is studied. Unlike many other systems, facial feature points are obtained automatically from two photographs with the help of a dot pattern projected onto the subject's face. The projected pattern also provides enough feature points to derive a rough 3D face. These points are then used to fit a generic face mesh for a more realistic model. To cover this 3D model, a single texture image is generated from the initial stereo photographs.
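As a minimal illustration of the stereo step described above, the snippet below converts matched feature points from a rectified image pair into 3D coordinates using the standard depth-from-disparity relation Z = f*b/d. The focal length, baseline, principal point, and point coordinates are invented for the example, not values from this thesis.

```python
# Illustrative depth-from-disparity computation for a rectified stereo pair.
# All calibration values and feature coordinates are invented placeholders.
import numpy as np

f_px = 1200.0        # assumed focal length in pixels
baseline_m = 0.12    # assumed distance between the two cameras (m)

# Matched dot-pattern feature points: x-coordinates in left and right images.
x_left  = np.array([512.0, 530.0, 498.0, 545.0])
x_right = np.array([470.0, 492.0, 452.0, 509.0])
y       = np.array([300.0, 310.0, 325.0, 290.0])

disparity = x_left - x_right
Z = f_px * baseline_m / disparity                  # depth along the optical axis
X = (x_left - 512.0) * Z / f_px                    # assumed principal point cx = 512
Y = (y - 384.0) * Z / f_px                         # assumed principal point cy = 384

for p in zip(X, Y, Z):
    print("3-D point (m): (%.3f, %.3f, %.3f)" % p)
```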
15

3D Face Reconstruction Using Stereo Images and Structured Light

Ozturk, Oguz Ahmet 01 December 2007 (has links) (PDF)
3D modelling of objects from multiple images is a topic that has gained wide recognition and is used in various fields. Recently, much progress has been made in identifying people using 3D face models, which are usually reconstructed from multiple face images. In this thesis, a system combining stereo cameras and structured light is built for the purpose of 3D modelling. The system outputs the 3D shape of the face together with the texture information registered to this shape. Although the system in this thesis is developed for face reconstruction, it is not specific to faces; using the same methodology, 3D reconstruction of any object can be achieved.
16

A 3D Computer Vision System in Radiotherapy Patient Setup

Chyou, Te-yu January 2012 (has links)
An approach to quantitatively determine patient surface contours as part of an augmented reality (AR) system for patient position and posture correction was developed. Quantitative evaluation of the accuracy of patient positioning and posture correction requires knowledge of the coordinates of the patient contour. The system developed uses the surface contours from the planning CT data as the reference surface coordinates. The corresponding reference point cloud is displayed on screen to enable AR-assisted patient positioning. A 3D computer vision system using structured light then captures the current 3D surface of the patient. The offset between the acquired surface and the reference surface, which represents the desired patient position, is the alignment error. Two codification strategies, spatial encoding and temporal encoding, were examined. Spatial encoding methods require only a single static pattern, enabling dynamic scenes to be captured. Temporal encoding methods require a set of patterns to be projected successively onto the object; the encoding for each pixel is complete only when the entire series of patterns has been projected. The system was tested on a camera-tracking object. The structured light reconstruction was accurate to within ±1 mm, ±1.5 mm, and ±4 mm in the x, y, and z directions (camera optical axis), respectively. The method was integrated into a simplified AR system, and a visualization scheme based on the z-direction offset was developed. A demonstration of how the final AR-3D vision hybrid system could be used in a clinical situation was given using an anatomical teaching phantom. The system and visualisation worked well and demonstrated proof of principle for the approach. However, the achieved accuracy is not yet sufficient for clinical use; further work on improving the projector calibration accuracy is required. Both the camera registration process and 3D computer vision using structured light have been shown to be capable of sub-millimeter accuracy on their own. If that level of accuracy can be reproduced in this system, the concept presented could be used in oncology departments as a cost-effective patient setup guidance system for external beam radiotherapy, in addition to current laser/portal imaging/cone beam CT based setup procedures.
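The temporal-encoding strategy mentioned above is commonly realized with Gray-code stripe sequences; the sketch below generates such a sequence and decodes a pixel's observed bit stack back to a projector column. It is a generic Gray-code example under assumed pattern parameters, not the specific codification used in this work.

```python
# Generic Gray-code temporal encoding: project n_bits stripe patterns in
# sequence; the stack of on/off observations at a pixel identifies the
# projector column it sees.  Width and bit depth are assumed values.
import numpy as np

def gray_code_patterns(width, n_bits):
    """Return n_bits binary stripe patterns, each of length `width` (MSB first)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary-reflected Gray code
    return np.array([(gray >> b) & 1 for b in reversed(range(n_bits))])

def decode(bit_stack):
    """Decode a per-pixel stack of observed bits back to a column index."""
    g = 0
    for bit in bit_stack:
        g = (g << 1) | int(bit)
    b, shift = g, g >> 1                            # Gray code -> plain binary
    while shift:
        b ^= shift
        shift >>= 1
    return b

width, n_bits = 1024, 10
patterns = gray_code_patterns(width, n_bits)        # shape (10, 1024)
column = 437
observed = patterns[:, column]                      # bits a camera pixel would see
print("decoded column:", decode(observed))          # -> 437
```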
17

Novel Approaches in Structured Light Illumination

Wang, Yongchang 01 January 2010 (has links)
Among the various approaches to 3-D imaging, structured light illumination (SLI) is widely used. SLI employs a digital projector paired with a digital camera, so that correspondences can be found by projecting and capturing a set of designed light patterns. As an active sensing method, SLI is known for its robustness and high accuracy. In this dissertation, I study the phase shifting method (PSM), one of the most widely employed strategies in SLI, and propose three novel approaches. First, by regarding pattern design as the placement of points in an N-dimensional space, I take phase measuring profilometry (PMP) as an example and propose an edge-pattern strategy that achieves the maximum signal-to-noise ratio (SNR) for the projected patterns. Second, I develop a novel period-information-embedded pattern strategy for fast, reliable 3-D data acquisition and reconstruction. The proposed period-coded phase shifting strategy removes the depth ambiguity associated with traditional phase shifting patterns without reducing phase accuracy or increasing the number of projected patterns; thus, it can be employed in high-accuracy real-time 3-D systems. Third, I propose a hybrid approach for high-quality 3-D reconstruction with only a small number of illumination patterns, maximizing the use of correspondence information from the phase, texture, and modulation data derived from multi-view, PMP-based SLI images, without rigorously synchronizing the cameras and projectors or calibrating the device gammas. Experimental results demonstrate the advantages of the proposed strategies for 3-D SLI systems.
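A compact example of the phase-shifting computation at the core of PMP is sketched below: N sinusoidal patterns shifted by 2π/N are simulated and the wrapped phase is recovered per pixel with an arctangent. The number of shifts, modulation depth, and noise level are illustrative assumptions, not the edge-pattern or period-coded designs proposed in the dissertation.

```python
# N-step phase-shifting (PMP) sketch: simulate shifted sinusoidal patterns for
# a known phase ramp, then recover the wrapped phase per pixel.
import numpy as np

rng = np.random.default_rng(0)
N = 4                                          # assumed number of phase shifts
true_phase = np.linspace(-np.pi, np.pi, 256)   # synthetic phase ramp (one row of pixels)
A, B = 0.5, 0.4                                # assumed ambient offset and modulation

shifts = 2 * np.pi * np.arange(N) / N
# Captured intensities: one row per shift, one column per pixel (plus noise).
I = A + B * np.cos(true_phase[None, :] + shifts[:, None])
I += rng.normal(0.0, 0.01, I.shape)

num = np.sum(I * np.sin(shifts)[:, None], axis=0)
den = np.sum(I * np.cos(shifts)[:, None], axis=0)
wrapped = -np.arctan2(num, den)                # wrapped phase estimate in (-pi, pi]

err = np.angle(np.exp(1j * (wrapped - true_phase)))   # wrap-aware error
print("RMS phase error (rad): %.4f" % np.sqrt(np.mean(err**2)))
```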
18

ENHANCEMENTS TO THE MODIFIED COMPOSITE PATTERN METHOD OF STRUCTURED LIGHT 3D CAPTURE

Casey, Charles Joseph 01 January 2011 (has links)
The use of structured light illumination techniques for three-dimensional data acquisition is, in many cases, limited to stationary subjects because of the multiple pattern projections needed for depth analysis. Traditional Composite Pattern (CP) multiplexing uses sinusoidal modulation of individual projection patterns to combine numerous patterns into a single image; however, demodulation artifacts often make it difficult to recover the subject's surface contour accurately. On the other hand, if one were to project an image consisting of many thin, identical stripes onto the surface, one could, by isolating each stripe center, recreate a very accurate representation of the surface contour, but recovering depth information via triangulation would then be quite difficult. The method described herein, Modified Composite Pattern (MCP), is a conjunction of these two concepts: combining a traditional Composite Pattern multiplexed projection image with a pattern of thin stripes allows accurate surface representation together with unambiguous identification of projection pattern elements. In this way, it is possible to recover surface depth characteristics using only a single structured light projection. The technique uses a binary structured light projection sequence (consisting of four unique images) modulated according to the Composite Pattern methodology, with a stripe-pattern overlay applied on top. Upon projection and imaging of the subject surface, the stripe pattern is isolated and the composite pattern information is demodulated and recovered, allowing 3D surface representation. In this research, the MCP technique is considered specifically in the context of a hidden Markov process model, and updated processing methodologies use the Viterbi algorithm for optimal analysis of MCP-encoded images. Additionally, techniques are introduced that, when implemented, allow fully automated processing of the Modified Composite Pattern image.
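Since the abstract frames MCP decoding as a hidden Markov problem solved with the Viterbi algorithm, a generic Viterbi decoder is sketched below. The states, transition probabilities, and emissions in the toy usage are placeholders, not the stripe-identification model developed in the thesis.

```python
# Generic Viterbi decoder, included only to illustrate the dynamic programming
# the hidden Markov treatment above refers to.  The toy model is a placeholder.
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most likely state sequence given log-probabilities.

    log_init  : (S,)    log prior over states
    log_trans : (S, S)  log transition matrix, rows = from-state
    log_emit  : (T, S)  per-time log-likelihood of the observation in each state
    """
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans              # (from-state, to-state)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                      # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy usage: two hidden states, three observations.
log_init = np.log(np.array([0.6, 0.4]))
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.4, 0.6]]))
log_emit = np.log(np.array([[0.9, 0.2],
                            [0.1, 0.8],
                            [0.2, 0.7]]))
print("most likely state path:", viterbi(log_init, log_trans, log_emit))
```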
19

Rotate and Hold and Scan (RAHAS): Structured Light Illumination for Use in Remote Areas

Crane, Eli Ross 01 January 2011 (has links)
As a critical step after the discovery of material culture in the field, archaeologists need to document their findings with a slew of different physical measurements and photographs from varying perspectives. 3-D imaging is becoming increasingly popular as the primary documentation method, replacing this plethora of tests and measurements, but in remote areas 3-D imaging becomes more cumbersome due to physical and environmental constraints. The difficulty of using a 3-D imaging system in such environments is drastically lessened with the RAHAS technique, since it acquires scans untethered from a computer. The goal of this thesis is to present the RAHAS structured light illumination technique for 3-D image acquisition and to evaluate the performance of the RAHAS technique as a measurement tool for documenting material culture during a field expedition to the Rio Platano Biosphere in Honduras.
20

MERGING OF FINGERPRINT SCANS OBTAINED FROM MULTIPLE CAMERAS IN 3D FINGERPRINT SCANNER SYSTEM

Boyanapally, Deepthi 01 January 2008 (has links)
Fingerprints are the most accurate and widely used biometrics for human identification due to their uniqueness and their rapid, easy acquisition. Contact-based fingerprint acquisition techniques, such as traditional ink and live-scan methods, are not user friendly, reduce the capture area, and cause deformation of fingerprint features; improper skin conditions and worn friction ridges also lead to poor-quality fingerprints. A non-contact, high-resolution, high-speed scanning system has been developed to acquire a 3D scan of a finger using the structured light illumination technique. The 3D scanner system consists of three cameras and a projector, with each camera producing a 3D scan of the finger. By merging the 3D scans obtained from the three cameras, a nail-to-nail fingerprint scan is obtained. However, the scans from the cameras do not merge perfectly. The main objective of this thesis is to calibrate the system so that the 3D scans obtained from the three cameras merge, or align, automatically. The error in merging is reduced by compensating for the radial distortion present in the projector of the scanner system. The error in merging after radial distortion correction is then measured using the projector coordinates of the scanner system.
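The radial-distortion compensation mentioned above typically follows the polynomial model x_d = x(1 + k1 r^2 + k2 r^4); the sketch below inverts that model by fixed-point iteration. The principal point, focal length, and coefficients are invented placeholder values, not the scanner's calibration.

```python
# Illustrative radial-distortion correction: invert x_d = x*(1 + k1*r^2 + k2*r^4)
# by fixed-point iteration on normalized coordinates.  All values are placeholders.
import numpy as np

def undistort(points_px, center_px, focal_px, k1, k2, iterations=10):
    """Map distorted pixel coordinates back to undistorted pixel coordinates."""
    xy = (points_px - center_px) / focal_px        # normalized, distorted
    und = xy.copy()
    for _ in range(iterations):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = xy / (1.0 + k1 * r2 + k2 * r2**2)
    return und * focal_px + center_px

pts = np.array([[950.0, 120.0], [400.0, 600.0], [512.0, 384.0]])
corrected = undistort(pts, center_px=np.array([512.0, 384.0]),
                      focal_px=1100.0, k1=-0.12, k2=0.01)
print(np.round(corrected, 2))
```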
