  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Visualization of Surfaces and 3D Vector Fields

Li, Wentong 08 1900 (has links)
Visualization of trivariate functions and of vector fields with three components remains a hard problem in scientific computing and computer graphics. Researchers often build their own visualization packages for specific purposes, and although general-purpose packages exist (MATLAB, Vis5D), they require extensive user experience to set all the parameters needed to generate useful images. We present a simple package that produces simplified but informative images of 3-D vector fields. We used this method to render the magnetic field and current obtained as solutions of the Ginzburg-Landau equations on a 3-D domain.
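As a rough illustration of the kind of data such a package consumes (not the thesis's actual code), a 3-component vector field can be sampled on a regular grid and reduced to glyph positions and magnitudes; the toy solenoidal field below is a stand-in for a Ginzburg-Landau magnetic field solution:

```python
import numpy as np

def sample_vector_field(f, bounds, n):
    """Sample a 3-component vector field f(x, y, z) on a regular n^3 grid."""
    axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    U, V, W = f(X, Y, Z)
    return X, Y, Z, U, V, W

# A toy solenoidal field (circulation about the z-axis) standing in for B.
def toy_field(x, y, z):
    return -y, x, np.zeros_like(z)

X, Y, Z, U, V, W = sample_vector_field(toy_field, [(-1, 1)] * 3, 8)
speed = np.sqrt(U**2 + V**2 + W**2)  # glyph scaling / colour-map input
```

The sampled components and the derived speed array are what a hedgehog-glyph or streamline renderer would then draw.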
222

Focusing ISAR images using fast adaptive time-frequency and 3D motion detection on simulated and experimental radar data / Focusing inverse synthetic aperture radar images using fast adaptive time-frequency and three-dimensional motion detection on simulated and experimental radar data

Brinkman, Wade H. 06 1900 (has links)
Optimization algorithms were developed for use with the Adaptive Joint Time-Frequency (AJTF) algorithm to reduce Inverse Synthetic Aperture Radar (ISAR) image blurring caused by higher-order target motion; a specific optimization was then applied to 3D motion detection. Evolutionary search methods based on the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were designed to rapidly traverse the solution space and find the parameters that bring the ISAR image into focus in the cross-range. 3D motion detection was achieved by using the AJTF PSO to extract the phases of three different point scatterers in the target data and measuring their linearity against an ideal phase for the imaging interval under investigation. The algorithms were tested against both simulated and real ISAR data sets.
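The PSO search component can be sketched generically. In the sketch below, the sphere function is only a cheap stand-in for the image-contrast cost the AJTF method would actually minimize, and all swarm parameter values are illustrative assumptions rather than the thesis's settings:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm: each particle tracks its personal best and is
    pulled toward both that and the swarm-wide best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia plus cognitive (personal-best) and social (global-best) pulls.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

best, val = pso_minimize(lambda p: float(np.sum(p**2)), dim=3)
```

In the ISAR setting the decision variables would be the phase-model coefficients, and the cost would be a focus metric evaluated over the compensated image.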
223

A quality assessment approach and a hole-filling method for DIBR virtual view images

Mao, Dun January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
224

Use of structured light for 3D reconstruction. / Use of structured light for three-dimensional reconstruction / CUHK electronic theses & dissertations collection

January 2008 (has links)
An accurate and convenient system for calibrating a projector-camera system is presented. A consumer-grade LCD panel is used in place of the traditional printed pattern as the calibration plane. While patterns shown on the panel are used for camera calibration, when the panel is turned off (with its pose kept fixed in space), patterns illuminated by the projector, reflected from the panel, and captured by the camera can be used for the calibration of the projector. This way, patterns for calibrating the camera do not overlap with patterns for calibrating the projector, avoiding confusion in the image data. In addition, even a household-quality LCD panel has industrial-grade planarity. Experiments show that a setup as affordable as this can still have the system parameters calibrated with far fewer images and much higher accuracy. / Finally, we explore how coding in the structured light mechanism can be made unnecessary. We adopt the above concept of recovering surface orientation from grid-lines, and show that by the use of a regular pattern, such as a binary pattern with rhombic pattern elements, an orientation map of the imaged object can be recovered. Specifically, we show that the correspondences over grid-lines between the projector's pattern panel and the camera's image plane can be approximated by a linear mapping, which in turn boosts the accuracy of the surface normal calculation. We go on to show that, as long as at least one reference point on the imaged object is available whose absolute 3D position is known, the above orientation map can be converted to an absolute depth map by a simple integration process. (Abstract shortened by UMI.) / On the coding issue, we investigate a number of options. Coding can be established over time, and one widely used scheme in this direction is the adoption of Gray code in a series of binary patterns projected at different instants.
We describe how the traditional Gray code patterns, if augmented by the use of stripe shifting, can have the resolution of 3D reconstruction enhanced. The setup of such a system is affordable; nonetheless, experiments show that high accuracy can be achieved with it. The main disadvantage of such a system is that multiple image captures are necessary for its operation. / On the image features for establishing correspondences between the projector's pattern panel and the camera's image plane, we propose to use a rhombic pattern with binary (i.e., black and white) or colored elements as the projected pattern, and the grid-points between neighboring rhombic elements, rather than the centroids of the pattern elements themselves, as the feature points. We show that each grid-point in the pattern possesses two-fold rotation symmetry, the so-called cmm symmetry, which is largely preserved on the image side after perspective projection and imaging. We propose a grid-point detector that exploits this symmetry. By avoiding the direct use of raw image intensity, the detector is less sensitive to image noise and surface texture. Comparison with traditional operators shows its promising robustness and accuracy. / The adoption of structured light illumination has proven to be an effective and accurate visual means of 3D reconstruction. The system consists of a projector that casts a controlled pattern or patterns onto the target object, and a camera that captures images of the illuminated object. Once correspondences between positions on the projector's pattern panel and positions on the camera's image plane are established, simple triangulation over light rays from the projector and the corresponding light rays to the camera recovers 3D information about the target object.
Key issues involved in the approach include (1) Calibration: how the projector and camera can be calibrated so that metric measures of the object can be extracted from the image data; (2) Image Feature Extraction: which image features to use and how to extract them accurately from the image data; and (3) Coding: how the illuminated pattern can be designed so that each position embeds a unique code that is preserved on the image side, so that position correspondences between the projector's pattern panel and the camera's image plane can be easily established. Each of these issues affects the accuracy of the system. This thesis aims at providing improved solutions to each of them. / Song, Zhan. / Adviser: Ronald Chung. / Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3615. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 149-160). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
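The temporal Gray-code scheme the abstract describes can be sketched as follows; the bit width and pattern layout here are illustrative assumptions, not the thesis's actual parameters. Each of the n binary patterns projects one bit of the column's Gray code, and adjacent columns differ in exactly one bit, which keeps decoding robust at stripe boundaries:

```python
def gray_encode(i):
    # Binary-reflected Gray code: adjacent integers differ in exactly one bit.
    return i ^ (i >> 1)

def gray_decode(g):
    # Invert by XOR-folding the shifted code back onto itself.
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

# n_bits temporally multiplexed binary patterns uniquely label 2**n_bits
# projector columns; pattern k carries the k-th bit of each column's Gray code.
n_bits = 10
patterns = [[(gray_encode(col) >> k) & 1 for col in range(2 ** n_bits)]
            for k in range(n_bits)]
```

A camera pixel's observed bit sequence across the pattern series is Gray-decoded back to a projector column, establishing the correspondence used for triangulation.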
225

Aesthetic surface pattern generation using L-system. / CUHK electronic theses & dissertations collection

January 2013 (has links)
Chan, Pui Lam. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 72-75). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese.
226

Development of a Positioning System for 3D Ultrasound

Poulsen, Carsten 18 October 2005 (has links)
"Ultrasound has developed from 2D into 3D in recent years. 3D ultrasound gives enhanced diagnostic capabilities and can make it easier for less-trained people to interpret ultrasound images. In general there are two ways of getting a 3D ultrasound image: by using a 2D array scanner (which gives 3D images directly) or by combining a series of 2D scans into a 3D volume. The only practical scanning technique for portable systems is freehand scanning, which combines a series of 2D images. 3D images acquired with a conventional ultrasound transducer and the freehand scanning technique are, however, often laterally misaligned and unevenly spaced. These errors can be corrected if the position associated with each 2D image is known. Commercially available positioning systems use magnetic or optical tracking, but these systems are bulky and not portable. We have proposed another way to get the position: tracking on the skin surface. This is done by obtaining digital images of the surface at a very high rate and then cross-correlating successive images to reveal the change in position. Accumulating these changes gives the current location (in two dimensions) relative to a starting point, so correct volume and surface rendering can be achieved when a scan is done. A custom-made housing was built to mount an optical sensor on the ultrasound transducer. The optical sensor was placed in the housing, and the hardware circuit from an optical mouse was used to connect it to a USB interface. An implementation with an optical fiber was also made, since this could fit easily onto the transducer handle. In Windows, a custom-made mouse driver was used to extract the position information from the sensor. This driver allowed multiple mouse devices in the system and removed the acceleration of the mouse, giving a correct transfer of the position.
A DLL (Dynamic Link Library) was used to interface to 3D ultrasound software called Sonocubic. Using the DLL with a custom-modified version of the Sonocubic 3D reconstruction software allowed correct compensation of the acquired ultrasound images. To validate the accuracy of the optical sensor, an optical mouse was placed in an XY-recorder to compare the acquired position with the actual position. The test revealed that the accuracy of the optical sensor is very high: a 55 mm movement of the sensor gave a deviation of 0.56 mm, well within the expected range. A computer-generated phantom was made to verify the compensation algorithm, and the test showed that the algorithm and the software work correctly. Next, a vessel phantom was scanned to confirm that the lateral compensation worked in practice, and a correct lateral compensation was obtained. Finally, custom 3D phantoms were made to test the accuracy of the system by estimating a known volume; the system estimated the volume in a phantom within an accuracy of 6%. Performance of the system with direct imaging, using the optical sensor and a lens, was compared to an implementation with an optical fiber, two lenses, and the optical sensor. The optical fiber was difficult to implement, since image contrast was degraded severely through the fiber and lenses. This made it difficult for the correlation algorithm to function correctly, so tracking could not be done on a skin surface this way. FPGA code was written in VHDL to extract the actual images from the optical sensor and display them directly on a computer screen. This was necessary to see how well the sensor was in focus, and it proved to be a very useful tool for adjusting the optical system for maximal contrast. Optical tracking on a skin surface is a good way to assist a user doing freehand scanning to get images without geometric distortion.
Furthermore, it is the only real positioning option for a portable system. One requirement, however, is that the object being scanned is flat and does not curve or vary vertically. For most applications this is not the case, and we therefore propose an implementation with microgyros that can provide angle information as well, giving the system up to 5 instead of just 2 degrees of freedom. Currently this can easily be implemented in the DLL, but it is not yet implemented in the 3D reconstruction software, Sonocubic."
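The core skin-surface tracking step (estimating the frame-to-frame displacement by cross-correlating successive sensor images) can be sketched with an FFT-based correlation; this is an assumed reconstruction of the idea, not the mouse sensor's actual hardware algorithm:

```python
import numpy as np

def estimate_shift(a, b):
    """Integer (dy, dx) such that b is approximately a shifted by (dy, dx),
    found as the peak of the circular cross-correlation computed via FFT."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Fold wrap-around indices back to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Accumulating these per-frame shifts, as the abstract describes, yields the probe position relative to the starting point.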
227

2D and 3D high-speed multispectral optical imaging systems for in-vivo biomedical research

Bouchard, Matthew Bryan January 2014 (has links)
Functional optical imaging encompasses the use of optical imaging techniques to study living biological systems in their native environments. Optical imaging techniques are well-suited for functional imaging because they are minimally invasive, use non-ionizing radiation, and derive contrast from a wide range of biological molecules. Modern transgenic labeling techniques, active and inactive exogenous agents, and intrinsic sources of contrast provide specific and dynamic markers of in-vivo processes at subcellular resolution. A central challenge in building functional optical imaging systems is to acquire data at high enough spatial and temporal resolutions to resolve the in-vivo process(es) under study. This challenge is particularly prominent in neuroscience, where considerable effort has focused on studying the structural and functional relationships within complete neurovascular units in the living brain. Many existing functional optical techniques are limited in meeting this challenge by their imaging geometries, light source(s), and/or hardware implementations. In this thesis we describe the design, construction, and application of novel 2D and 3D optical imaging systems to address this central challenge, with a specific focus on functional neuroimaging applications. The 2D system is an ultra-fast, multispectral, wide-field imaging system capable of imaging 7.5 times faster than existing technologies. Its camera-first design allows the fastest possible image acquisition rates because it is not limited by the synchronization challenges that have hindered previous multispectral systems. We present the development of this system from a benchtop instrument to a portable, low-cost, modular, open-source, laptop-based instrument. The constructed systems can acquire multispectral images at >75 frames per second with image resolutions up to 512 x 512 pixels.
This increased speed means that spectral analysis more accurately reflects the instantaneous state of tissues and allows significantly improved tracking of moving objects. We describe three quantitative applications of these systems to in-vivo research and clinical studies of cortical imaging and calcium signaling in stem cells. The design and source code of the portable system were released to the greater scientific community to help make high-speed multispectral imaging accessible to a larger number of dynamic imaging applications, and to foster further development of the software package. The second system we developed is an entirely new, high-speed, 3D fluorescence microscopy platform called Laser-Scanning Intersecting Plane Tomography (L-SIPT). L-SIPT uses a novel combination of light-sheet illumination and off-axis detection to provide en-face 3D imaging of samples. It allows samples to move freely in their native environments, enabling a range of experiments not possible with previous 3D optical imaging techniques. The constructed system is capable of acquiring 3D images at rates >20 volumes per second (VPS) with volume resolutions of 1400 x 50 x 150 pixels, over a 200-fold increase over conventional laser scanning microscopes. Spatial resolution is set by the choice of telescope design. We developed custom opto-mechanical components, computer raytracing models to guide system design and characterize the technique's fundamental resolution limits, and phantoms and biological samples to refine the system's performance capabilities. We describe initial applications of the system to imaging freely moving, transgenic Drosophila melanogaster larvae, 3D calcium signaling and hemodynamics in transgenic and exogenously labeled rodent cortex in-vivo, and 3D calcium signaling in acute transgenic rodent cortical brain slices in-vitro.
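A typical spectral-analysis step that such multispectral systems enable is linear unmixing of chromophore concentrations (e.g. oxy- and deoxy-hemoglobin) from per-pixel attenuation measured at several wavelengths. The sketch below uses made-up extinction coefficients purely for illustration, not real hemoglobin spectra:

```python
import numpy as np

# Rows: wavelengths; columns: chromophores. Values are illustrative only,
# not real extinction coefficients.
E = np.array([[1.0, 0.2],
              [0.6, 0.9],
              [0.3, 1.4],
              [0.1, 0.8]])

def unmix(attenuation):
    """Least-squares concentrations c solving E @ c ~= attenuation for one pixel."""
    c, *_ = np.linalg.lstsq(E, attenuation, rcond=None)
    return c
```

With more wavelengths than chromophores the system is overdetermined, and the least-squares fit averages down measurement noise; faster acquisition means each fit reflects a nearly instantaneous tissue state, as the abstract notes.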
228

Generic template based 3D object reconstruction using regional partitioning.

January 2006 (has links)
Tong Kai Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 76-80). / Abstracts in English and Chinese. / Chapter 1. --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.1 / Chapter 1.2 --- Previous and related works --- p.2 / Chapter 1.3 --- The Proposed Method --- p.4 / Chapter 1.4 --- Thesis outline --- p.6 / Chapter 2. --- Global deformation --- p.8 / Chapter 2.1 --- Feature points --- p.8 / Chapter 2.2 --- The deformation --- p.9 / Chapter 2.2.1 --- Deformation using affine transformation --- p.9 / Chapter 2.2.2 --- Elastic warping using Radial Basis Functions --- p.12 / Chapter 2.2.3 --- Biharmonic and triharmonic basic functions --- p.16 / Chapter 3. --- Local iterative surface fitting --- p.19 / Chapter 3.1 --- Basic closest point method --- p.19 / Chapter 3.2 --- Regional partitioning method --- p.27 / Chapter 3.2.1 --- Defining the regions --- p.29 / Chapter 3.2.2 --- Propagating from the seeds --- p.31 / Chapter 3.2.3 --- Handling the distortions --- p.36 / Chapter 3.3 --- Combined methods for surface fitting --- p.41 / Chapter 3.3.1 --- Summary of the surface fitting methods --- p.41 / Chapter 3.3.2 --- Combining the methods --- p.43 / Chapter 3.3.3 --- Fine-level fitting results --- p.47 / Chapter 4. --- Enhanced template based 3D Object reconstruction --- p.53 / Chapter 4.1 --- Compactly supported radial basis functions --- p.53 / Chapter 4.2 --- Reconstruction using two templates --- p.55 / Chapter 5. --- Implementations and Results --- p.60 / Chapter 5.1 --- Creation of 3D objects --- p.60 / Chapter 5.2 --- Feature points selection --- p.61 / Chapter 5.3 --- Experiment platform --- p.62 / Chapter 5.4 --- Results --- p.63 / Chapter 6. --- Conclusions --- p.71 / Chapter 6.1 --- Contributions --- p.72 / Chapter 6.2 --- Future developments --- p.72 / Appendix A --- p.73 / Voxel based closest point evaluation --- p.73 / References --- p.76
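The global deformation stage outlined in Chapter 2 (elastic warping with radial basis functions, including the triharmonic basis phi(r) = r^3) can be sketched as an exact scattered-data interpolant of the feature-point displacements. This is a generic reconstruction under assumptions, not the thesis's own code:

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, query, phi=lambda r: r ** 3):
    """Warp query points with an RBF interpolant (triharmonic basis r^3 in 3D)
    fitted so that each source feature point maps exactly onto its target."""
    # Pairwise distances between source feature points.
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    # One weight vector per coordinate, interpolating the displacements.
    w = np.linalg.solve(phi(d), dst_pts - src_pts)
    # Evaluate the interpolated displacement at the query points.
    dq = np.linalg.norm(query[:, None, :] - src_pts[None, :, :], axis=-1)
    return query + phi(dq) @ w
```

Applied to every vertex of the generic template, this deforms it so that the selected feature points land on their counterparts on the target object, after which local surface fitting refines the shape.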
229

Modeling and rendering from multiple views. / CUHK electronic theses & dissertations collection

January 2006 (has links)
The first approach, described in the first part of this thesis, studies 3D face modeling from multiple views. Today, human face modeling and animation techniques are widely used to generate virtual characters and models. Such characters and models are used in movies, computer games, advertising, news broadcasting, and other activities. We propose an efficient method to estimate the poses, the global shape, and the local structures of a human head recorded in multiple face images or a video sequence by using a generic wireframe face model. Based on this newly proposed method, we have developed a pose-invariant face recognition system and a pose-invariant face contour extraction method. / The objective of this thesis is to model and render complex scenes or objects from multiple images taken from different viewpoints. Two approaches to this objective are investigated: the first is for known objects with prior geometrical models, which can be deformed to match the objects recorded in multiple input images; the second is for general scenes or objects without prior geometrical models. / The proposed algorithms were tested on many real and synthetic data sets. The experimental results illustrate their efficiency and limitations. / The second approach, described in the second part of this thesis, investigates 3D modeling and rendering for general complex scenes. The entertainment industry touches hundreds of millions of people every day, and synthetic pictures and 3D reconstructions of real scenes, often mixed with actual film footage, are now commonplace in computer games, sports broadcasting, TV advertising, and feature films. A series of techniques has been developed to complete this task. First, a new view-ordering algorithm was proposed to organize and order an unorganized image database. Second, a novel and efficient multiview feature matching approach was developed to calibrate and track all views.
Finally, both match-propagation-based and Bayesian-based methods were developed to produce 3D scene models for rendering. / Yao Jian. / "September 2006." / Adviser: Wai-Kuen Chan. / Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1849. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (p. 170-181). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
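Once views are calibrated and features matched, per-point 3D recovery reduces to triangulation. A minimal linear (DLT) two-view triangulation sketch, offered as a generic illustration rather than the thesis's match-propagation or Bayesian method, looks like:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are image points (x, y)."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A: last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Dense variants of this step, applied over propagated matches, produce the 3D scene models used for rendering.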
230

Three dimensional motion tracking using micro inertial measurement unit and monocular visual system. / 應用微慣性測量單元和單目視覺系統進行三維運動跟踪 / Ying yong wei guan xing ce liang dan yuan he dan mu shi jue xi tong jin xing san wei yun dong gen zong

January 2011 (has links)
Lam, Kin Kwok. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 99-103). / Abstracts in English and Chinese. / Abstract --- p.ii / 摘要 --- p.iii / Acknowledgements --- p.iv / Table of Contents --- p.v / List of Figures --- p.viii / List of Tables --- p.xi / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Intrinsic Problem of Today's Pose Estimation Systems --- p.1 / Chapter 1.2 --- Multi-sensors Data Fusion --- p.2 / Chapter 1.3 --- Objectives and Contributions --- p.3 / Chapter 1.4 --- Organization of the dissertation --- p.4 / Chapter Chapter 2 --- Architecture of Sensing System --- p.5 / Chapter 2.1 --- Hardware for Pose Estimation System --- p.5 / Chapter 2.2 --- Software for Pose Estimation System --- p.6 / Chapter Chapter 3 --- Inertial Measurement System --- p.7 / Chapter 3.1 --- Basic knowledge of Inertial Measurement System --- p.7 / Chapter 3.2 --- Strapdown Inertial Navigation --- p.8 / Chapter 3.2.1 --- Tracking Orientation --- p.9 / Chapter 3.2.2 --- Discussion of Attitude Representations --- p.14 / Chapter 3.2.3 --- Tracking Position --- p.16 / Chapter 3.3 --- Summary of Strapdown Inertial Navigation --- p.16 / Chapter Chapter 4 --- Visual Tracking System --- p.17 / Chapter 4.1 --- Background of Visual Tracking System --- p.17 / Chapter 4.2 --- Basic knowledge of Camera Calibration and Model --- p.18 / Chapter 4.2.1 --- Related Coordinate Frames --- p.18 / Chapter 4.2.2 --- Pinhole Camera Model --- p.20 / Chapter 4.2.3 --- Calibration for Nonlinear Model --- p.21 / Chapter 4.3 --- Implementation of Process to Calibrate Camera --- p.22 / Chapter 4.3.1 --- Image Capture and Corners Extraction --- p.22 / Chapter 4.3.2 --- Camera Calibration --- p.23 / Chapter 4.4 --- Perspective-n-Point Problem --- p.25 / Chapter 4.5 --- Camera Pose Estimation Algorithms --- p.26 / Chapter 4.5.1 --- Pose Estimation Using Quadrangular Targets --- p.27 / Chapter 4.5.2 --- Efficient Perspective-n-Point 
Camera Pose Estimation --- p.31 / Chapter 4.5.3 --- Linear N-Point Camera Pose Determination --- p.33 / Chapter 4.5.4 --- Pose Estimation from Orthography and Scaling with Iterations --- p.36 / Chapter 4.6 --- Experimental Results of Camera Pose Estimation Algorithms --- p.40 / Chapter 4.6.1 --- Simulation Test --- p.40 / Chapter 4.6.2 --- Real Images Test --- p.43 / Chapter 4.6.3 --- Summary --- p.46 / Chapter Chapter 5 --- Kalman Filter --- p.47 / Chapter 5.1 --- Linear Dynamic System Model --- p.48 / Chapter 5.2 --- Time Update --- p.48 / Chapter 5.3 --- Measurement Update --- p.49 / Chapter 5.3.1 --- Maximum a Posterior Probability --- p.49 / Chapter 5.3.2 --- Batch Least-Square Estimation --- p.51 / Chapter 5.3.3 --- Measurement Update in Kalman Filter --- p.54 / Chapter 5.4 --- Summary of Kalman Filter --- p.56 / Chapter Chapter 6 --- Extended Kalman Filter --- p.58 / Chapter 6.1 --- Linearization of Nonlinear Systems --- p.58 / Chapter 6.2 --- Extended Kalman Filter --- p.59 / Chapter Chapter 7 --- Unscented Kalman Filter --- p.61 / Chapter 7.1 --- Least-square Estimator Structure --- p.61 / Chapter 7.2 --- Unscented Transform --- p.62 / Chapter 7.3 --- Unscented Kalman Filter --- p.64 / Chapter Chapter 8 --- Data Fusion Algorithm --- p.68 / Chapter 8.1 --- Traditional Multi-Sensor Data Fusion --- p.69 / Chapter 8.1.1 --- Measurement Fusion --- p.69 / Chapter 8.1.2 --- Track-to-Track Fusion --- p.71 / Chapter 8.2 --- Multi-Sensor Data Fusion using Extended Kalman Filter --- p.72 / Chapter 8.2.1 --- Time Update Model --- p.73 / Chapter 8.2.2 --- Measurement Update Model --- p.74 / Chapter 8.3 --- Multi-Sensor Data Fusion using Unscented Kalman Filter --- p.75 / Chapter 8.3.1 --- Time Update Model --- p.75 / Chapter 8.3.2 --- Measurement Update Model --- p.76 / Chapter 8.4 --- Simulation Test --- p.76 / Chapter 8.5 --- Experimental Test --- p.80 / Chapter 8.5.1 --- Rotational Test --- p.81 / Chapter 8.5.2 --- Translational Test --- p.86 / Chapter Chapter 9 --- 
Future Work --- p.93 / Chapter 9.1 --- Zero Velocity Compensation --- p.93 / Chapter 9.1.1 --- Stroke Segmentation --- p.93 / Chapter 9.1.2 --- Zero Velocity Compensation (ZVC) --- p.94 / Chapter 9.1.3 --- Experimental Results --- p.94 / Chapter 9.2 --- Random Sample Consensus Algorithm (RANSAC) --- p.96 / Chapter Chapter 10 --- Conclusion --- p.97 / Bibliography --- p.99
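The linear Kalman filter at the core of Chapters 5 through 8 can be sketched in a few lines; the constant-velocity model and the noise values in the usage below are illustrative assumptions, not the thesis's tuned IMU/vision fusion parameters:

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One predict-update cycle of the linear Kalman filter."""
    # Time update: propagate the state and covariance through the dynamics.
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement update: blend prediction and measurement via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Track a 1-D constant-velocity target: state is [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # unit time step
H = np.array([[1.0, 0.0]])               # we only measure position
Q = 1e-6 * np.eye(2)
R = np.array([[1e-3]])
x, P = np.zeros(2), np.eye(2)
for t in range(1, 51):
    x, P = kf_step(x, P, np.array([float(t)]), F, Q, H, R)
```

In the fusion setting of Chapter 8, the measurement vector z would stack the inertial and visual observations (or the EKF/UKF variants would replace the linear update), but the predict-update structure is the same.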
