11

Determining the interwall spacing in carbon nanotubes by using transmission electron microscopy / Undersökning av väggavstånden i kolnanorör med hjälp av transmissions-elektronmikroskopi

Tyborowski, Tobias January 2016 (has links)
The interwall spacing of multi-walled carbon nanotubes (MWCNTs) affects their physical and chemical properties. Tubes with an interwall spacing larger than the natural carbon-carbon layer separation are, for instance, expected to be mechanically less stable. Regarding the dependence of the MWCNT interwall spacing on tube size, three previous studies with slightly different conclusions can be found; all of them report an increase in interwall spacing with decreasing tube size. We describe their analysis procedures and compare them with each other and with our own measured data. We first determine that the expected inaccuracy of distances measured from TEM images can be up to 10 %, and we show the impact of the TEM defocus, a powerful setting in TEM imaging. Finally, we find that the interwall spacings do not vary as strongly as one previous study concludes, while our analyses largely agree with the other two studies. The interwall spacings of tubes with an inner diameter larger than 5 nm are relatively constant throughout the tube. Furthermore, the middle spacings (excluding the outer- and innermost ones) are most consistent with the interlayer spacings of turbostratic graphite. In underfocused images, the outer- and innermost spacings tend to be slightly smaller than the middle ones from the same tube.
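The thesis measures interwall distances directly from TEM images. As a rough illustration of one way such distances can be read off a calibrated image, the sketch below extracts wall-to-wall spacings from a 1-D intensity profile taken across a tube wall; the assumption that walls appear as Gaussian-like dark fringes, the prominence threshold, and the pixel size are illustrative choices, not the procedure used in the thesis.

```python
import numpy as np
from scipy.signal import find_peaks

def interwall_spacings(profile, pixel_size_nm):
    """Estimate wall-to-wall spacings from a 1-D TEM intensity profile.

    profile       : intensities sampled along a line crossing the tube walls
    pixel_size_nm : calibrated pixel size in nanometres
    Walls appear as dark fringes, so we detect minima (peaks of the inverted profile).
    """
    profile = np.asarray(profile, dtype=float)
    minima, _ = find_peaks(-profile, prominence=0.05 * np.ptp(profile))
    # Distances between consecutive walls, converted from pixels to nanometres.
    return np.diff(minima) * pixel_size_nm

# Synthetic example: five walls spaced 17 pixels apart at 0.02 nm/pixel (~0.34 nm).
x = np.arange(200)
walls = [40, 57, 74, 91, 108]
profile = 1.0 - sum(np.exp(-((x - w) ** 2) / 8.0) for w in walls)
print(interwall_spacings(profile, pixel_size_nm=0.02))  # -> approx. [0.34 0.34 0.34 0.34]
```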
12

ESTIMATION OF DEPTH FROM DEFOCUS BLUR IN VIRTUAL ENVIRONMENTS COMPARING GRAPH CUTS AND CONVOLUTIONAL NEURAL NETWORK

Prodipto Chowdhury (5931032) 17 January 2019 (has links)
Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one method of depth estimation. In this thesis, we investigate this technique in virtual environments, and virtual datasets have been created for this purpose. We apply graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance on both the natural and virtual datasets in terms of NMAE and NRMSE; however, with regard to SSIM, graph cuts performed 4% better on Middlebury than on Maya. We trained DfD-Net on the natural dataset, on the virtual dataset, and on a combination of both; the network trained on the virtual dataset performed best on both datasets. Comparing the two approaches, graph cuts is 7% better than DfD-Net in terms of SSIM for Middlebury images, while for Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, the two methods perform similarly for Maya images, and graph cuts is 1.8% better for Middlebury images; they show no difference in NMAE. DfD-Net generates depth maps roughly 500 times faster than graph cuts for Maya images and 200 times faster for Middlebury images.
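The comparison relies on NMAE, NRMSE and SSIM computed between estimated and ground-truth depth maps. The snippet below is a minimal sketch of how such metrics might be computed with NumPy and scikit-image; normalising by the ground-truth depth range is an assumption, since the abstract does not spell out the exact normalisation used.

```python
import numpy as np
from skimage.metrics import structural_similarity

def depth_metrics(estimated, ground_truth):
    """Compare an estimated depth map against ground truth.

    NMAE and NRMSE are normalized by the ground-truth depth range here;
    the thesis may use a different normalization.
    """
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    depth_range = gt.max() - gt.min()
    nmae = np.mean(np.abs(est - gt)) / depth_range
    nrmse = np.sqrt(np.mean((est - gt) ** 2)) / depth_range
    ssim = structural_similarity(est, gt, data_range=depth_range)
    return {"NMAE": nmae, "NRMSE": nrmse, "SSIM": ssim}

# Toy example: a planar ground-truth depth map and a noisy estimate of it.
rng = np.random.default_rng(0)
gt = np.tile(np.linspace(1.0, 5.0, 64), (64, 1))
est = gt + rng.normal(scale=0.05, size=gt.shape)
print(depth_metrics(est, gt))
```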
13

Design and analysis of a phase mask for mutifocusing

Guo, Jian-You 07 September 2011 (has links)
Image quality degrades when misfocusing occurs in an imaging system. This thesis aims to design and analyze a phase mask that addresses the misfocusing problem. The depth of field is the range of distances over which a clear image is obtained; since a lens can only keep objects within a fixed range in focus, the image becomes more blurred the further an object lies from this range. In 1995, Dowski and Cathey proposed wave-front coding to increase a system's depth of field so that the image is less susceptible to blur caused by misfocusing: a phase mask placed before the lens extends the depth of field. In this thesis, we extend the approach to a multi-level phase mask. Simulation results show that the multi-level phase mask performs better than the two-level phase mask.
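Wave-front coding places a phase mask in the pupil so that the defocused point spread function (PSF) changes little with misfocus, and a common way to study such masks is to simulate the PSF as the squared magnitude of the Fourier transform of the generalized pupil function. The sketch below does this for a continuous cubic phase profile and a crudely quantized multi-level version of it; the cubic profile, the parameter values and the quantization scheme are illustrative assumptions, not the mask designed in the thesis.

```python
import numpy as np

def defocused_psf(phase_mask, w20_waves, n=256):
    """PSF of a circular pupil carrying a phase mask, under defocus.

    phase_mask(X, Y) : mask phase in radians over the normalized pupil [-1, 1]^2
    w20_waves        : quadratic defocus coefficient W20, in waves
    """
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    inside = (X**2 + Y**2) <= 1.0
    defocus = 2 * np.pi * w20_waves * (X**2 + Y**2)          # standard quadratic defocus phase
    pupil = inside * np.exp(1j * (phase_mask(X, Y) + defocus))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2   # incoherent PSF
    return psf / psf.sum()

def quantize_phase(phase, levels):
    """Wrap the phase into [0, 2*pi) and round it down to one of `levels` steps,
    a simple stand-in for a fabricated multi-level phase mask."""
    step = 2 * np.pi / levels
    return np.floor(np.mod(phase, 2 * np.pi) / step) * step

alpha = 20.0                                  # illustrative cubic-phase strength
cubic = lambda X, Y: alpha * (X**3 + Y**3)    # Dowski-Cathey style cubic profile
multilevel = lambda X, Y: quantize_phase(cubic(X, Y), levels=4)

psf_cubic = defocused_psf(cubic, w20_waves=2.0)
psf_multi = defocused_psf(multilevel, w20_waves=2.0)
print(psf_cubic.shape, psf_multi.shape)
```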
14

New Signal Processing Methods for Blur Detection and Applications

January 2019 (has links)
abstract: The depth richness of a scene translates into spatially variable defocus blur in the acquired image. Blurring can mislead computational image understanding; therefore, blur detection can be used to selectively enhance blurred regions and to apply image-understanding algorithms only to sharp regions. This work focuses on blur detection and its application to image enhancement. It proposes spatially-varying defocus blur detection based on the quotient of spectral bands; additionally, to avoid computationally intensive algorithms for segmenting foreground and background regions, a global threshold defined using weak-textured regions of the input image is proposed. Quantitative results expressed in the precision-recall space, as well as qualitative results, outperform current state-of-the-art algorithms while keeping the computational requirements at competitive levels. Imperfections in the curvature of lenses can lead to image radial distortion (IRD), which can drastically affect computer vision applications. This work proposes a novel robust radial distortion correction algorithm based on alternating optimization with two cost functions tailored to the estimation of the center of distortion and the radial distortion coefficients. Qualitative and quantitative results show the competitiveness of the proposed algorithm. Blur is one of the causes of visual discomfort in stereopsis. Sharpening with traditional algorithms can produce an interdifference that causes eyestrain and visual fatigue for the viewer. A sharpness enhancement method for stereo images that incorporates binocular vision cues and depth information is presented. Perceptual evaluations and quantitative results based on the metric of interdifference deviation are reported; the results of the proposed algorithm are competitive with state-of-the-art stereo algorithms. Digital images and videos are produced every day in astonishing amounts. Consequently, the market-driven demand for higher-quality content is constantly increasing, which leads to the need for image quality assessment (IQA) methods. A training-free, no-reference image sharpness assessment method based on the singular value decomposition of perceptually-weighted normalized gradients of relevant pixels in the input image is proposed. Results over six subject-rated publicly available databases show competitive performance when compared with state-of-the-art algorithms. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
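The proposed blur detector is described as a quotient of spectral bands with a global threshold taken from weak-textured regions. The sketch below only illustrates the general idea behind a spectral-band quotient, scoring each patch by the fraction of its spectral energy above a radial frequency cutoff; the patch size, the cutoff and the absence of the weak-texture thresholding step are simplifications, not the dissertation's actual algorithm.

```python
import numpy as np

def blur_map(gray, patch=32, cutoff=0.25):
    """Per-patch sharpness score as a quotient of spectral bands.

    For each patch, compute the ratio of spectral energy above a radial
    frequency cutoff to the total spectral energy; blurred patches score low.
    This is a generic illustration, not the dissertation's exact measure.
    """
    h, w = gray.shape
    fy = np.fft.fftfreq(patch)[:, None]
    fx = np.fft.fftfreq(patch)[None, :]
    high = np.sqrt(fx**2 + fy**2) > cutoff          # high-frequency band mask
    scores = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            tile = gray[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            spec = np.abs(np.fft.fft2(tile - tile.mean())) ** 2
            total = spec.sum()
            scores[i, j] = spec[high].sum() / total if total > 0 else 0.0
    return scores

# Textured regions score high ("sharp"), smooth regions score low ("blurred").
rng = np.random.default_rng(1)
img = np.tile(np.linspace(0, 1, 128), (128, 1))      # smooth gradient: low scores
img[:, :64] += 0.2 * rng.standard_normal((128, 64))  # noise texture: high scores
print(blur_map(img).round(2))
```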
15

Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network

Chowdhury, Prodipto 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one method of depth estimation. In this thesis, we investigate this technique in virtual environments, and virtual datasets have been created for this purpose. We apply graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance on both the natural and virtual datasets in terms of NMAE and NRMSE; however, with regard to SSIM, graph cuts performed 4% better on Middlebury than on Maya. We trained DfD-Net on the natural dataset, on the virtual dataset, and on a combination of both; the network trained on the virtual dataset performed best on both datasets. Comparing the two approaches, graph cuts is 7% better than DfD-Net in terms of SSIM for Middlebury images, while for Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, the two methods perform similarly for Maya images, and graph cuts is 1.8% better for Middlebury images; they show no difference in NMAE. DfD-Net generates depth maps roughly 500 times faster than graph cuts for Maya images and 200 times faster for Middlebury images.
16

Depth Estimation Methodology for Modern Digital Photography

Sun, Yi 01 October 2019 (has links)
No description available.
17

Depth From Defocused Motion

Myles, Zarina 01 January 2004 (has links)
Motion in depth and/or zooming causes defocus blur. This work presents a solution to the problem of using defocus blur and optical flow information to compute depth at points that defocus when they move. We first formulate a novel algorithm that recovers defocus blur and affine parameters simultaneously. Next, we formulate a novel relationship (the blur-depth relationship) between defocus blur, relative object depth, and three parameters based on camera motion and intrinsic camera parameters. We can handle the situation where a single image contains points that have defocused, become sharper, or are focally unperturbed. Moreover, our formulation is valid regardless of whether the defocus is due to the image plane being in front of or behind the point of sharp focus. The blur-depth relationship requires a sequence of at least three images taken with the camera moving either towards or away from the object. It can be used to obtain an initial estimate of relative depth using one of several non-linear methods. We demonstrate a solution based on the Extended Kalman Filter in which the measurement equation is the blur-depth relationship. The estimate of relative depth is then used to compute an initial estimate of the camera motion parameters. To refine the depth values, the relative depth and camera motion estimates are input into a second Extended Kalman Filter in which the measurement equations are the discrete motion equations. This set of cascaded Kalman filters can be applied iteratively over a longer sequence of images to further refine depth. We conduct several experiments on real scenery to demonstrate the range of object shapes that the algorithm can handle. We show that fairly good estimates of depth can be obtained with just three images.
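The abstract describes an Extended Kalman Filter whose measurement equation is the blur-depth relationship, followed by a second filter driven by the discrete motion equations. The exact blur-depth relationship depends on camera motion and intrinsic parameters that the abstract does not give, so the sketch below only shows the structure of the first filter's measurement update, with a placeholder blur model in which blur grows with the deviation of inverse depth from the in-focus distance.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One Extended Kalman Filter measurement update.

    x, P   : state estimate (relative depth here) and its covariance
    z      : measured defocus blur
    h      : measurement function, blur = h(depth)  (placeholder model below)
    H_jac  : Jacobian of h with respect to the state
    R      : measurement noise covariance
    """
    H = H_jac(x)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - h(x))                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P       # covariance update
    return x, P

# Placeholder blur-depth model: blur proportional to |1/d - 1/d_focus|.
# The thesis's actual relationship depends on camera motion and intrinsics.
k, d_focus = 50.0, 2.0
h = lambda x: np.array([k * abs(1.0 / x[0] - 1.0 / d_focus)])
H_jac = lambda x: np.array([[-k * np.sign(1.0 / x[0] - 1.0 / d_focus) / x[0] ** 2]])

x, P = np.array([1.0]), np.eye(1) * 0.5    # initial relative-depth guess
for z in [12.0, 11.5, 12.3]:               # simulated blur measurements over three frames
    x, P = ekf_update(x, P, np.array([z]), h, H_jac, np.array([[0.2]]))
print(x, P)
```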
18

Peripheral Refractive Error and its Association with Myopia Development and Progression. An examination of the role that peripheral retinal defocus may play in the origin and progression of myopia

Jamal, Heshow January 2019 (has links)
Purpose: There are currently attempts to slow myopia progression by manipulating peripheral refractive error. This study set out to establish the distribution of peripheral refractive errors in hyperopic, emmetropic and myopic children and to test the hypothesis that relative peripheral hyperopia is a risk factor in the onset and progression of myopia. Methods: Refraction was measured under non-cycloplegic conditions, at 0°, 10° (superior, inferior, temporal and nasal retina) and 30° (temporal and nasal retina), at distance and near. Central spherical equivalent refractive error (SER) was used to classify the eyes as myopic (SER ≤ −0.75 D), emmetropic (−0.75 D < SER < +0.75 D) or hyperopic (SER ≥ +0.75 D). Relative peripheral refraction was calculated as the difference between the central (i.e. foveal) and peripheral refractive measurements. At baseline, measurements were taken from 554 children, and from a subset of 300 of these children at the follow-up visit. The interval between the initial and follow-up measurements was 9.71 ± 0.87 months. Results: Results were analysed for 528 participants (10.21 ± 0.94 years old) at baseline and 286 longitudinally. At baseline, myopic children (n = 61) had relative peripheral hyperopia at all eccentricities at distance and near, except at the 10° superior retina, where relative peripheral myopia was observed at near. Hyperopic eyes displayed relative peripheral myopia at all eccentricities, at distance and near. The emmetropes showed a shift from relative peripheral myopia at distance to relative peripheral hyperopia at near at all eccentricities, except at the 10° superior retina, where relative peripheral myopia was maintained at near. In the longitudinal analysis, myopes who became more myopic did not show greater relative peripheral hyperopia at baseline compared with myopic sub-groups whose central refraction remained stable. Conclusions: The differences in peripheral refractive profile between refractive groups reported in other studies were confirmed in this study. Relative peripheral hyperopia was not found to be a significant risk factor in the onset or progression of myopia in children.
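The refractive group definitions and the relative peripheral refraction calculation in the Methods translate directly into a couple of small helpers, sketched below. The sign convention used here (peripheral minus central, so positive values indicate relative peripheral hyperopia) is the usual one in the myopia literature, but it is an assumption, since the abstract only says "difference between the central and peripheral measurements".

```python
def classify_ser(ser_d):
    """Classify central spherical equivalent refraction (SER, in dioptres)
    using the thresholds reported in the study."""
    if ser_d <= -0.75:
        return "myopic"
    if ser_d >= +0.75:
        return "hyperopic"
    return "emmetropic"

def relative_peripheral_refraction(peripheral_d, central_d):
    """Relative peripheral refraction: peripheral minus central (foveal) SER.
    Positive values indicate relative peripheral hyperopia (assumed convention)."""
    return peripheral_d - central_d

print(classify_ser(-1.25))                              # -> myopic
print(relative_peripheral_refraction(+0.50, -1.25))     # -> +1.75 D (relative peripheral hyperopia)
```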
19

Assessment of ocular accommodation in humans

Szostek, Nicola January 2017 (has links)
Accommodation is the change in the dioptric power of the eye that alters focus from distance to near. Presbyopia is the loss of accommodative function that occurs with age. There are many techniques used to measure accommodation; however, there is little consensus on how clinical data should be collected and analysed. The overarching theme of this thesis is the in vivo examination of accommodation and how lifestyle can affect the onset of presbyopia. An open-field autorefractor with a Badal adaptation was used to examine accommodative dynamic profiles under varying vergence demands, and from these data a new metric for assessing the time for accommodative change was derived. Furthermore, this thesis describes a bespoke automated accommodative facility instrument that was developed to provide further assessment of accommodative speeds. Defocus curves are used for assessing accommodation and depth-of-focus; the work presented explores the use of non-linear regression models to define the most appropriate method of assessing defocus curves in phakic subjects and in pseudophakic subjects implanted with an extended depth-of-focus intraocular lens. Using an absolute cut-off criterion of +0.30 logMAR improved the repeatability and reliability of the depth-of-focus metrics compared with a cut-off criterion relative to the best corrected visual acuity. A swept-source anterior segment optical coherence tomographer (AS-OCT) was used to image the morphology of the ciliary muscle during accommodation. The accuracy of ciliary muscle measurements was improved by using reference points on the sclera to align the AS-OCT scan. A ciliary muscle area metric demonstrated poor repeatability and reliability compared with the traditional assessment of muscle morphology via thickness measurements. Physiological ageing of the crystalline lens occurs in line with ageing in other structures of the body. The methods for assessing accommodative function examined in previous chapters were used to examine whether lifestyle factors that affect the rate of systemic ageing, such as smoking, also affect accommodative function. Although being a current smoker and having greater central adiposity were associated with a slower time for accommodative change, further research is required before these findings can be applied to the target population.
20

A novel 3D recovery method by dynamic (de)focused projection

Lertrusdachakul, Intuon 30 November 2011 (has links) (PDF)
This paper presents a novel 3D recovery method based on structured light. The method unifies depth from focus (DFF) and depth from defocus (DFD) techniques through the use of a dynamic (de)focused projection. With this approach, the image acquisition system is specifically constructed to keep the whole object sharp in all of the captured images, so that only the projected patterns experience different defocused deformations according to the object's depth. When the projected patterns are out of focus, their point spread function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analysing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depth. Our depth estimation can be employed as a stand-alone strategy: it is unaffected by occlusion and correspondence issues, and it handles textureless and partially reflective surfaces. Experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimates with competitive computation time. It uses fewer input images than DFF, and unlike DFD, it ensures that the PSF is locally unique.
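The method assumes that the out-of-focus projected pattern has a Gaussian PSF and recovers depth from how the PSF spread varies with object depth. The sketch below shows one elementary way to estimate a Gaussian blur sigma from a blurred step edge of a projected stripe and to map it to depth through a calibration table; the edge-based estimator and the calibration values are illustrative placeholders, not the paper's analysis of PSF sets.

```python
import numpy as np
from scipy.special import erf

def estimate_sigma(edge_profile, pixel_size_mm):
    """Estimate the Gaussian PSF sigma from a blurred step edge.

    A step edge blurred by a Gaussian becomes an error-function profile;
    its derivative is the Gaussian line-spread function, whose standard
    deviation is estimated directly from the weighted second moment.
    """
    lsf = np.abs(np.diff(np.asarray(edge_profile, dtype=float)))
    x = np.arange(lsf.size)
    mean = np.sum(x * lsf) / np.sum(lsf)
    var = np.sum(((x - mean) ** 2) * lsf) / np.sum(lsf)
    return np.sqrt(var) * pixel_size_mm

def sigma_to_depth(sigma, calib_sigmas, calib_depths):
    """Map a PSF sigma to depth by interpolating a calibration table
    (placeholder values; a real system would measure these)."""
    return np.interp(sigma, calib_sigmas, calib_depths)

# Synthetic blurred edge with sigma = 2 pixels at 0.1 mm/pixel.
x = np.arange(100)
edge = 0.5 * (1 + erf((x - 50) / (2 * np.sqrt(2))))
sigma = estimate_sigma(edge, pixel_size_mm=0.1)
print(sigma)                                                     # -> ~0.2 mm
print(sigma_to_depth(sigma, [0.1, 0.2, 0.3], [300, 400, 500]))   # placeholder calibration (mm)
```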
