101

Using Motion Fields to Estimate Video Utility and Detect GPS Spoofing

Carroll, Brandon T. 08 August 2012 (has links) (PDF)
This work explores two areas of research. The first is the development of a video utility metric for use in aerial surveillance and reconnaissance tasks. To our knowledge, no existing metric computes how useful aerial video is to a human performing tasks such as detection, recognition, or identification (DRI). However, the Targeting Task Performance (TTP) metric was previously developed to estimate the usefulness of still images for DRI tasks. We modify and extend the TTP metric to create a similar metric for video, called Video Targeting Task Performance (VTTP). The VTTP metric accounts for factors such as lighting, motion blur, the characteristics of human vision, and the size of an object in the image. VTTP can also be calculated predictively to estimate the utility that a proposed flight path will yield, which allows it to help automate path planning so that operators can devote more of their attention to DRI. We have used the metric to plan and fly actual paths, and a small user study verified that VTTP correlates with subjective human assessment of video. The second area of research explores a new method of detecting GPS spoofing on an unmanned aerial system (UAS) equipped with a camera and a terrain elevation map. Spoofing allows an attacker to remotely tamper with the position, time, and velocity readings output by a GPS receiver. This tampering can throw off the UAS's state estimates, but the optical flow through the camera still depends on the actual movement of the UAS. We develop a method of detecting spoofing by calculating the expected optical flow from the state estimates and comparing it against the actual optical flow. If the UAS is successfully spoofed to a different location, the detector can also be triggered by differences in terrain between where the UAS actually is and where it believes it is. We tested the spoofing detector in simulation and found that it works well in some scenarios.
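A minimal sketch of the detection idea described in this abstract: project terrain points through the (possibly spoofed) pose estimates to obtain the expected optical flow, then compare it against the flow actually measured from the camera. All function and parameter names here are hypothetical, and the pinhole projection model and residual threshold are assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def project(points_w, R, t, K):
    """Pinhole projection of world points into the image (assumed model)."""
    cam = (R @ points_w.T).T + t          # world -> camera coordinates
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]         # perspective divide

def expected_flow(terrain_pts, pose_prev, pose_curr, K):
    """Image-plane displacement of terrain points implied by the
    navigation filter's pose estimates (the 'expected' optical flow)."""
    (R0, t0), (R1, t1) = pose_prev, pose_curr
    return project(terrain_pts, R1, t1, K) - project(terrain_pts, R0, t0, K)

def spoofing_alarm(flow_expected, flow_measured, threshold_px=2.0):
    """Flag spoofing when expected and measured flow disagree on average.
    threshold_px is a hypothetical tuning parameter."""
    residual = np.linalg.norm(flow_expected - flow_measured, axis=1)
    return residual.mean() > threshold_px
```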
102

Correcting for Patient Breathing Motion in PET Imaging

O'Briain, Teaghan 26 August 2022 (has links)
Positron emission tomography (PET) requires imaging times of several minutes, so when imaging areas that are prone to respiratory motion, blurring effects are often observed. This blurring can impair our ability to use the images for diagnostic purposes as well as for treatment planning. While methods exist to account for this effect, they often rely on adjustments to the imaging protocol in the form of longer scan times or higher doses of radiation to the patient. This dissertation explores an alternative approach that leverages state-of-the-art deep learning techniques to align the PET signal acquired at different points of the breathing motion. The method does not require adjustments to standard clinical protocols and is therefore more efficient and/or safer than the most widely adopted approach.

To help validate this method, Monte Carlo (MC) simulations were conducted to emulate the PET imaging process; these simulations represent the focus of our first experiment. The second experiment was the development and testing of our motion correction method. A clinical four-ring PET imaging system was modelled using GATE (v. 9.0). To validate the simulations, PET images of a cylindrical phantom, a point source, and an image quality phantom were acquired with the modelled system, and the experimental procedures were also simulated. The simulations were compared against the measurements in terms of count rates and sensitivity as well as image uniformity, resolution, recovery coefficients, coefficients of variation, contrast, and background variability. When compared to the measured data, the number of true detections in the MC simulations was within 5%. The scatter fraction was found to be (31.1 ± 1.1)% and (29.8 ± 0.8)% in the measured and simulated scans, respectively. Analyzing the measured and simulated sinograms, the sensitivities were found to be 10.0 cps/kBq and 9.5 cps/kBq, respectively. The fraction of random coincidences was 19% in the measured data and 25% in the simulation. When calculating the image uniformity within the axial slices, the measured image exhibited a uniformity of (0.015 ± 0.005), while the simulated image had a uniformity of (0.029 ± 0.011). In the axial direction, the uniformity was (0.024 ± 0.006) and (0.040 ± 0.015) for the measured and simulated data, respectively. Comparing image resolution, an average percentage difference of 2.9% was found between the measurements and simulations. The recovery coefficients calculated in both the measured and simulated images were within the EARL ranges, except for that of the simulation of the smallest sphere. The coefficients of variation for the measured and simulated images were 12% and 13%, respectively. Lastly, the background variability was consistent between the measurements and simulations, while the average percentage difference in the sphere contrasts was 8.8%. The code used to run the GATE simulations and evaluate the described metrics has been made available (https://github.com/teaghan/PET_MonteCarlo).

Next, to correct for breathing motion in PET imaging, an interpretable and unsupervised deep learning technique, FlowNet-PET, was constructed. The network was trained to predict the optical flow between two PET frames from different breathing amplitude ranges.
As a result, the trained model groups different retrospectively gated PET images into a single motion-corrected bin, providing a final image with counting statistics similar to those of a non-gated image but without the blurring effects initially observed. As a proof of concept, FlowNet-PET was applied to anthropomorphic digital phantom data, which made it possible to design robust metrics to quantify the corrections. When comparing the predicted optical flows to the ground truths, the median absolute error was smaller than the pixel and slice widths, even for the phantom with a diaphragm movement of 21 mm. The improvements were illustrated by comparing against images without motion and computing the intersection over union (IoU) of the tumors as well as the enclosed activity and coefficient of variation (CoV) within the no-motion tumor volume before and after the corrections were applied. The average relative improvements provided by the network were 54%, 90%, and 76% for the IoU, total activity, and CoV, respectively. The results were then compared against the conventional retrospective phase binning approach: FlowNet-PET achieved similar results but required only one sixth of the scan duration. The code and data used for training and analysis have been made publicly available (https://github.com/teaghan/FlowNet_PET). The encouraging results of our motion correction method open up many possible future applications. For instance, the method could be transferred to clinical patient PET images or applied to alternative imaging modalities that would benefit from similar motion corrections. Applied to clinical PET images, FlowNet-PET would provide the capability of acquiring high-quality images without requiring either longer scan times or higher patient doses of radiation. The imaging process would thus likely become more efficient and/or safer, to the benefit of both health care institutions and their patients.
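A minimal sketch of the correction step this abstract describes, assuming the network has already predicted a dense displacement field for each gated frame: every bin is warped toward a reference breathing amplitude and the aligned frames are summed, recovering non-gated counting statistics without the motion blur. The function names and the backward-warping convention are assumptions for illustration, not the released FlowNet_PET code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to_reference(frame, flow):
    """Warp one gated PET frame toward the reference bin using a dense
    displacement field `flow` of shape (3, D, H, W) in voxel units
    (backward-warping convention assumed)."""
    grid = np.indices(frame.shape).astype(np.float64)
    coords = grid + flow              # where each output voxel samples from
    return map_coordinates(frame, coords, order=1, mode='nearest')

def motion_corrected_bin(gated_frames, flows):
    """Sum all gated frames after aligning them to the reference bin
    (the first frame), so counts accumulate without motion blur."""
    corrected = gated_frames[0].astype(np.float64).copy()
    for frame, flow in zip(gated_frames[1:], flows):
        corrected += warp_to_reference(frame.astype(np.float64), flow)
    return corrected
```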
103

MonoDepth-vSLAM: A Visual EKF-SLAM using Optical Flow and Monocular Depth Estimation

Dey, Rohit 04 October 2021 (has links)
No description available.
104

Depth From Defocused Motion

Myles, Zarina 01 January 2004 (has links)
Motion in depth and/or zooming causes defocus blur. This work presents a solution to the problem of using defocus blur and optical flow information to compute depth at points that defocus as they move. We first formulate a novel algorithm that recovers defocus blur and affine parameters simultaneously. Next we formulate a novel relationship (the blur-depth relationship) between defocus blur, relative object depth, and three parameters based on camera motion and intrinsic camera parameters. We can handle the situation where a single image contains points that have become more defocused, become sharper, or remained focally unperturbed. Moreover, our formulation is valid regardless of whether the defocus is due to the image plane being in front of or behind the point of sharp focus. The blur-depth relationship requires a sequence of at least three images taken with the camera moving either towards or away from the object. It can be used to obtain an initial estimate of relative depth using one of several non-linear methods. We demonstrate a solution based on the Extended Kalman Filter in which the measurement equation is the blur-depth relationship. The estimate of relative depth is then used to compute an initial estimate of the camera motion parameters. To refine the depth values, the relative depth and camera motion estimates are then input to a second Extended Kalman Filter in which the measurement equations are the discrete motion equations. This set of cascaded Kalman filters can be employed iteratively over a longer sequence of images to further refine depth. We conduct several experiments on real scenery to demonstrate the range of object shapes that the algorithm can handle, and we show that fairly good estimates of depth can be obtained with just three images.
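A minimal sketch of one measurement update in the cascaded Extended Kalman Filter scheme the abstract describes. The blur-depth relationship itself is specific to the thesis and is not reproduced here; it enters only as an abstract measurement function `h` with Jacobian `H_jac`, both hypothetical placeholders.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update.
    x, P  : state estimate (e.g. relative depths, motion params) and covariance
    z     : measurement vector (e.g. defocus blur at tracked points)
    h     : measurement function (here, the blur-depth relationship)
    H_jac : function returning the Jacobian of h at x
    R     : measurement noise covariance
    """
    H = H_jac(x)
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a cascaded arrangement such as the one described, the output of a first filter (relative depth from blur measurements) would seed the state of a second filter whose measurement equations are the discrete motion equations, and the pair can be iterated over longer image sequences.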
105

Hybrid And Hierarchical Image Registration Techniques

Xu, Dongjiang 01 January 2004 (has links)
A large number of image registration techniques have been developed for various types of sensors and applications, with the aim of improving accuracy, computational complexity, generality, and robustness. They can be broadly classified into two categories: intensity-based and feature-based methods. The primary drawback of intensity-based approaches is that they may fail unless the two images are misaligned by no more than a moderate difference in scale, rotation, and translation. In addition, intensity-based methods lack robustness in the presence of non-spatial distortions due to different imaging conditions between images. In this dissertation, image registration is formulated as a two-stage hybrid approach combining an initial matching and a final matching in a coarse-to-fine manner. In the proposed hybrid framework, the initial matching algorithm is applied at the coarsest scale of the images, where approximate transformation parameters can first be estimated. Subsequently, a robust gradient-based estimation algorithm is incorporated into the proposed hybrid approach using a multi-resolution scheme. Several novel and effective initial matching algorithms are proposed for the first stage. The variations in intensity characteristics between images may be large and non-uniform because of non-spatial distortions; therefore, to effectively incorporate gradient-based robust estimation into the proposed framework, a fundamental question must be addressed: what is a good image representation to work with when using gradient-based robust estimation under non-spatial distortions? With the initial matching algorithms applied at the highest level of decomposition, the proposed hybrid approach exhibits a superior range of convergence. The gradient-based algorithms in the second stage yield a robust solution that registers images with sub-pixel accuracy. A hierarchical iterative search further enhances the convergence range and rate. Simulation results demonstrate that the proposed techniques provide significant benefits to the performance of image registration.
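A minimal sketch of the coarse-to-fine idea described in this abstract, using OpenCV's ECC gradient-based alignment as a stand-in for the thesis's own second-stage estimator. The thesis's initial matching algorithms are not reproduced, so this sketch simply starts from an identity affine guess at the coarsest pyramid level.

```python
import cv2
import numpy as np

def coarse_to_fine_register(fixed, moving, levels=4):
    """Refine an affine warp from the coarsest pyramid level down to full
    resolution with gradient-based ECC alignment (a stand-in estimator)."""
    warp = np.eye(2, 3, dtype=np.float32)     # identity initial guess
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    for lvl in range(levels - 1, -1, -1):
        scale = 1.0 / (2 ** lvl)
        f = cv2.resize(fixed, None, fx=scale, fy=scale).astype(np.float32)
        m = cv2.resize(moving, None, fx=scale, fy=scale).astype(np.float32)
        _, warp = cv2.findTransformECC(f, m, warp, cv2.MOTION_AFFINE, criteria)
        if lvl > 0:
            warp[:, 2] *= 2.0   # translation doubles at the next finer level
    return warp
```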
106

Computational analysis of smile weight distribution across the face for accurate distinction between genuine and posed smiles

Al-dahoud, Ahmad, Ugail, Hassan January 2018 (has links)
In this paper, we report the results of our recent research into the exact distribution of a smile across the face, in particular the difference in the weight distribution of a smile between a genuine and a posed smile. To do this, we have developed a computational framework for analysing the dynamic motion of various parts of the face during a facial expression, in particular the smile. The heart of our dynamic smile analysis framework is the use of optical flow intensity variation across the face during a smile, which can be utilised to efficiently map the dynamic motion of individual regions of the face such as the mouth, cheeks, and areas around the eyes. Through our computational framework, we thus infer the exact distribution of the weights of the smile across the face. Further, using two publicly available datasets, namely the CK+ dataset with 83 subjects expressing posed smiles and the MUG dataset with 35 subjects expressing genuine smiles, we show that there is far greater activity or weight distribution around the regions of the eyes in the case of a genuine smile. / Supported in part by the European Union's Horizon 2020 Programme H2020-MSCA-RISE-2017, under the project PDE-GIR with grant number 778035.
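A minimal sketch of the kind of region-wise analysis this paper describes, assuming face-region masks are available (e.g. from a landmark detector): dense optical-flow magnitude is accumulated per region over the smile sequence and normalised into a weight distribution. The function name and the Farneback parameters are illustrative assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def region_flow_weights(frames, regions):
    """Accumulate dense optical-flow magnitude per facial region across a
    smile sequence; `regions` maps a region name (e.g. 'mouth', 'left_eye')
    to a boolean pixel mask. Returns a normalised weight per region."""
    totals = {name: 0.0 for name in regions}
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)    # per-pixel flow magnitude
        for name, mask in regions.items():
            totals[name] += mag[mask].sum()
        prev = gray
    total = sum(totals.values())
    return {name: v / total for name, v in totals.items()}
```

Under the paper's finding, a genuine smile would yield noticeably larger weights for the eye regions than a posed one.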
107

Fault Diagnosis and Accommodation in Quadrotor Simultaneous Localization and Mapping Systems

Green, Anthony J. 05 June 2023 (has links)
No description available.
108

Markerless Tracking Using Polar Correlation Of Camera Optical Flow

Gupta, Prince 01 January 2010 (has links)
We present a novel, real-time, markerless vision-based tracking system employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and that it is able to track large-range motions even in outdoor settings. We also show how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms that recover motion parameters at different levels of completeness. We show how optical flow in opposing cameras can be used to recover the motion parameters of the multi-camera rig. Experimental results show gesture recognition accuracies of 88.0%, 90.7%, and 86.7% for the three techniques, respectively, across a set of 15 gestures.
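A minimal sketch of the cancellation idea described above, under the simplifying assumption that the two opposing cameras' image axes are aligned so that rig rotation induces a common mean-flow component in both views while rig translation induces opposite components; summing and differencing the mean flows then separates the two. This illustrates the principle only, not the thesis's exact formulation.

```python
import numpy as np

def decompose_opposing_flows(flow_cam_a, flow_cam_b):
    """Separate rotational and translational flow for two opposing cameras.
    flow_cam_a, flow_cam_b: (H, W, 2) dense flow fields from the two views,
    assumed expressed in a shared image-axis convention."""
    mean_a = flow_cam_a.reshape(-1, 2).mean(axis=0)
    mean_b = flow_cam_b.reshape(-1, 2).mean(axis=0)
    rotational = 0.5 * (mean_a + mean_b)      # common component cancels translation
    translational = 0.5 * (mean_a - mean_b)   # opposing component cancels rotation
    return rotational, translational
```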
109

Robust Feature Based Reconstruction Technique to Remove Rain from Video

Santhaseelan, Varun January 2013 (has links)
No description available.
110

Development of a Low-Cost Solution for the Navigation of UAVs in GPS-Denied Environment

Ashraf, Shahrukh January 2016 (has links)
No description available.
