1. Computer vision applications on graphics processing units

Ohmer, Julius Fabian. January 2007.
Over the last few years, commodity Graphics Processing Units (GPUs) have evolved from fixed graphics-pipeline processors into more flexible and powerful data-parallel processors. These stream processors can sustain computation rates greater than ten times that of a single-core CPU. GPUs are inexpensive and are becoming ubiquitous in a wide variety of computer architectures, including desktop and laptop computers, PDAs and cell phones. This research investigates possible ways to use modern GPUs for real-time computer vision and pattern classification tasks. Special attention is paid to algorithms where the power of the CPU is a limiting factor. This is in particular the case for real-time tracking algorithms on video streams, where many candidate regions must be evaluated at once to allow stable tracking of features, imposing a high computational burden on sequential processing units such as the CPU. The implementations presented in this thesis target standard PC platforms rather than expensive dedicated hardware, so that a broad variety of users can benefit from powerful computer vision applications. In particular, this thesis covers the following topics:

1. A framework for computer vision on the GPU, which is used as a foundation for the implementation of computer vision methods.
2. GPU-based implementations of kernel methods, including Support Vector Machines and Kernel PCA.
3. GPU-accelerated implementations of two tracking algorithms: the first uses geometric templates in a gradient vector field; the second is a color-based approach in a particle filter framework. Both are able to track objects in a video stream.

The thesis concludes with a discussion of the presented methods, proposes directions for further research, and briefly presents the features of the next generation of GPUs.
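The color-based particle filter hints at why these workloads suit a GPU: every candidate region is scored independently against the same reference model. The NumPy sketch below illustrates one such weighting step for a color-histogram particle filter. It is a generic, CPU-side illustration of the technique, not the thesis's GPU implementation; the function names (`color_histogram`, `particle_weights`) and parameter values are hypothetical.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized joint RGB histogram of an image patch (H, W, 3), values in [0, 255]."""
    idx = patch.astype(int) // (256 // bins)          # quantize each channel
    flat = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins**3).astype(float)
    return hist / hist.sum()

def particle_weights(frame, particles, ref_hist, half=16, sigma=0.2):
    """Score each particle (x, y) by color-histogram similarity to ref_hist.

    Each candidate region is evaluated independently, which is exactly the
    data-parallel workload that maps well onto a GPU; this loop is only a
    sequential CPU illustration of the same computation.
    """
    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles):
        patch = frame[y - half:y + half, x - half:x + half]
        bc = np.sum(np.sqrt(color_histogram(patch) * ref_hist))  # Bhattacharyya coefficient
        weights[i] = np.exp(-(1.0 - bc) / (2.0 * sigma**2))      # likelihood of the distance
    return weights / weights.sum()
```

On a GPU, each iteration of the loop would become one thread or fragment, which is what makes evaluating many candidate regions at once tractable in real time.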
2. Robust subspace estimation via low-rank and sparse decomposition and applications in computer vision

Ebadi, Salehe Erfanian. January 2018.
Recent advances in robust subspace estimation have made dimensionality reduction and the suppression of noise and outliers an active area of research, alongside continuous improvements in computer vision applications. Because image and video signals require a high-dimensional representation, their storage, processing, transmission, and analysis is often difficult. It is therefore desirable to obtain a low-dimensional representation of such signals while simultaneously correcting for corruptions, errors, and outliers, so that the signals can readily be used for later processing.

Major recent advances in low-rank modelling in this context were initiated by the work of Candès et al. [17], who provided a solution to the long-standing problem of decomposing a matrix into low-rank and sparse components in a Robust Principal Component Analysis (RPCA) framework. For computer vision applications, however, RPCA is often too complex and/or may not yield desirable results. The low-rank component obtained by RPCA usually has an unnecessarily high rank, while certain tasks require lower-dimensional representations. RPCA can robustly estimate noise and outliers and separate them from the low-rank component via a sparse part, but it offers no insight into the structure of the sparse solution, nor a way to further decompose the sparse part into random noise and a structured sparse component, which would be advantageous in many computer vision tasks. Moreover, as video signals are usually captured by a moving camera, obtaining a low-rank component by RPCA becomes impossible.

In this thesis, novel Approximated RPCA algorithms are presented, targeting different shortcomings of RPCA. The RPCA solver was analysed to identify its most time-consuming steps, which were replaced with simpler yet tractable alternatives. The proposed method obtains the exact desired rank for the low-rank component while estimating a global transformation that describes camera-induced motion. Furthermore, it decomposes the sparse part into a foreground sparse component and a random-noise part that contains no useful information for computer vision processing. The foreground sparse component is obtained by several novel structured sparsity-inducing norms that better encapsulate the pixel structure needed in visual signals. Moreover, algorithms for reducing the complexity of low-rank estimation are proposed that achieve significant complexity reduction without sacrificing the visual representation of video and image information.

The proposed algorithms are applied to several fundamental computer vision tasks, namely high-efficiency video coding; batch image alignment, inpainting, and recovery; video stabilisation; background modelling and foreground segmentation; robust subspace clustering and motion estimation; face recognition; and ultra-high-definition image and video super-resolution. The algorithms proposed in this thesis, including batch image alignment and recovery, background modelling and foreground segmentation, robust subspace clustering and motion segmentation, and ultra-high-definition image and video super-resolution, achieve results that are either state-of-the-art or comparable to existing methods.
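For orientation, the decomposition at the heart of RPCA splits an observation matrix M into a low-rank part L and a sparse part S. The sketch below is a deliberately simplified alternating scheme built on singular value thresholding and soft thresholding; it is not the inexact-ALM solver of Candès et al. [17] and not the thesis's Approximated RPCA, and the thresholds `lam` and `tau` are heuristic assumptions.

```python
import numpy as np

def rpca_sketch(M, lam=None, tau=None, n_iter=100):
    """Toy alternating solver for M ≈ L (low-rank) + S (sparse).

    A simplified illustration of the RPCA decomposition; step sizes
    and stopping are heuristic, chosen for readability over accuracy.
    """
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))    # standard PCP weight
    tau = tau if tau is not None else 0.1 * np.linalg.norm(M, 2)  # SVT threshold
    S = np.zeros(M.shape)
    for _ in range(n_iter):
        # Singular value thresholding: shrink the spectrum of the residual
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Soft thresholding: keep only large-magnitude residual entries in S
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * tau, 0.0)
    return L, S
```

In a background-modelling reading, each column of M would be a vectorised video frame, L the quasi-static background, and S the foreground plus noise, which the thesis further separates using structured sparsity-inducing norms.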
3. Measuring Kinematics and Kinetics Using Computer Vision and Tactile Gloves for Ergonomics Assessments

Zhou, Guoyang. 24 June 2024.
Measuring human kinematics and kinetics is critical for ergonomists to evaluate ergonomic risks related to physical workloads, which is essential for ensuring workplace health and safety. Human kinematics describes human body postures and movements in six degrees of freedom (DOF), whereas kinetics describes the external forces acting on the human body, such as the weight of loads being handled. Measuring them in the workplace has remained costly, as doing so requires expensive equipment, such as motion capture systems, or is only possible manually, such as measuring weight with a force gauge. Due to these limitations, most ergonomics assessments are conducted in laboratory settings, mainly to evaluate and improve the design of workspaces, production tools, and tasks. Continuous monitoring of workers' ergonomic risks during daily operations has been challenging, yet it is critical for ergonomists to make timely decisions to prevent workplace injuries.

Motivated by this gap, this dissertation proposes three studies that introduce novel low-cost, minimally intrusive, and automated methods to measure human kinematics and kinetics for ergonomics assessments. Study 1 proposed ErgoNet, a deep learning and computer vision network that takes a monocular image as input and predicts absolute 3D human body joint positions and rotations in the camera coordinate system. It achieved a Mean Per Joint Position Error of 10.69 cm and a Mean Per Joint Rotation Error of 13.67 degrees, demonstrating the ability to measure 6 DOF joint kinematics for continuous, dynamic ergonomics assessments and biomechanical modeling using just a single camera.

Studies 2 and 3 showed the potential of using pressure-sensing gloves (i.e., tactile gloves) to predict ergonomic risks in lifting tasks, especially the weight of loads. Study 2 investigated the impact of different lifting risk factors on the tactile gloves' pressure measurements, demonstrating through linear regression analyses that the measured pressure correlates significantly with the weight of loads; lifting height, direction, and hand type were also found to significantly affect the measured pressure. However, the results also showed that a linear regression model may not be the best way to predict load weight from tactile-glove data, as the weight of loads explained only 58% of the variance in the measured pressure according to the R-squared value. Study 3 therefore proposed deep learning techniques, specifically convolutional neural networks, to predict the weight of loads in lifting tasks from the raw tactile-glove measurements. The best model in study 3 achieved a mean absolute error of 1.58 kg, the most accurate solution so far for predicting load weight in lifting tasks.

Overall, the proposed studies introduce novel solutions for measuring human kinematics and kinetics. They can significantly reduce the cost of conducting ergonomics assessments and help ergonomists continuously monitor and evaluate workers' ergonomic risks in daily operations.
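For reference, the two kinematics metrics quoted for ErgoNet can be computed as in the sketch below. This is a generic illustration of the standard definitions, assuming per-joint 3D positions in centimetres and per-joint Euler angles in degrees; the dissertation's exact evaluation protocol (for instance, a geodesic rotation error on SO(3)) may differ.

```python
import numpy as np

def mpjpe(pred_xyz, gt_xyz):
    """Mean Per Joint Position Error: average Euclidean distance per joint.

    pred_xyz, gt_xyz: (n_joints, 3) arrays in the camera frame, in cm.
    """
    return float(np.mean(np.linalg.norm(pred_xyz - gt_xyz, axis=-1)))

def mpjre(pred_deg, gt_deg):
    """Mean Per Joint Rotation Error, computed naively on Euler angles.

    A proper protocol would compare rotations geodesically on SO(3);
    this wrap-around angle difference is an illustrative simplification.
    """
    d = np.abs(pred_deg - gt_deg) % 360.0
    return float(np.mean(np.minimum(d, 360.0 - d)))
```

Under these definitions, ErgoNet's reported accuracy corresponds to an average per-joint position error of 10.69 cm and an average per-joint rotation error of 13.67 degrees.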
