41

DIGITAL VISARAVLÄSNING (DIGITAL GAUGE READING)

Åberg, Andreas, Åström, Viktor January 2021 (has links)
In modern industrial environments, a large number of analog dial instruments are still in use. It is desirable to monitor these instruments digitally, so that measurement data can be checked without personnel having to be on site. No application developed to fulfill this function is currently available on the market. This thesis investigated methods for digitally reading the value of an analog gauge needle and developed a prototype capable of performing the task. The prototype was developed using computer vision algorithms to read the value of the analog needle. The algorithms were implemented on a Raspberry Pi 4 Model B with a camera, the Raspberry Pi Camera Module V2. The prototype fulfills the requested functions and achieved an accuracy of 0.97% ± 0.75 of the percentage value measured over an analog gauge's full measurement span, with a resolution of 2.5%.
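As a rough illustration of the kind of computer-vision pipeline such a prototype might use, the sketch below detects the needle as the line passing closest to the dial center with OpenCV's Hough transform and maps its angle to a percentage of the gauge span. The centered-dial assumption and the calibration angles are hypothetical, not taken from the thesis.

```python
import cv2
import numpy as np

def read_gauge(image_path, zero_angle=225.0, full_angle=-45.0):
    """Estimate a dial gauge reading as a percentage of full span.

    zero_angle/full_angle are the assumed needle angles (degrees, math
    convention) at 0% and 100% of the scale -- hypothetical calibration
    values, to be measured for a real gauge.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Assume the dial is roughly centered in the frame.
    cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=int(min(img.shape) * 0.25),
                            maxLineGap=10)
    if lines is None:
        return None

    # The needle is taken as the detected line passing closest to the center.
    def dist_to_center(line):
        x1, y1, x2, y2 = line[0]
        return abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1) / \
               (np.hypot(x2 - x1, y2 - y1) + 1e-9)

    x1, y1, x2, y2 = min(lines, key=dist_to_center)[0]

    # Use the endpoint farther from the center as the needle tip.
    tip = (x2, y2) if np.hypot(x2 - cx, y2 - cy) > np.hypot(x1 - cx, y1 - cy) \
        else (x1, y1)
    angle = np.degrees(np.arctan2(cy - tip[1], tip[0] - cx))  # image y points down

    # Clockwise sweep from the zero mark, mapped linearly onto the span.
    span = (zero_angle - full_angle) % 360.0
    sweep = (zero_angle - angle) % 360.0
    return 100.0 * sweep / span
```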
42

Image Blur Detection with Two-Dimensional Haar Wavelet Transform

Andhavarapu, Sarat Kiran 01 August 2015 (has links)
Efficient detection of image blur and its extent is an open research problem in computer vision. Image blur has a negative impact on image quality. Blur is introduced into images due to various factors including limited contrast, improper exposure time or unstable device handling. Toward this end, an algorithm is presented for image blur detection with the use of Two-Dimensional Haar Wavelet transform (2D HWT). The algorithm is experimentally compared with two other image blur detection algorithms frequently cited in the literature. When evaluated over a sample of images, the algorithm performed on par or better than the two other blur detection algorithms.
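A minimal sketch of one simple HWT-based blur measure, assuming PyWavelets (pywt) is available. Published algorithms of this kind (and possibly this thesis's) instead classify edge types across scales, so the energy-ratio heuristic below is illustrative only.

```python
import numpy as np
import pywt

def blur_score(gray_image, levels=3):
    """Rough blur measure from a multi-level 2D Haar wavelet transform.

    Sharp images keep substantial energy in the finest-scale detail
    coefficients, so a low fine-to-coarse detail-energy ratio suggests blur.
    """
    coeffs = pywt.wavedec2(gray_image.astype(float), 'haar', level=levels)
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    detail_energy = [
        sum(float(np.mean(np.abs(band) ** 2)) for band in level_bands)
        for level_bands in coeffs[1:]
    ]
    coarse, fine = detail_energy[0], detail_energy[-1]
    return fine / (coarse + 1e-12)   # small ratio -> likely blurred
```

Thresholding this ratio against a tuned cutoff would give a binary blurred/sharp decision of the kind the evaluation in the thesis measures.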
43

Estimation of Defocus Blur in Virtual Environments Comparing Graph Cuts and Convolutional Neural Network

Chowdhury, Prodipto 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Depth estimation is one of the most important problems in computer vision. It has attracted a lot of attention because it has applications in many areas, such as robotics, VR and AR, and self-driving cars. Using the defocus blur of a camera lens is one such depth estimation method. In this thesis, we have researched this technique in virtual environments, creating virtual datasets for the purpose. We have applied graph cuts and a convolutional neural network (DfD-Net) to estimate depth from defocus blur using a natural (Middlebury) and a virtual (Maya) dataset. Graph cuts showed similar performance for both natural and virtual datasets in terms of NMAE and NRMSE; with regard to SSIM, however, its performance is 4% better for Middlebury than for Maya. We trained DfD-Net on the natural dataset, the virtual dataset, and both combined; the network trained on the virtual dataset performed best for both datasets. Comparing the two approaches, graph cuts is 7% better than DfD-Net in terms of SSIM for Middlebury images, while for Maya images DfD-Net outperforms graph cuts by 2%. With regard to NRMSE, the two show similar performance on Maya images, and graph cuts is 1.8% better on Middlebury images; the algorithms show no difference in terms of NMAE. DfD-Net generates depth maps roughly 500 times faster than graph cuts for Maya images and roughly 200 times faster for Middlebury images.
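For reference, the three evaluation metrics named in the abstract could be computed along these lines; normalizing by the ground-truth depth range is an assumption (the thesis's exact normalization is not stated), and scikit-image supplies the SSIM.

```python
import numpy as np
from skimage.metrics import structural_similarity

def depth_metrics(pred, gt):
    """NMAE, NRMSE, and SSIM between predicted and ground-truth depth maps."""
    pred = pred.astype(float)
    gt = gt.astype(float)
    span = gt.max() - gt.min() + 1e-12          # assumed normalization factor
    nmae = np.mean(np.abs(pred - gt)) / span    # normalized mean absolute error
    nrmse = np.sqrt(np.mean((pred - gt) ** 2)) / span
    ssim = structural_similarity(pred, gt, data_range=span)
    return nmae, nrmse, ssim
```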
44

Off-resonance correction for magnetic resonance imaging with spiral trajectories

Nylund, Andreas January 2014 (has links)
Cardiac magnetic resonance imaging requires patients to hold their breath for up to twenty seconds, an uncomfortable situation for many patients. An acquisition scheme using spiral trajectories is proposed as preferable due to its much shorter total scan time; however, spiral trajectories suffer from a blurring effect caused by off-resonance frequencies in the image area. Several methods exist for reconstructing images with reduced blur, and Conjugate Phase Reconstruction was chosen for implementation as a MATLAB script and evaluated with regard to image reconstruction quality and computation time. This method uses a field map to find a conjugate to the off-resonance and demodulate the image; an algorithm for frequency-segmented Conjugate Phase Reconstruction is implemented, along with an improvement called Multi-frequency Interpolation. The implementation is tested by simulating spiral magnetic resonance imaging with a Shepp-Logan phantom. Different off-resonance frequencies and field maps are used to provide a broad view of the functionality of the code, and the two algorithms are compared in terms of computation speed and image quality. It is concluded that this implementation may reconstruct images well, but further testing on actual scan sequences is required to determine its usefulness. The Multi-frequency Interpolation algorithm yields images that are not useful in a clinical context. Further study of other methods not requiring a field map is suggested for comparison.
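A simplified sketch of frequency-segmented conjugate phase reconstruction, assuming Cartesian, DC-centered k-space for brevity; the spiral case treated in the thesis additionally requires gridding or a NUFFT, and the demodulation sign convention varies between implementations.

```python
import numpy as np

def freq_segmented_cpr(kspace, t, field_map, n_segments=8):
    """Frequency-segmented conjugate phase reconstruction (simplified sketch).

    kspace: acquired data (H, W); t: acquisition time in seconds of every
    k-space sample (same shape); field_map: off-resonance map in Hz (H, W).
    """
    freqs = np.linspace(field_map.min(), field_map.max(), n_segments)
    # Reconstruct one image per demodulation frequency.
    segment_images = [
        np.fft.ifft2(np.fft.ifftshift(kspace * np.exp(2j * np.pi * f * t)))
        for f in freqs
    ]
    # For each pixel, keep the segment demodulated closest to its local
    # off-resonance (nearest-neighbour selection; Multi-frequency
    # Interpolation would instead blend neighbouring segments).
    idx = np.abs(field_map[None, :, :] - freqs[:, None, None]).argmin(axis=0)
    stack = np.abs(np.stack(segment_images))
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```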
45

Robustness of State-of-the-Art Visual Odometry and SLAM Systems / Robusthet hos moderna Visual Odometry och SLAM system

Mannila, Cassandra January 2023 (has links)
Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are hot topics in computer vision today. These technologies have various applications, including robotics, autonomous driving, and virtual reality. They may also be valuable in studying human behavior and navigation through head-mounted visual systems. A complication for SLAM and VIO systems is visual degradation such as motion blur. This thesis evaluates the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, namely Delayed Marginalization Visual-Inertial Odometry (DM-VIO) and ORB-SLAM3. Since no real-world benchmark datasets with varying amounts of motion blur exist today, a semi-synthetic dataset was created by applying a dynamic trajectory-based motion blurring technique to an existing dataset, TUM VI. The systems were evaluated in two sensor configurations, Monocular and Monocular-Inertial, using the Root Mean Square (RMS) of the Absolute Trajectory Error (ATE). Based on the findings, DM-VIO is highly influenced by the visual input, and its performance decreases substantially as motion blur increases, regardless of the sensor configuration. In the Monocular setup, performance declines significantly, dropping from centimeter to decimeter precision; the Monocular-Inertial configuration improves this slightly. ORB-SLAM3 is largely unaffected by motion blur, maintaining centimeter precision, and there is no significant difference between its sensor configurations, although a stochastic behavior in ORB-SLAM3 can cause some sequences to deviate from this. In total, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this thesis. The code used in this thesis is available on GitHub at https://github.com/cmannila, along with forked repositories of DM-VIO and ORB-SLAM3.
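The ATE RMS metric used here is standard; below is a sketch assuming time-associated (N, 3) position arrays and rigid Kabsch/Umeyama alignment. Monocular evaluations often add a scale (Sim(3)) alignment, omitted here for brevity.

```python
import numpy as np

def ate_rmse(est, gt):
    """RMS Absolute Trajectory Error after rigid (Umeyama) alignment.

    est, gt: (N, 3) arrays of time-associated positions; timestamp
    association is assumed to have been done beforehand.
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g                 # centered trajectories
    U, _, Vt = np.linalg.svd(E.T @ G)            # cross-covariance SVD
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = (U @ S @ Vt).T                           # rotation est -> gt frame
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```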
46

A Simple Second Derivative Based Blur Estimation Technique

Ghosh Roy, Gourab 22 August 2013 (has links)
No description available.
47

Immersed in Display: Blurring Boundaries in Architecture

Carneiro Brandao Pereira, Melina 14 October 2013 (has links)
No description available.
48

Reducing Image Artifacts in Motion Blur Prevention

Zixun Yu (15354811) 27 April 2023 (has links)
Motion blur is a form of image quality degradation in which content in the image smears and does not look sharp. It is usually seen in photography due to relative motion between the camera and the scene (either the camera moves or objects in the scene move). It is also seen in human vision, primarily on digital displays.

It is often desired to remove motion blurriness from images. Numerous works have aimed at reducing motion blur *after* the image has been formed, e.g., for camera-captured images. Unlike such post-processing methods, we take the approach of preventing or minimizing motion blur for both human and camera observation by pre-processing the source image, so that the pre-processed images look sharp upon blurring. Note that only pre-processing methods can deal with human-observed blurriness, since the imagery cannot be modified after it is formed on the retina.

Pre-processing methods face more fundamental challenges than post-processing ones. A problem inherent to such methods is the appearance of ringing artifacts: intensity oscillations that reduce the quality of the observed image. We found that these ringing artifacts have a fundamental cause rooted in the blur kernel. The blur kernel usually has very low amplitudes at some frequencies, significantly attenuating the signal at those frequencies when it is convolved with an image. Pre-processing methods can usually reconstruct the targeted image for the observer but inevitably lose energy at those frequencies, which appears as artifacts. To address the artifact issue, we present two approaches: (a) aligning the image content and the kernel in the frequency domain, and (b) redistributing the lost intensity variations elsewhere in the image. We demonstrate the effectiveness of our method in a working prototype, in simulation, and with a user study.
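The frequency-domain issue the abstract describes can be made concrete with a generic Wiener-style pre-compensation sketch; this is the baseline idea such work builds on, not the thesis's specific artifact-reduction method.

```python
import numpy as np

def precompensate(image, kernel, eps=1e-2):
    """Wiener-style pre-compensation of an image before motion blur.

    Frequencies where the kernel amplitude |K| is near zero are barely
    recoverable: dividing by |K|^2 + eps limits amplification there, and
    the residual energy loss at those frequencies is what appears as
    ringing artifacts after blurring.
    """
    H, W = image.shape
    K = np.fft.fft2(kernel, s=(H, W))   # kernel spectrum, zero-padded
    X = np.fft.fft2(image)
    Xpre = X * np.conj(K) / (np.abs(K) ** 2 + eps)
    pre = np.real(np.fft.ifft2(Xpre))
    # Displayed images must live in [0, 1]; clipping discards part of the
    # compensation, which is one reason pure inversion is insufficient.
    return np.clip(pre, 0.0, 1.0)
```

Blurring the pre-compensated image with the same kernel approximately restores the original at the frequencies the kernel preserves; the abstract's two approaches target exactly the frequencies this inversion cannot recover.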
49

Depth From Defocused Motion

Myles, Zarina 01 January 2004 (has links)
Motion in depth and/or zooming causes defocus blur. This work presents a solution to the problem of using defocus blur and optical flow information to compute depth at points that defocus when they move. We first formulate a novel algorithm that recovers defocus blur and affine parameters simultaneously. Next, we formulate a novel relationship (the blur-depth relationship) between defocus blur, relative object depth, and three parameters based on camera motion and intrinsic camera parameters. We can handle the situation where a single image has points that have defocused, become sharper, or are focally unperturbed. Moreover, our formulation is valid regardless of whether the defocus is due to the image plane being in front of or behind the point of sharp focus. The blur-depth relationship requires a sequence of at least three images taken with the camera moving either towards or away from the object. It can be used to obtain an initial estimate of relative depth using one of several non-linear methods. We demonstrate a solution based on the Extended Kalman Filter in which the measurement equation is the blur-depth relationship. The estimate of relative depth is then used to compute an initial estimate of the camera motion parameters. To refine the depth values, the relative depth and camera motion are then input into a second Extended Kalman Filter in which the measurement equations are the discrete motion equations. This set of cascaded Kalman filters can be employed iteratively over a longer sequence of images to further refine depth. We conduct several experiments on real scenes to demonstrate the range of object shapes the algorithm can handle, and show that fairly good estimates of depth can be obtained with just three images.
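A generic sketch of a single EKF measurement update, the machinery each stage of such a cascade relies on; the blur-depth measurement equation `h` and its Jacobian `H_jac` are placeholders here, since the abstract does not give their closed form.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One Extended Kalman Filter measurement update.

    x, P: prior state estimate and covariance; z: measurement;
    h(x), H_jac(x): measurement function and its Jacobian (placeholders
    for the thesis's blur-depth relationship); R: measurement noise.
    """
    H = H_jac(x)                       # linearize measurement about x
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```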
50

An Investigation Of The Relationship Between Visual Effects And Object Identification Using Eye-tracking

Rosch, Jonathan 01 January 2012 (has links)
Information displays used in training environments prescribe visual attributes such as brightness, color, contrast, and motion blur, but the cognitive processes corresponding to these visual features require further attention in order to optimize displays for training applications. This dissertation describes an empirical study in which display features, specifically color and motion blur reduction, were investigated to assess their impact in a training scenario involving visual search and threat detection. Presented in this document is a review of the theory and literature describing display technology, its applications to training, and how eye-tracking systems can be used to objectively measure cognitive activity. The experiment required participants to complete a threat identification task, with the display settings altered beforehand, to assess the utility of the display capabilities. The data obtained led to the conclusion that motion blur had a stronger impact on perceptual load than the addition of color. The increased perceptual load resulted in approximately 8-10% longer fixation durations for all display conditions and a similar decrease in the number of saccades, but only when motion blur reduction was used. No differences were found in threat location or threat identification accuracy, so it was concluded that the effects of perceptual load were independent of germane cognitive load.
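Fixation durations and saccade counts of the kind analyzed here are commonly extracted with a dispersion-based (I-DT) detector; a sketch with illustrative thresholds, not the dissertation's settings.

```python
import numpy as np

def idt_fixations(gaze_xy, timestamps, dispersion_thresh=1.0, min_dur=0.1):
    """Dispersion-based (I-DT) fixation detection on gaze samples.

    gaze_xy: (N, 2) gaze positions in degrees of visual angle;
    timestamps: (N,) sample times in seconds. Returns fixation durations
    and a crude saccade count (transitions between fixations).
    """
    fixations, i, n = [], 0, len(gaze_xy)
    while i < n:
        j = i
        # Grow the window until it spans at least the minimum duration...
        while j < n and timestamps[j] - timestamps[i] < min_dur:
            j += 1
        if j >= n:
            break
        window = gaze_xy[i:j + 1]
        disp = np.ptp(window[:, 0]) + np.ptp(window[:, 1])
        if disp <= dispersion_thresh:
            # ...then extend it while dispersion stays under threshold.
            while j + 1 < n:
                w = gaze_xy[i:j + 2]
                if np.ptp(w[:, 0]) + np.ptp(w[:, 1]) > dispersion_thresh:
                    break
                j += 1
            fixations.append(timestamps[j] - timestamps[i])
            i = j + 1
        else:
            i += 1
    saccade_count = max(len(fixations) - 1, 0)
    return fixations, saccade_count
```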
