  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Real-time geometric motion blur for a deforming polygonal mesh

Jones, Nathaniel Earl 30 September 2004 (has links)
Motion blur is one important method for increasing the visual quality of real-time applications. This is increasingly true in the area of interactive applications, where designers often seek to add graphical flair or realism to their programs. These applications often have animated characters with a polygonal mesh wrapped around an animated skeleton; and as the skeleton moves the mesh deforms with it. This thesis presents a method for adding a geometric motion blur to a deforming polygonal mesh. The scheme presented tracks an object's motion silhouette, and uses this to create a polygonal mesh. When this mesh is added to the scene, it gives the appearance of a motion blur on a single object or particular character. The method is generic enough to work on nearly any type of moving polygonal model. Examples are given that show how the method could be expanded and how changes could be made to improve its performance.
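The silhouette-sweep at the heart of this method can be illustrated with a minimal sketch: each silhouette edge is swept along the object's motion between two frames, emitting one quad of "blur geometry" per edge. This is a toy illustration under stated assumptions, not the thesis's implementation; the function names are invented here, and silhouette extraction and shading of the blur mesh are omitted.

```python
import numpy as np

def blur_quads(edges, verts_t0, verts_t1):
    """For each silhouette edge (i, j), sweep it along the motion from
    t0 to t1, emitting one quad whose corners are the edge's endpoints
    at the previous frame and at the current frame."""
    quads = []
    for i, j in edges:
        quads.append(np.array([verts_t0[i], verts_t0[j],
                               verts_t1[j], verts_t1[i]]))
    return quads

# A single silhouette edge moving one unit along +x produces one quad.
v0 = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
v1 = v0 + np.array([1.0, 0.0, 0.0])
quads = blur_quads([(0, 1)], v0, v1)
```

Rendered with fading transparency toward the older positions, such quads give the streaked appearance described in the abstract.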
3

On the Shifter Hypothesis for the Elimination of Motion Blur

Fahle, Manfred 01 August 1990 (has links)
Moving objects may stimulate many retinal photoreceptors within the integration time of the receptors without motion blur being experienced. Anderson and Van Essen (1987) suggested that the neuronal representation of retinal images is shifted on its way to the cortex, in a direction opposite to the motion, so that the cortical representation of objects would be stationary. I have measured thresholds for two vernier stimuli moving simultaneously in opposite directions over identical positions. Motion blur for these stimuli is no stronger than with a single moving stimulus, and thresholds can be below a photoreceptor diameter. This result cannot easily be reconciled with the hypothesis of "shifter circuits".
4

Camera Motion Blur And Its Effect On Feature Detectors

Uzer, Ferit 01 September 2010 (has links) (PDF)
Perception, and hence the use of visual sensors, is indispensable in mobile and autonomous robotics. Visual sensors such as cameras rigidly mounted on a robot frame are the most common arrangement. In this case, the motion of the camera due to the motion of the platform, as well as the resulting shocks and vibrations, causes a number of distortions in video frame sequences. The two most important are frame-to-frame changes of the line-of-sight (LOS) and the presence of motion blur in individual frames. The latter, motion blur, plays a particularly dominant role in determining the performance of many vision algorithms used in mobile robotics. It is caused by the relative motion between the vision sensor and the scene during the exposure time of the frame. Motion blur is clearly an undesirable phenomenon in computer vision, not only because it degrades the quality of images but also because it causes feature extraction procedures to degrade or fail. Although there are many studies on feature-based tracking, navigation, and object recognition algorithms in the computer vision and robotics literature, there is no comprehensive work on the effects of motion blur on different image features and their extraction. In this thesis, a survey of existing models of motion blur and approaches to motion deblurring is presented. We review recent literature on motion blur and deblurring, and we focus our attention on motion-blur-induced degradation of a number of popular feature detectors. We investigate and characterize this degradation using video sequences captured by the vision system of a mobile legged robot platform. The Harris corner detector, the Canny edge detector, and the Scale Invariant Feature Transform (SIFT) are chosen as popular feature detectors commonly used in mobile robotics applications.
The performance degradation of these feature detectors due to motion blur is categorized to analyze the effect of legged locomotion on feature performance for perception. These analyses are a first step towards the stabilization and restoration of video sequences captured by our experimental legged robotic platform, and towards the development of a motion-blur-robust vision system.
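The blur model the abstract refers to, relative motion during the exposure time, is commonly approximated by convolving a sharp frame with a linear point-spread function. Below is a minimal NumPy sketch of that standard model, not the thesis's own code; the function names are illustrative, and the correlation loop is equivalent to convolution here because the kernel is symmetric.

```python
import numpy as np

def linear_blur_kernel(length, angle_deg):
    """Normalized linear motion-blur PSF: ones along a line through the
    center of a square kernel at the given angle, scaled to sum to 1."""
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    t = np.deg2rad(angle_deg)
    for s in np.linspace(-c, c, 2 * length):
        r = int(round(c + s * np.sin(t)))
        q = int(round(c + s * np.cos(t)))
        k[r, q] = 1.0
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 'same'-size 2D sliding-window filter with zero padding
    (no SciPy); equals true convolution for this symmetric kernel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c2 in range(img.shape[1]):
            out[r, c2] = np.sum(padded[r:r + kh, c2:c2 + kw] * kernel)
    return out

# A single bright point smeared by a 5-pixel horizontal streak.
k = linear_blur_kernel(5, 0.0)
img = np.zeros((7, 7))
img[3, 3] = 1.0
blurred = convolve2d(img, k)
```

Feature detectors can then be run on `img` and `blurred` to observe the kind of degradation the thesis characterizes.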
5

Monitoring 3D vibrations in structures using high resolution blurred imagery

McCarthy, David M. J. January 2016 (has links)
This thesis describes the development of a measurement system for monitoring dynamic tests of civil engineering structures using long exposure motion blurred images, named LEMBI monitoring. Photogrammetry has in the past been used to monitor the static properties of laboratory samples and full-scale structures using multiple image sensors. Detecting vibrations during dynamic structural tests conventionally depends on high-speed cameras, often resulting in lower image resolutions and reduced accuracy. To overcome this limitation, the novel and radically different approach presented in this thesis has been established to take measurements from blurred images in long-exposure photos. The motion of the structure is captured in an individual motion-blurred image, alleviating the dependence on imaging speed. A bespoke algorithm is devised to determine the motion amplitude and direction of each measurement point. Utilising photogrammetric techniques, a model structure's motion with respect to different excitations is captured and its vibration envelope recreated in 3D, using the methodology developed in this thesis. The approach is tested and used to identify changes in the model's vibration response, which in turn can be related to the presence of damage or any other structural modification. The approach is also demonstrated by recording the vibration envelope of larger case studies in 2D, which includes a full-scale bridge structure, confirming the relevance of the proposed measurement approach to real civil engineering case studies. This thesis then assesses the accuracy of the measurement approach in controlled motion tests. Considerations in the design of a survey using the LEMBI approach are discussed and limitations are described. The implications of the newly developed monitoring approach to structural testing are reviewed.
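The thesis's bespoke amplitude algorithm is not reproduced here, but the core idea, reading vibration amplitude off the extent of a blur streak in a long-exposure image, can be illustrated in one dimension. This is a toy sketch with assumed names and a synthetic intensity profile; the real method works on 2D imagery and recovers direction as well as amplitude.

```python
import numpy as np

def streak_extent(profile, background, peak):
    """Estimate motion amplitude (in pixels) as the span of samples
    brighter than the midpoint between the background level and the
    blur streak's peak intensity."""
    thresh = 0.5 * (background + peak)
    idx = np.flatnonzero(profile > thresh)
    if idx.size == 0:
        return 0
    return int(idx[-1] - idx[0] + 1)

# Synthetic long-exposure profile: a bright target oscillating over
# 12 pixels smears its brightness into a plateau on a dark background.
profile = np.full(40, 10.0)
profile[14:26] = 200.0
amp = streak_extent(profile, background=10.0, peak=200.0)
```

With a photogrammetric scale factor, such a pixel extent converts to a physical vibration amplitude at each measurement point.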
6

Robustness of State-of-the-Art Visual Odometry and SLAM Systems / Robusthet hos moderna Visual Odometry och SLAM system

Mannila, Cassandra January 2023 (has links)
Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are hot topics in Computer Vision today. These technologies have various applications, including robotics, autonomous driving, and virtual reality. They may also be valuable in studying human behavior and navigation through head-mounted visual systems. A complication for SLAM and VIO systems could be visual degradation such as motion blur. This thesis evaluates the robustness to motion blur of two open-source state-of-the-art VIO and SLAM systems, namely Delayed Marginalization Visual-Inertial Odometry (DM-VIO) and ORB-SLAM3. There are no real-world benchmark datasets with varying amounts of motion blur today. Instead, a semi-synthetic dataset was created by applying a dynamic trajectory-based motion blurring technique to an existing dataset, TUM VI. The systems were evaluated in two sensor configurations, Monocular and Monocular-Inertial, using the Root Mean Square (RMS) of the Absolute Trajectory Error (ATE). Based on the findings, DM-VIO is highly influenced by the visual input, and performance decreases substantially as motion blur increases, regardless of the sensor configuration. In the Monocular setup, performance declines significantly, going from centimeter to decimeter precision. Performance is slightly improved with the Monocular-Inertial configuration. ORB-SLAM3 is unaffected by motion blur, performing at centimeter precision, and there is no significant difference between the sensor configurations. Nevertheless, a stochastic behavior can be noted in ORB-SLAM3 that can cause some sequences to deviate from this. In total, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this thesis.
The code used in this thesis is available on GitHub at https://github.com/cmannila, along with forked repositories of DM-VIO and ORB-SLAM3. / Visual(-Inertial) Odometry (VIO) and Simultaneous Localization and Mapping (SLAM) are of great interest in Computer Vision. These systems have a variety of applications, such as robotics, self-driving cars, and Virtual Reality (VR). A further potential application is to integrate SLAM/VIO into head-mounted systems, such as glasses, in order to study the wearer's behavior and navigation. A complication for SLAM and VIO could be visual degradation such as motion blur. This thesis attempts to evaluate the robustness to motion blur of two available state-of-the-art systems, DM-VIO (Delayed Marginalization Visual-Inertial Odometry) and ORB-SLAM3. Today there are no available datasets that specifically contain varying amounts of motion blur. Therefore, a semi-synthetic dataset was created based on an existing one, TUM VI. This was done with dynamic rendering of motion blur along a known trajectory obtained from the dataset, allowing different exposure times to be simulated. DM-VIO and ORB-SLAM3 were evaluated in two sensor configurations, Monocular (one camera) and Monocular-Inertial (one camera with an Inertial Measurement Unit). The objective measure used to compare the systems was the Root Mean Square of the Absolute Trajectory Error in meters. The results of this work show that DM-VIO is highly dependent on the visual signal, and performance decreases considerably as motion blur increases, regardless of sensor configuration. When only one camera (Monocular) is used, performance drops from centimeter to decimeter precision. ORB-SLAM3 is unaffected by motion blur and performs at centimeter precision for all sequences, and no significant difference between the sensor configurations can be shown.
Nevertheless, a stochastic behavior can be noted in ORB-SLAM3, which may have caused some sequences to deviate. Overall, ORB-SLAM3 outperforms DM-VIO on all sequences in the semi-synthetic dataset created for this work.
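The evaluation metric used above, the RMS of the Absolute Trajectory Error, has a compact definition that can be sketched as follows. This assumes the two trajectories are already time-associated and aligned; real evaluations perform that alignment first (e.g. via a least-squares similarity transform), which is omitted here.

```python
import numpy as np

def rms_ate(estimated, ground_truth):
    """Root Mean Square of the Absolute Trajectory Error: per-pose
    Euclidean distance between estimated and ground-truth positions,
    aggregated by RMS over the whole trajectory."""
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Three aligned 2D poses with small lateral errors.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
est = gt + np.array([[0.0, 0.03], [0.0, 0.04], [0.0, 0.0]])
print(rms_ate(est, gt))  # ~0.0289, i.e. centimeter-level precision
```

A result in the 0.01-0.09 m range is what the abstract calls "centimeter precision"; "decimeter" means errors an order of magnitude larger.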
7

An Investigation Of The Relationship Between Visual Effects And Object Identification Using Eye-tracking

Rosch, Jonathan 01 January 2012 (has links)
The visual content represented on information displays used in training environments prescribes display attributes such as brightness, color, contrast, and motion blur, but the cognitive processes corresponding to these visual features require further attention in order to optimize displays for training applications. This dissertation describes an empirical study in which information display features, specifically color and motion blur reduction, were investigated to assess their impact in a training scenario involving visual search and threat detection. Presented in this document is a review of the theory and literature describing display technology, its applications to training, and how eye-tracking systems can be used to objectively measure cognitive activity. The experiment required participants to complete a threat identification task, with the display's settings altered beforehand, to assess the utility of the display capabilities. The data obtained led to the conclusion that motion blur had a stronger impact on perceptual load than the addition of color. The increased perceptual load resulted in approximately 8-10% longer fixation durations for all display conditions and a similar decrease in the number of saccades, but only when motion blur reduction was used. No differences were found in terms of threat location or threat identification accuracy, so it was concluded that the effects of perceptual load were independent of germane cognitive load.
8

Undersökning av tekniker för rörelseoskärpa : En prestandajämförelse av olika tekniker för rörelseoskärpa i scener med statisk miljö / Study of motion blur techniques : A performance comparison of different motion blur techniques in scenes with a static environment

Åsén, Erik January 2017 (has links)
The phenomenon of blur exists both in reality and in games, in different forms and for different purposes. Various techniques have been developed to create blur of varying effect and for varying purposes in the game world. One such purpose arises when an object is in motion, which is commonly called motion blur. There are many techniques for creating motion blur, and they can be divided into two main groups depending on which data the blur modifies: geometry-based and pixel-based motion blur. For this work, one technique from each group was selected. The aim was to compare the techniques' performance impact in two different static environments, a forest and a city. A study was conducted in which a camera performed an automated movement from point A to point B. First this was done without any motion blur in both environments, establishing the base case. Then each motion blur technique was enabled in turn, and the camera made the same movement 30 times per technique and per environment. The goal was to evaluate which motion blur technique came closest to the base case's performance and which technique was the least performance-demanding. The technique that came closest to the base cases was the pixel technique, in both environments, although the geometry technique was somewhat competitive in the forest environment. The pixel technique was dominant in most measurements and completely dominated one of the selected environments, the city.
9

Spatially Non-Uniform Blur Analysis Based on Wavelet Transform

Zhang, Yi January 2010 (has links)
No description available.
10

Automatic object detection and tracking for eye-tracking analysis

Cederin, Liv, Bremberg, Ulrika January 2023 (has links)
In recent years, eye-tracking technology has gained considerable attention, facilitating analysis of gaze behavior and human visual attention. However, eye-tracking analysis often requires manual annotation of the objects being gazed upon, making quantitative data analysis a difficult and time-consuming process. This thesis explores object detection and object tracking applied to scene camera footage from mobile eye-tracking glasses. We have evaluated the performance of state-of-the-art object detectors and trackers, resulting in an automated pipeline specialized in detecting and tracking objects in scene videos. Motion blur constitutes a significant challenge for moving cameras, complicating tasks such as object detection and tracking. To address this, we explored two approaches. The first involved retraining object detection models on datasets augmented with motion-blurred images, while the second involved preprocessing the video frames with deblurring techniques. The findings of our research contribute insights into efficient approaches for detecting and tracking objects in scene camera footage from eye-tracking glasses. Of the technologies we tested, we found that motion deblurring using DeblurGAN-v2, along with a DINO object detector combined with the StrongSORT tracker, achieved the highest accuracy. Furthermore, we present an annotated dataset, consisting of frames from recordings with eye-tracking glasses, that can be utilized for evaluating object detection and tracking performance.
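Trackers like StrongSORT combine appearance embeddings and Kalman motion prediction, but the core association step that links detections to existing tracks can be illustrated with a greedy IoU-only matcher. This is a simplified sketch with invented names, not StrongSORT's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Greedily match each existing track to at most one new detection,
    taking candidate pairs in order of decreasing overlap."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou:
            break
        if ti not in matched_t and di not in matched_d:
            matches.append((ti, di))
            matched_t.add(ti)
            matched_d.add(di)
    return matches

# Two tracks, two slightly shifted detections: each track keeps its box.
tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 21, 31, 31), (1, 1, 11, 11)]
matches = associate(tracks, dets)
```

Motion blur degrades this step indirectly: a weaker detector produces missing or shifted boxes, so overlaps fall below the matching threshold, which is one reason deblurring helped the pipeline above.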
