101

空間注意力經由深度影響模稜運動知覺 / The effect of spatial attention on multistable motion perception via the depth mechanism

孫華君, Sun, Hua Chun Unknown Date (has links)
Many studies have found that fixating or directing spatial attention to different regions can bias the perception of the Necker cube, but whether this effect of spatial attention arises because attended areas are perceived as closer has yet to be examined. This issue was directly investigated in this study. The stimulus used was the diamond stimulus, which contains four occluders and four moving lines that can be perceived as coherent or separate motions. The results of Experiment 1 show that coherent motion was perceived more often under the attending-to-occluders condition than under the attending-to-moving-lines condition, indicating that spatial attention can bias multistable perception. The results of Experiment 2 show that the mean probability of reporting lines behind occluders at small binocular disparities was significantly higher under the attending-to-occluders condition than under the attending-to-lines condition, indicating that spatial attention can make attended areas look slightly closer. The results of Experiments 3 and 4 show that the effect of spatial attention on biasing multistable perception was weakened when binocular or monocular depth cues defined the depth relationship between the occluders and the lines. These results are all consistent with the notion that spatial attention can bias multistable perception by affecting depth perception, making attended areas look closer.
102

Edge-aided virtual view rendering for multiview video plus depth

Muddala, Suryanarayana Murthy, Sjöström, Mårten, Olsson, Roger, Tourancheau, Sylvain January 2013 (has links)
Depth-Image-Based Rendering (DIBR) of virtual views is a fundamental method in three-dimensional (3-D) video applications to produce different perspectives from texture and depth information, in particular the multiview-plus-depth (MVD) format. Artifacts are still present in virtual views as a consequence of imperfect rendering using existing DIBR methods. In this paper, we propose an alternative DIBR method for MVD. In the proposed method we introduce an edge pixel and interpolate pixel values in the virtual view using the actual projected coordinates from two adjacent views, by which cracks and disocclusions are automatically filled. In particular, we propose a method to merge pixel information from two adjacent views in the virtual view before the interpolation; we apply a weighted averaging of projected pixels within the range of one pixel in the virtual view. We compared virtual view images rendered by the proposed method to the corresponding view images rendered by state-of-the-art methods. Objective metrics demonstrated an advantage of the proposed method for most investigated media contents. Subjective test results showed preference for different methods depending on media content, and the test could not demonstrate a significant difference between the proposed method and state-of-the-art methods.
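The weighted-averaging merge described in the abstract can be sketched for a single scanline as follows. This is an illustrative reconstruction, not the authors' code: each sample projected from an adjacent view is splatted onto its two neighbouring integer pixels with weights that fall off within one pixel, and the accumulated values are then normalised, so that cracks only remain where no view contributed.

```python
import numpy as np

def merge_projected_pixels(coords, values, width):
    """Merge samples projected from adjacent views into one virtual-view
    scanline by distance-weighted averaging (hypothetical helper).
    coords: fractional x-coordinates of projected pixels in the virtual view
    values: their intensities."""
    acc = np.zeros(width)
    wsum = np.zeros(width)
    for x, v in zip(coords, values):
        x0 = int(np.floor(x))
        frac = x - x0
        # splat onto the two neighbouring pixels, weight falling off within one pixel
        for xi, w in ((x0, 1.0 - frac), (x0 + 1, frac)):
            if 0 <= xi < width and w > 0:
                acc[xi] += w * v
                wsum[xi] += w
    filled = wsum > 0
    acc[filled] /= wsum[filled]          # normalise the weighted average
    return acc, filled                   # unfilled pixels are remaining cracks
```

Because samples from both adjacent views accumulate into the same buffers before normalisation, the interpolation automatically blends the two views wherever they overlap.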
103

Low-Cost Design of a 3D Stereo Synthesizer Using Depth-Image-Based Rendering

Cheng, Ching-Wen 01 September 2011 (has links)
In this thesis, we propose low-cost stereoscopic image generation hardware using the Depth-Image-Based Rendering (DIBR) method. Because of the unfavorable artifacts produced by the DIBR algorithm, researchers have developed various algorithms to handle the problem. The most common one is to smooth the depth map before rendering. However, pre-processing of the depth map usually generates other artifacts and can even degrade the perception of 3D images. To avoid these defects, we present a method that modifies the disparity of edges so that the edges of foreground objects in the synthesized virtual images look more natural. In contrast to the high computational complexity and power consumption of previous designs, we propose a method that fills the holes with the mirrored background pixel values next to the holes. Furthermore, unlike previous DIBR methods, which usually consist of two phases, image warping and hole filling, in this thesis we present a new DIBR algorithm that combines the operations of image warping and hole filling in one phase so that the total computation time and power consumption are greatly reduced. Experimental results show that the proposed design can generate more natural virtual images for different view angles with shorter computation latency.
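The mirrored-background hole filling idea can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the thesis hardware design: it fills each disocclusion hole in a warped scanline by reflecting the background pixels adjacent to the hole, assuming here that the background lies on the right side of the hole.

```python
def mirror_fill(row, hole):
    """Fill holes in a warped scanline by mirroring adjacent background
    pixels (illustrative sketch; assumes holes open toward the right)."""
    row = list(row)
    n = len(row)
    i = 0
    while i < n:
        if hole[i]:
            start = i
            while i < n and hole[i]:
                i += 1                      # find the right edge of the hole
            for k in range(start, i):
                src = 2 * i - 1 - k         # reflect about the hole's right edge
                row[k] = row[src] if src < n else row[n - 1]
        else:
            i += 1
    return row
```

Mirroring repeats background texture into the hole instead of the simpler constant-extrapolation fill, which tends to look more natural near object edges.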
104

Användning av markfuktighetskartor för ståndortsanpassad plantering / Use of Depth-to-Water maps for site adapted planting

Jakobsson, Malin January 2015 (has links)
Digital depth-to-water maps can be produced from a digital elevation model (DEM): GIS-based algorithms are used to calculate water flows and the depth-to-water index classes dry, fresh, moist and wet. The purpose of this study was to investigate the possibility of using depth-to-water maps for site-adapted planting. The results showed that using depth-to-water maps for site-adapted planting roughly halved the proportion of improperly planted surfaces, from an average of 9 % to 4 %. The variation in the proportion of properly planted surface decreased and the result became more even. In addition, more pine than spruce was incorrectly planted. Without soil moisture maps, the proportions of improperly planted pine and spruce were 66 % and 34 % respectively; with soil moisture maps, they were 55 % and 45 % respectively. This shows that for regenerations planted without depth-to-water maps, mostly pine was incorrectly planted, but for regenerations planted with the maps, the proportions were similar for spruce and pine. The conclusion is that depth-to-water maps can improve site-adapted planting. By using the maps it is possible to get a good overview of the conditions and terrain variations of the planting sites.
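The classification step from a depth-to-water value into the four index classes can be sketched as below. The thresholds used here are illustrative assumptions for the sketch, not the class boundaries used in the study.

```python
def dtw_class(dtw_m):
    """Map a depth-to-water value (metres above the modelled water table)
    to one of the four index classes. Thresholds are assumed for
    illustration only."""
    if dtw_m < 0.5:
        return "wet"
    elif dtw_m < 1.0:
        return "moist"
    elif dtw_m < 2.0:
        return "fresh"
    return "dry"
```

In practice the depth-to-water value itself comes from GIS flow-accumulation algorithms run over the DEM; this sketch covers only the final labelling.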
105

Artifact assessment, generation, and enhancement of video halftones

Rehman, Hamood-Ur, Ph. D. 07 February 2011 (has links)
With the advancement of display technology, consumers expect high quality display of image and video data. Many viewers are used to watching video content on high definition television and large screens. However, certain display technologies, such as several of those used in portable electronic books, are limited on resources such as the availability of number of bits per pixel (i.e. the bit-depth). Display of good or even acceptable perceptual quality video on these devices is a hard technical problem that a display designer must solve. Video halftoning reduces the number of represented colors or gray levels for display on devices that are unable to render the video at full bit-depth. Bit-depth reduction results in visible spatial and temporal artifacts. The designer would want to choose the halftoning algorithm that reduces these artifacts while meeting the target platform constraints. These constraints include available bit-depth, spatial resolution, computational power, and desired frame rate. Perceptual quality assessment techniques are useful in comparing different video halftoning algorithms that satisfy the constraints. This dissertation develops a framework for the evaluation of two key temporal artifacts, flicker and dirty-window-effect, in medium frame rate binary video halftones generated from grayscale continuous-tone videos. The possible causes underlying these temporal artifacts are discussed. The framework is based on perceptual criteria and incorporates properties of the human visual system. The framework allows for independent assessment of each of the temporal artifacts. This dissertation presents design of algorithms that generate medium frame rate binary halftone videos. The design of the presented video halftone generation algorithms benefits from the proposed temporal artifact evaluation framework and is geared towards reducing the visibility of temporal artifacts in the generated medium frame rate binary halftone videos. 
This dissertation compares the relative power consumption associated with several medium frame rate binary halftone videos generated using different video halftone generation algorithms. The presented power performance analysis is generally applicable to bistable display devices. This dissertation also develops algorithms to enhance medium frame rate binary halftone videos by reducing flicker, and analogous algorithms that reduce dirty-window-effect; in both cases the enhancement algorithms attempt to constrain any resulting increase in perceptual degradation of the spatial quality of the halftone frames. Finally, this dissertation proposes the design of medium frame rate binary halftone video enhancement algorithms that attempt to reduce a temporal artifact, flicker or dirty-window-effect, under both spatial and temporal quality constraints. Temporal quality control is incorporated by using the temporal artifact assessment framework developed in this dissertation. Incorporating temporal quality control in the process of reducing flicker or dirty-window-effect helps establish a balance between the two temporal artifacts in the enhanced video. At the same time, the spatial quality control attempts to constrain any increase in perceptual degradation of the spatial quality of the enhanced halftone frames.
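A crude proxy for the flicker artifact discussed above is the fraction of pixels that toggle between consecutive binary frames. The sketch below is a simplification under that assumption; the dissertation's actual framework is perceptual and incorporates human-visual-system properties, which this toggle count does not.

```python
import numpy as np

def flicker_rate(frames):
    """Toggle rate per consecutive frame pair for a binary halftone video:
    the fraction of pixels whose value flips between frames t and t+1.
    A simple stand-in for a perceptual flicker measure."""
    frames = np.asarray(frames, dtype=bool)   # shape (T, H, W)
    toggles = frames[1:] ^ frames[:-1]        # XOR marks flipped pixels
    return toggles.mean(axis=(1, 2))          # one rate per frame pair
```

On a bistable display, each toggle also costs energy, so the same count doubles as a first-order proxy for the power comparison the dissertation performs.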
106

Single View Modeling and View Synthesis

Liao, Miao 01 January 2011 (has links)
This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is always a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes, as well as a lack of portability, limiting its application to lab experiments. In this thesis, I try to produce 3D content using a single camera, making it as simple as shooting pictures. It requires a new front-end capturing device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences and achieves 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled together to form a complete 3D model by a novel warping algorithm. Inspired by the success of single view 3D modeling, I extended my exploration into 2D-3D video conversion that does not utilize a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to shift as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling work.
In this thesis, I developed new algorithms to produce 3D content from a single camera. Depending on the input data, my algorithms can build high-fidelity 3D models of dynamic and deformable objects when depth maps are provided; otherwise, they can turn video clips into stereoscopic video.
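The light fall-off stereo idea mentioned in the abstract can be illustrated with the inverse-square law. Under the simplifying assumptions that the surface is Lambertian with fixed albedo and that a point light source is moved a known distance `delta` along the viewing axis between two exposures, the intensity ratio of the two images determines depth in closed form: I_near / I_far = ((r + delta) / r)^2, so r = delta / (sqrt(I_near / I_far) - 1). The thesis system additionally handles albedo, noise, and real-time constraints, which this sketch ignores.

```python
import numpy as np

def lfs_depth(I_near, I_far, delta):
    """Per-pixel depth from light fall-off stereo: two images lit by a point
    source at two positions separated by `delta` along the viewing axis.
    Simplified sketch assuming pure inverse-square intensity fall-off."""
    ratio = np.sqrt(np.asarray(I_near, float) / np.asarray(I_far, float))
    return delta / (ratio - 1.0)   # r = delta / (sqrt(I_near/I_far) - 1)
```

For example, a point at depth 2 with the light moved by 1 sees intensities proportional to 1/4 and 1/9, and the formula recovers depth 2 exactly.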
107

Effect Of Roughness On Flow Measurements In Sloping Rectangular Channels With Free Overfall

Firat, Can Ersen 01 February 2004 (has links) (PDF)
The characteristics of subcritical, critical and supercritical flows at a rectangular free overfall were studied experimentally to obtain a relation between the brink depth and the flow rate. A series of experiments was conducted in a tilting flume with a wide range of flow rates and two bed roughnesses in order to find the relationship between the brink depth, normal depth, channel bed slope and bed roughness. An equation was proposed to calculate the flow rate when only the brink depth, roughness, and channel bed slope are known. An alternative iterative solution was offered to calculate discharges when the brink depth and uniform flow depth are known.
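The kind of brink-depth-to-discharge relation studied here can be sketched from the classical smooth-channel result, where the end-depth ratio is y_b = EDR * y_c with critical depth y_c = (q^2 / g)^(1/3) and EDR approximately 0.715. Note this uses the textbook EDR value as an assumption; the thesis derives roughness- and slope-dependent relations instead.

```python
import math

def discharge_per_width(brink_depth, edr=0.715, g=9.81):
    """Unit discharge q (m^2/s per metre of width) from the measured brink
    depth y_b of a rectangular free overfall, via the classical end-depth
    ratio y_b = edr * y_c and critical depth y_c = (q^2/g)^(1/3).
    edr = 0.715 is the smooth, mildly sloping channel value (assumed)."""
    yc = brink_depth / edr          # recover critical depth from brink depth
    return math.sqrt(g * yc ** 3)   # invert y_c = (q^2/g)^(1/3)
```

Given the unit discharge, total flow rate is Q = q * b for a channel of width b; the thesis's roughness-aware equation replaces the constant EDR with a function of roughness and bed slope.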
108

Perception of motion-in-depth: induced motion effects on monocular and binocular cues

Gampher, John Eric. January 2008 (has links) (PDF)
Thesis (Ph. D.)--University of Alabama at Birmingham, 2008. / Title from PDF title page (viewed Mar. 30, 2010). Additional advisors: Franklin R. Amthor, James E. Cox, Timothy J. Gawne, Rosalyn E. Weller. Includes bibliographical references (p. 104-114).
109

A Novel Fusion Technique for 2D LIDAR and Stereo Camera Data Using Fuzzy Logic for Improved Depth Perception

Saksena, Harsh 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Obstacle detection, avoidance and path finding for autonomous vehicles require precise information about the vehicle's environment for faultless navigation and decision making. As such, vision and depth-perception sensors have become an integral part of autonomous vehicles in current research and development. The advancements made in vision sensors such as radars, Light Detection And Ranging (LIDAR) sensors and compact high-resolution cameras are encouraging; however, individual sensors can be prone to error and misinformation due to environmental factors such as scene illumination, object reflectivity and object transparency. Sensor fusion, in which multiple sensors perceiving similar or related information are combined over a network, is applied to provide more robust and complete system information and to minimize the overall perceived error of the system. 3D LIDARs and monocular cameras are the most commonly used vision sensors for sensor fusion. 3D LIDARs boast high accuracy and resolution for depth capture in any given environment and have a broad range of applications, such as terrain mapping and 3D reconstruction. Despite the 3D LIDAR being the superior depth sensor, its high cost and sensitivity to its environment make it a poor choice for mid-range applications such as autonomous rovers, RC cars and robots. 2D LIDARs are more affordable, more easily available and have a wider range of applications than 3D LIDARs, making them the more obvious choice for budget projects. The primary objective of this thesis is to implement a smart and robust sensor fusion system using a 2D LIDAR and a stereo depth camera to capture depth and color information of an environment.
The depth points generated by the LIDAR are fused with the depth map generated by the stereo camera by a fuzzy system that implements smart fusion and corrects any gaps in the depth information of the stereo camera. Using a fuzzy system for sensor fusion of a 2D LIDAR and a stereo camera is a novel approach to the sensor fusion problem, and the output of the fuzzy fusion provides higher depth confidence than either sensor provides individually. In this thesis, we explore the multiple layers of sensor and data fusion applied to the vision system, both on the camera and LIDAR data individually and in relation to each other. We describe in detail the development and implementation of the fuzzy-logic-based fusion approach, the fuzzification of the input data, the selection of the fuzzy system for depth-specific fusion for the given vision system, and how fuzzy logic can be used to provide information that is vastly more reliable than the information provided by the camera and LIDAR separately.
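A toy version of the fusion step for a single pixel on the LIDAR scan line might look like the sketch below. The membership weights here are illustrative assumptions, not the rule base designed in the thesis: stereo holes fall back to the LIDAR, and elsewhere the stereo reading is weighted by its confidence and by how well it agrees with the LIDAR.

```python
def fuse_depth(lidar_d, stereo_d, stereo_conf):
    """Fuzzy-style fusion of one LIDAR depth and one stereo depth (metres).
    stereo_d is None where the stereo map has a hole; stereo_conf in [0,1].
    Weights are illustrative assumptions."""
    if stereo_d is None:
        return lidar_d                 # gap in the stereo map: trust the LIDAR
    # agreement membership: 1 when the sensors agree, falling to 0 as they diverge
    agreement = max(0.0, 1.0 - abs(lidar_d - stereo_d) / max(lidar_d, 1e-6))
    w_stereo = stereo_conf * agreement
    w_lidar = 1.0                      # LIDAR treated as the depth reference
    return (w_lidar * lidar_d + w_stereo * stereo_d) / (w_lidar + w_stereo)
```

A real rule base would defuzzify over several membership functions (range, incidence angle, texture) rather than this single agreement term, but the structure, weighting each sensor by fuzzy memberships before combining, is the same.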
110

Uživatelské rozhraní založené na zpracování hloubkové mapy / Depth-Based User Interface

Kubica, Peter January 2013 (has links)
Conventional user interfaces are not always the most appropriate way to control an application. The objective of this work is to study the processing of Kinect sensor data, to analyze the possibilities of controlling applications through depth sensors, and then, using the knowledge obtained, to design a user interface for working with multimedia content that uses the Kinect sensor for interaction with the user.
