91 |
Reducing Attitude Extremity and Perceived Understanding Through Message Exposure: An Integration of Message Sidedness and the Illusion of Explanatory Depth. Schutz, Emily Nicole, 24 October 2019 (has links)
No description available.
|
92 |
Corn Emergence Uniformity as Impacted by Planting Depth. Nemergut, Kyle T., 06 October 2020 (has links)
No description available.
|
93 |
Children's perception of depth in random dot stereograms. Dowd, John Myron, 01 January 1977 (has links) (PDF)
No description available.
|
94 |
Mapping Peat Depth Using Remote Sensing and Machine Learning to Improve Peat Smouldering Vulnerability Prediction. Sherwood, Emma, January 2023 (has links)
Peat is an accumulation of soil formed from partially decomposed organic matter. Peat can burn, especially in the hot, dry weather that is happening more often due to climate change; smouldering releases stored carbon to the atmosphere. Peat with higher organic bulk density and lower moisture content is more vulnerable to fire: it will burn more severely (more deeply) if ignited. Shallower peat is less able to retain moisture during droughts and is therefore likely more vulnerable to fire; however, mapping peat depths at high spatial resolution is expensive or requires extensive fieldwork. This project uses remote sensing in combination with machine learning to estimate peat depth across a peatland and rock barren landscape. A Random Forest model was used to map peat depths across the landscape at a 1 m spatial resolution using LiDAR data and orthophotography. The resulting map was able to predict peat depths (R2 = 0.73, MAE = 28 cm) and showed that the peat depths that are especially vulnerable to high-severity fire are distributed in numerous small patches across the landscape. This project also examined peat bulk density and found that the Von Post scale for peat decomposition can be used as a field method for estimating bulk density (R2 = 0.71). In addition, in this landscape, peat bulk densities at the same depth (within the top 45 cm) are higher in shallower peat, because more decomposed peat was found closer to the surface there and because peat with high mineral content was found close to the bedrock or mineral soil. The findings of this project will be valuable for wildfire managers in determining which areas on the landscape are most vulnerable to fire, allowing them to mobilize resources more rapidly for wildfire suppression. / Thesis / Master of Science (MSc) / Peat is organic soil made from decomposing plant material. Peat can burn, especially in the hot, dry weather that is happening more often due to climate change.
Dense, dry peat is more vulnerable to fire: it will burn more deeply. Because areas with deeper peat are known to retain moisture better, peat depth can be used as a proxy for vulnerability to fire. Since peat depth is expensive and time-consuming to map directly, remotely sensed data such as aerial imagery were used in a model to predict peat depths. The model was able to predict peat depths and showed that the most vulnerable areas are scattered across the landscape in small patches. This project also found that denser peat lies farther from the surface in deeper peat areas, further supporting the use of peat depth as a proxy for vulnerability to smouldering.
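The abstract's workflow, regressing peat depth on raster-derived predictors with a Random Forest, can be sketched as follows. This is a generic illustration on synthetic data: the feature names and the depth relationship are hypothetical placeholders, not the thesis's actual LiDAR and orthophoto features.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-pixel predictors (e.g. LiDAR elevation, slope, an
# orthophoto band); real features would come from the 1 m rasters.
n = 2000
X = rng.random((n, 3))
# Synthetic "peat depth" (cm) loosely tied to the predictors plus noise.
depth_cm = 120 * X[:, 0] - 40 * X[:, 1] + 20 * rng.standard_normal(n) + 60

X_train, X_test, y_train, y_test = train_test_split(
    X, depth_cm, test_size=0.25, random_state=0
)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
mae = mean_absolute_error(y_test, pred)
print(f"MAE: {mae:.1f} cm")
```

Applied per pixel to a predictor stack, the fitted model yields a wall-to-wall depth raster of the kind the abstract evaluates with R2 and MAE.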
|
95 |
Interactive Depth-Aware Effects for Stereo Image Editing. Abbott, Joshua E., 24 June 2013 (links) (PDF)
This thesis introduces methods for adding user-guided depth-aware effects to images captured with a consumer-grade stereo camera with minimal user interaction. In particular, we present methods for highlighted depth-of-field, haze, depth-of-field, and image relighting. Unlike many prior methods for adding such effects, we do not assume prior scene models or require extensive user guidance to create such models, nor do we assume multiple input images. We also do not require specialized camera rigs or other equipment such as light-field camera arrays, active lighting, etc. Instead, we use only an easily portable and affordable consumer-grade stereo camera. The depth is calculated from a stereo image pair using an extended version of PatchMatch Stereo designed to compute not only image disparities but also normals for visible surfaces. We also introduce a pipeline for rendering multiple effects in the order they would occur physically. Each can be added, removed, or adjusted in the pipeline without having to reapply subsequent effects. Individually or in combination, these effects can be used to enhance the sense of depth or structure in images and provide increased artistic control. Our interface also allows editing the stereo pair together in a fashion that preserves stereo consistency, or the effects can be applied to a single image only, thus leveraging the advantages of stereo acquisition even to produce a single photograph.
|
96 |
An Extension to Östasiatiska museet: transforming from the traditional Asian wood structure language. Du, Han, January 2018 (has links)
Traditional Japanese wood construction always carries a strong character of oku (which means depth), and this quality fascinates me. My thesis studies this structural and spatial language and translates the traditional language to formulate a space with that quality of depth (oku). The chosen site is the Östasiatiska museet (Museum of Far Eastern Antiquities) on Skeppsholmen, and my thesis proposes an addition to the original museum. Because the building was a ropewalk manufactory in its early days, long and narrow, it lacks an identity of its own; nowadays it reads more like an auxiliary facility attached to the modern museum and is ignored by most citizens.
|
97 |
An exploratory study of the role of binocular vision in performance of dynamic movement in tennis skills / Herrold, Judith Ann, January 1968 (has links)
No description available.
|
98 |
Learning Unsupervised Depth Estimation, from Stereo to Monocular Images. Pilzer, Andrea, 22 June 2020 (has links)
In order to interact with the real world, humans need to perform several tasks such as object detection, pose estimation, motion estimation and distance estimation. These tasks are all part of scene understanding and are fundamental tasks of computer vision. Depth estimation received unprecedented attention from the research community in recent years due to the growing interest in its practical applications (i.e., robotics, autonomous driving, etc.) and the performance improvements achieved with deep learning. In fact, the applications expanded from the more traditional tasks such as robotics to new fields such as autonomous driving, augmented reality devices and smartphone applications. This is due to several factors. First, with the increased availability of training data, bigger and bigger datasets were collected. Second, deep learning frameworks running on graphics cards exponentially increased the data processing capabilities, allowing higher-precision deep convolutional networks (ConvNets) to be trained. Third, researchers applied unsupervised optimization objectives to ConvNets, overcoming the hurdle of collecting expensive ground truth and fully exploiting the abundance of images available in datasets.
This thesis addresses several proposals and their benefits for unsupervised depth estimation, i.e., (i) learning from resynthesized data, (ii) adversarial learning, (iii) coupling generator and discriminator losses for collaborative training, and (iv) self-improvement ability of the learned model. For the first two points, we developed a binocular stereo unsupervised depth estimation model that uses reconstructed data as an additional self-constraint during training. In addition, adversarial learning improves the quality of the reconstructions, further increasing the performance of the model. The third point is inspired by scene understanding as a structured task. A generator and a discriminator joining their efforts in a structured way improve the quality of the estimations. This coupling may sound counterintuitive when cast in the general framework of adversarial learning; however, our experiments demonstrate the effectiveness of the proposed approach. Finally, self-improvement is inspired by estimation refinement, a widespread practice in dense reconstruction tasks like depth estimation. We devise a monocular unsupervised depth estimation approach, which measures the reconstruction errors in an unsupervised way, to produce a refinement of the depth predictions. Furthermore, we apply knowledge distillation to improve the student ConvNet with the knowledge of the teacher ConvNet that has access to the errors.
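The core self-supervision signal underlying such models, reconstructing one stereo view from the other via a predicted disparity and penalizing the photometric error, can be sketched in a few lines. This is a generic nearest-neighbour illustration on a synthetic toy scene, not the thesis's actual network or warping operator.

```python
import numpy as np

def warp_right_to_left(right: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Reconstruct the left view by sampling the right image at x - d(x):
    each left pixel looks up the right pixel shifted by its predicted
    disparity (nearest-neighbour sampling for simplicity)."""
    h, w = right.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    src = np.clip(xs - np.round(disparity).astype(int), 0, w - 1)
    return np.take_along_axis(right, src, axis=1)

# Toy scene: a horizontal-gradient "right" image, with the "left" image
# formed by warping it with a constant true disparity of 4 px.
h, w, true_d = 8, 32, 4
right = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
left = warp_right_to_left(right, np.full((h, w), true_d))

# Photometric (L1) reconstruction loss: zero for the correct disparity,
# higher for a wrong one -- this gap is what unsupervised training minimises.
good = np.abs(left - warp_right_to_left(right, np.full((h, w), true_d))).mean()
bad = np.abs(left - warp_right_to_left(right, np.full((h, w), 0))).mean()
print(good, bad)
```

In a trained model the disparity map is the ConvNet's output, the warp is differentiable (bilinear sampling), and the same residual image is what the refinement and distillation steps above get to observe.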
|
99 |
Evaluation of GLO: a Solar Occultation Instrument for Measuring Atmospheric Trace Species on CubeSat Missions. Rosich, Garrett Kyle, 09 June 2017 (has links)
CubeSats provide an inexpensive means for space-based research. However, optimal mission design depends on minimizing payload size and power. This thesis investigates the GLO (GFCR (Gas Filter Correlation Radiometry) Limb Occultation) prototype, a new small-form-factor design that enables sub-kilometer resolution of the vertical profile of atmospheric trace species to determine radiative influences. This technology improves SWAP (Size, Weight, And Power) over heritage SOFIE and HALOE instruments and provides a cost-effective alternative for solar occultation limb monitoring.
A Python script was developed to analyze solar intensity through the GLO telescope channels. Non-uniform aerosol images were processed with a peak-intensity algorithm, in contrast to the edge-detection function designed for the GFCR channels. Scaling corrections were made for beam splitter inaccuracy, and SNR was characterized for frame collection. Different cameras were tested to weigh accuracy versus the cost of a camera baffle. Using the Langley plot method, solar intensity was measured as a function of solar zenith angle to extrapolate optical depths. AERONET, a network of ground-based sun photometers measuring atmospheric aerosols, was used for aerosol optical depth validation. Spectral Calculator transmission data allowed for GFCR vacuum channel comparison, gas cell spectral analysis, and gas cell to vacuum channel optical depth examination. Ground testing provided promising results with the low-cost prototype. It will be further evaluated through a balloon flight demonstration using a flight-ready GLO instrument. Additionally, analysis for the DUSTIE mission is planned and simulated using STK and Matlab. This includes CubeSat bus selection, orbit analysis for occultation occurrences, power budgeting, and communication capabilities. / Master of Science / Cube Satellites (CubeSats) provide an inexpensive means for space-based research. However, optimal mission design depends on minimizing payload size and power. This thesis investigates the GLO (GFCR (Gas Filter Correlation Radiometry) Limb Occultation) prototype. This technology will determine the influences on the energy balance between the Earth and atmosphere due to aerosol and gas particle concentrations. This is implemented with improved SWAP (Size, Weight, And Power) compared to previously flown instruments. Scaling corrections were made for beam splitter inaccuracy and the Signal-to-Noise Ratio (SNR) was characterized for frame collection for the demonstration GLO instrument.
The changing solar intensity as the sun moved across the sky was measured to infer aerosol and gas concentrations in the atmosphere. A network of ground-based sun photometers measuring atmospheric aerosols was used to validate aerosol concentration measurements. GLO vacuum channel measurements and gas cell properties were compared to transmission simulations for accuracy. Ground testing provided promising results with the low-cost prototype. It will be further evaluated through a balloon flight demonstration using a flight-ready GLO instrument. Additionally, analysis for the Dust Sounder and Temperature Imager Experiment (DUSTIE) mission is planned and simulated using STK and Matlab. This includes CubeSat bus selection, orbit analysis for occultation occurrences, power budgeting, and communication capabilities.
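The Langley plot method used above rests on the Beer-Lambert law, I = I0 * exp(-tau * m): plotting ln(I) against airmass m (roughly the secant of the solar zenith angle) gives a straight line whose slope is -tau and whose intercept, extrapolated to zero airmass, recovers the top-of-atmosphere intensity I0. A minimal sketch on synthetic data (the optical depth and intensity values here are illustrative, not GLO measurements):

```python
import numpy as np

# Hypothetical "true" atmospheric optical depth and exoatmospheric intensity.
true_tau, true_I0 = 0.25, 1361.0  # tau dimensionless, I0 in W/m^2

# As the sun sets, zenith angle grows and airmass m ~ sec(zenith) increases.
zenith_deg = np.linspace(40.0, 75.0, 30)
m = 1.0 / np.cos(np.radians(zenith_deg))

# Simulated intensity readings with small multiplicative measurement noise.
rng = np.random.default_rng(1)
intensity = true_I0 * np.exp(-true_tau * m) * (1 + 0.002 * rng.standard_normal(m.size))

# Langley fit: ln(I) vs m is linear; slope = -tau, intercept = ln(I0).
slope, intercept = np.polyfit(m, np.log(intensity), 1)
tau_est, I0_est = -slope, np.exp(intercept)
print(f"tau ~ {tau_est:.3f}, I0 ~ {I0_est:.0f} W/m^2")
```

The extrapolated optical depth is what the abstract validates against AERONET retrievals.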
|
100 |
Visual space attention in three-dimensional spaceTucker, Andrew James, n/a January 2006 (has links)
Current models of visual spatial attention are based on the extent to which attention can be allocated in 2-dimensional displays. The distribution of attention in 3-dimensional space has received little consideration. A series of experiments were devised to explore the apparent inconsistencies in the literature pertaining to the allocation of spatial attention in the third dimension. A review of the literature attributed these inconsistencies to differences and limitations in the various methodologies employed, in addition to the use of differing attentional paradigms. An initial aim of this thesis was to develop a highly controlled novel adaptation of the conventional robust covert orienting of visual attention task (COVAT) in depth defined by either binocular (stereoscopic) or monocular cues. The results indicated that attentional selection in the COVAT is not allocated within a 3-dimensional representation of space. Consequently, an alternative measure of spatial attention in depth, the overlay interference task, was successfully validated in a different stereoscopic depth environment and then manipulated to further examine the allocation of attention in depth. Findings from the overlay interference experiments indicated that attentional selection is based on a representation that includes depth information, but only when an additional feature can aid 3D selection. Collectively, the results suggest a dissociation between two paradigms that are both purported to be measures of spatial attention. There appears to be a further dissociation between 2-dimensional and 3-dimensional attentional selection in both paradigms for different reasons. These behavioural results, combined with recent electrophysiological evidence, suggest that the temporal constraints of the 3D COVAT paradigm result in early selection based predominantly on retinotopic spatial coordinates prior to the complete construction of a 3-dimensional representation. Task requirements of the 3D overlay interference paradigm, on the other hand, while not being restricted by temporal constraints, demand that attentional selection occurs later, after the construction of a 3-dimensional representation, but only with the guidance of a secondary feature. Regardless of whether attentional selection occurs early or late, however, some component of selection appears to be based on viewer-centred spatial coordinates.
|