111. Constraint-Driven Open-World Scene Generation. Borlik, Hunter. 01 June 2023 (PDF)
We introduce an alternative method for open-world scene generation. In this thesis, Graph-based Wave Function Collapse (GWFC) is integrated with the Space Colonization Algorithm (SCA) and used to place objects in an unstructured 3D environment. The combined algorithm, Space Colonization Graph-based Wave Function Collapse (SC-GWFC), leverages the constraint-based capabilities of GWFC and the ability of SCA to populate arbitrary 3D volumes. We demonstrate that objects of variable scale can be used successfully with SC-GWFC. Because the algorithm runs in an interactive environment, we demonstrate iterative modification of a partially complete scene and incorporate procedural content generation (PCG) into the scene editing process. As part of the implementation, we also introduce our Scene Modeling Application for rendering and editing 3D scenes; it allows the constraints used by the SC-GWFC scene generator to be viewed and edited. We evaluate the performance characteristics of SC-GWFC in the Scene Modeling Application to demonstrate that it can be used interactively. Through the application, users specify adjacency requirements for objects, and SC-GWFC attempts to place objects in patterns that respect these rules. We demonstrate placement of up to 5000 items on a terrain using the proposed SC-GWFC technique.
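The thesis's own SC-GWFC implementation is not reproduced in this listing, but the core Wave Function Collapse loop it builds on can be sketched over an arbitrary graph. The `gwfc` helper below is a minimal illustrative sketch, not the author's code; node ids, tile labels, and the symmetric `allowed` adjacency set are assumptions:

```python
import random

def gwfc(nodes, edges, tiles, allowed):
    """Minimal graph-based WFC sketch. edges: node -> set of neighbors;
    allowed: set of (tile_a, tile_b) pairs permitted across an edge
    (must contain both orientations of each symmetric pair)."""
    # Each node starts with the full candidate tile set (its "domain").
    domains = {n: set(tiles) for n in nodes}
    while any(len(d) > 1 for d in domains.values()):
        # Collapse the uncollapsed node with the fewest remaining options.
        n = min((m for m in nodes if len(domains[m]) > 1),
                key=lambda m: len(domains[m]))
        domains[n] = {random.choice(sorted(domains[n]))}
        # Propagate: drop neighbor tiles with no compatible option left.
        queue = [n]
        while queue:
            cur = queue.pop()
            for nb in edges[cur]:
                keep = {t for t in domains[nb]
                        if any((s, t) in allowed for s in domains[cur])}
                if keep != domains[nb]:
                    if not keep:
                        raise RuntimeError("contradiction; restart or backtrack")
                    domains[nb] = keep
                    queue.append(nb)
    return {n: next(iter(d)) for n, d in domains.items()}
```

With tiles A and B and only A-B adjacency allowed, a path graph collapses to an alternating assignment; a real SC-GWFC system would run this over nodes produced by space colonization and support the interactive re-collapsing described above.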
112. The Evolution of a Technique. Perrone, Nicole. January 2008
No description available.
113. Semantic Movie Scene Segmentation Using Bag-of-Words Representation. Luo, Sai. 07 December 2017
No description available.
114. Analysis of rehearsal and performance of the role of Hecuba in Euripides' The Trojan Women. Cosgrove, Patricia L. January 1984
No description available.
115. The conception and production of the scenery design for A Midsummer Night's Dream. Houdyshell, LJ. January 1990
No description available.
116. 3D Object Detection from Images. Simonelli, Andrea. 28 September 2022
Remarkable advancements in Computer Vision, Artificial Intelligence, and Machine Learning have led to unprecedented breakthroughs in what machines are able to achieve. In many tasks, such as Image Classification, they are now capable of surpassing human performance.

While this is truly outstanding, there are still many tasks in which machines lag far behind. Walking in a room, driving on a highway, or grabbing some food are all actions that feel natural to us but can be quite unfeasible for them. Such actions require identifying and localizing objects in the environment, effectively building a robust understanding of the scene. Humans gain this understanding easily thanks to binocular vision, which provides a high-resolution, continuous stream of information that the brain processes efficiently. Things are much different for machines: with cameras instead of eyes and artificial neural networks instead of a brain, gaining this understanding is still an open problem.

This thesis does not attempt to solve the problem as a whole, but instead delves into a very relevant part of it: making machines identify and precisely localize objects in 3D space relying only on visual input, i.e. 3D Object Detection from Images. One of the most complex aspects of image-based 3D Object Detection is that it inherently requires solving many different sub-tasks, e.g. estimating an object's distance and its rotation. A first contribution of this thesis is an analysis of how these sub-tasks are usually learned, highlighting a destructive behavior that limits overall performance, and the proposal of an alternative learning method that avoids it. A second contribution is the discovery of a flaw in the computation of the metric widely used in the field, which required re-computing the performance of all published methods, and the introduction of a novel, unflawed metric that has since become the official one. A third contribution focuses on one particular sub-task, the estimation of an object's distance, which is shown to be the most challenging; a novel approach that normalizes the appearance of objects with respect to their distance greatly improves detection performance. A last contribution is a critical analysis of the recently proposed Pseudo-LiDAR methods, in which two flaws in their training protocol are identified and analyzed. On top of this, a novel method achieving state-of-the-art image-based 3D Object Detection is developed.
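The abstract does not spell out the metric flaw; in the KITTI 3D detection benchmark the widely reported issue was that 11-point interpolated average precision samples recall = 0, where a single confident true positive yields precision 1 and inflates the score, and the replacement samples 40 recall levels starting above 0. A hedged sketch of interpolated AP (the function name and toy precision/recall curve are illustrative, not the benchmark code):

```python
def interpolated_ap(recalls, precisions, recall_points):
    """Mean, over sampled recall levels, of the best precision achieved
    at that recall or beyond (0.0 if that recall is never reached)."""
    interp = [max((p for r, p in zip(recalls, precisions) if r >= rp),
                  default=0.0)
              for rp in recall_points]
    return sum(interp) / len(interp)

# Toy precision/recall curve for a hypothetical detector.
recalls = [0.0, 0.5, 1.0]
precisions = [1.0, 0.8, 0.5]

ap11 = interpolated_ap(recalls, precisions, [i / 10 for i in range(11)])     # includes recall 0
ap40 = interpolated_ap(recalls, precisions, [i / 40 for i in range(1, 41)])  # excludes recall 0
```

On this toy curve the 11-point variant scores higher than the 40-point one purely because of the recall-0 sample, illustrating the kind of systematic inflation such a flaw introduces.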
117. Features identification and tracking for an autonomous ground vehicle. Nguyen, Chuong Hoang. 14 June 2013
This thesis develops a feature identification and tracking system for an autonomous ground vehicle by focusing on four fundamental tasks: motion detection, object tracking, scene recognition, and object detection and recognition. For motion detection, we combined background subtraction using a mixture-of-Gaussians model with optical flow to highlight moving objects as well as newly entered objects that then stayed still. To increase the robustness of object tracking, we used a Kalman filter to combine a tracking method based on color histograms with one based on invariant features. For scene recognition, we applied the Census Transform Histogram (CENTRIST) algorithm, which builds on Census Transform images of the training data and a Support Vector Machine classifier, to recognize a total of 8 scene categories. Because detecting the horizon is also important for many navigation applications, horizon detection was performed in this thesis as well. Finally, the deformable parts-based model algorithm was implemented to detect common objects such as humans and vehicles; objects were detected only in the area below the horizon to reduce detection time and the false match rate. / Master of Science
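The Kalman-filter fusion of the two trackers can be illustrated with a scalar sketch. The `kalman_fuse` helper is a hypothetical illustration; the thesis's actual state model and noise settings are not given in the abstract:

```python
def kalman_fuse(x, P, z_hist, R_hist, z_feat, R_feat, Q=1e-3):
    """One predict step plus two sequential measurement updates of a
    scalar Kalman filter: z_hist from a color-histogram tracker and
    z_feat from an invariant-feature tracker (R_* are their variances)."""
    P = P + Q  # predict with a constant-position model
    for z, R in ((z_hist, R_hist), (z_feat, R_feat)):
        K = P / (P + R)        # Kalman gain
        x = x + K * (z - x)    # pull the state toward the measurement
        P = (1.0 - K) * P      # shrink the uncertainty
    return x, P
```

Sequential updates like this weight each tracker's estimate by its reliability, so the fused position lands between the two measurements and the posterior variance is smaller than either input's.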
118. Measurements of the effect of surface slant on perceived lightness. Bloj, Marina; Brainard, D.; Maloney, L.; Ripamonti, C.; Mitha, K.; Hauck, R.; Greenwald, S. May 2009
When a planar object is rotated with respect to a directional light source, the reflected luminance changes. If surface lightness is to be a reliable guide to surface identity, observers must compensate for such changes. To the extent they do, observers are said to be lightness constant. We report data from a lightness matching task that assesses lightness constancy with respect to changes in object slant. On each trial, observers viewed an achromatic standard object and indicated the best match from a palette of 36 grayscale samples. The standard object and the palette were visible simultaneously within an experimental chamber. The chamber illumination was provided from above by a theater stage lamp. The standard objects were uniformly painted flat cards. Different groups of naïve observers made matches under two sets of instructions. In the Neutral Instructions, observers were asked to match the appearance of the standard and palette sample. In the Paint Instructions, observers were asked to choose the palette sample that was painted the same as the standard. Several broad conclusions may be drawn from the results. First, data for most observers were neither luminance matches nor lightness constant matches. Second, there were large and reliable individual differences. To characterize these, a constancy index was obtained for each observer by comparing how well the data were accounted for by both luminance matching and lightness constancy. The index could take on values between 0 (luminance matching) and 1 (lightness constancy). Individual observer indices ranged between 0.17 and 0.63 with mean 0.40 and median 0.40. An auxiliary slant-matching experiment rules out variation in perceived slant as the source of the individual variability. Third, the effect of instructions was small compared to the inter-observer variability. Implications of the data for models of lightness perception are discussed.
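The abstract does not give the constancy index formula. One common construction, shown here purely as an assumption and not necessarily the authors' exact definition, compares how badly each candidate model fits the matches:

```python
def constancy_index(err_luminance, err_constancy):
    """Hypothetical constancy index: 0 when the matches are explained
    perfectly by luminance matching, 1 when explained perfectly by
    lightness constancy. Arguments are the fitting errors of each model."""
    return err_luminance / (err_luminance + err_constancy)
```

Under this form, an observer whose data the luminance-matching model fits with zero error scores 0, a perfectly lightness-constant observer scores 1, and the 0.17 to 0.63 range reported above would correspond to intermediate behavior.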
119. An equivalent illuminant model for the effect of surface slant on perceived lightness. Bloj, Marina; Ripamonti, C.; Mitha, K.; Hauck, R.; Greenwald, S.; Brainard, D. January 2004
In the companion study (C. Ripamonti et al., 2004), we present data that measure the effect of surface slant on perceived lightness. Observers are neither perfectly lightness constant nor luminance matchers, and there is considerable individual variation in performance. This work develops a parametric model that accounts for how each observer's lightness matches vary as a function of surface slant. The model is derived from consideration of an inverse optics calculation that could achieve constancy. The inverse optics calculation begins with parameters that describe the illumination geometry. If these parameters match those of the physical scene, the calculation achieves constancy. Deviations in the model's parameters from those of the scene predict deviations from constancy. We used numerical search to fit the model to each observer's data. The model accounts for the diverse range of results seen in the experimental data in a unified manner, and examination of its parameters allows interpretation of the data that goes beyond what is possible with the raw data alone.
120. Duggleby Howe, Burial J and the Eastern Yorkshire Club Scene. Gibson, Alex M.; Ogden, Alan R. January 2008