101 |
Automated 3D vision-based tracking of construction entities / Park, Man-Woo 21 August 2012 (has links)
On construction sites, tracking project-related entities such as construction equipment, materials, and personnel provides useful information for productivity measurement, progress monitoring, on-site safety enhancement, and activity sequence analysis. Radio frequency technologies such as Global Positioning Systems (GPS), Radio Frequency Identification (RFID), and Ultra Wide Band (UWB) are commonly used for this purpose. However, on large-scale congested sites, deploying, maintaining, and removing such systems can be costly and time-consuming because radio frequency technologies require a tag to be attached to each tracked entity. In addition, privacy issues can arise from tagging construction workers, which often limits the usability of these technologies on construction sites. A vision-based approach that can track moving objects in camera views can resolve these problems.
The purpose of this research is to investigate a vision-based tracking system that holds promise to overcome the limitations of existing radio frequency technologies on large-scale, congested sites. The proposed method uses videos from static cameras. A stereo camera system is employed for tracking construction entities in 3D. Once the cameras are fixed on the site, intrinsic and extrinsic camera parameters are estimated through camera calibration. The method automatically detects and tracks objects of interest, such as workers and equipment, in each camera view, generating 2D pixel coordinates of the tracked objects. The 2D pixel coordinates are then converted to 3D real-world coordinates based on the calibration. The method proposed in this research was implemented in the .NET Framework 4.0 environment and tested on real videos of construction sites. The test results indicated that the method could locate construction entities with accuracy comparable to GPS.
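The pixel-to-world conversion step described above can be sketched as a linear (DLT) triangulation from two calibrated views. The projection matrices and point below are illustrative stand-ins, not the calibration values or data from the study:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation: recover a 3D point from its
    2D pixel projections in two calibrated camera views."""
    # Each view contributes two rows to the homogeneous system A X = 0.
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)

# Hypothetical stereo rig: identical intrinsics, second camera shifted 0.5 m on x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

X_true = np.array([1.0, 0.5, 4.0])  # a point 4 m in front of the cameras
h = np.append(X_true, 1)
pt1 = (P1 @ h)[:2] / (P1 @ h)[2]    # projected pixel in view 1
pt2 = (P2 @ h)[:2] / (P2 @ h)[2]    # projected pixel in view 2
print(triangulate(P1, P2, pt1, pt2))  # recovers ~[1.0, 0.5, 4.0]
```

In practice the 2D inputs would come from the detector/tracker, and the projection matrices from the on-site calibration step.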
|
102 |
High quality integrated silicon nitride nanophotonic structures for visible light applications / Shah Hosseini, Ehsan 16 May 2011 (has links)
High quality nanophotonic structures fabricated on silicon nitride substrates and operating in the visible range of the spectrum are investigated. As most biological sensing applications, such as Raman and fluorescence sensing, require visible-light pumping and analysis, extending nanophotonics concepts to the visible range is essential. Traditionally, CMOS-compatible processes are used to make compact silicon nanophotonic structures. While the high index contrast of silicon-on-insulator (SOI) wafers offers high integration capability, the high absorption loss of silicon renders it unusable in the visible range. In this research, high-quality-factor microdisk and photonic crystal resonators, as well as high-resolution arrayed waveguide grating and superprism spectrometers, are fabricated and characterized in the visible range. These devices are integrated with fluidic structures, and their application to biosensing and athermal operation is investigated.
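For context on the "high quality factor" figure of merit, the loaded Q of a resonator is commonly read off a measured transmission spectrum as the resonance wavelength divided by its linewidth. A minimal sketch with illustrative numbers (not measurements from this work):

```python
def quality_factor(lambda_0_nm, fwhm_nm):
    """Loaded quality factor from a resonance dip: Q = λ0 / Δλ(FWHM)."""
    return lambda_0_nm / fwhm_nm

# Illustrative: a visible-range resonance near 652 nm with a 1 pm linewidth.
q = quality_factor(652.0, 0.001)
print(f"Q = {q:.2e}")  # Q = 6.52e+05
```

Narrower measured linewidths at a fixed resonance wavelength correspond directly to higher Q, which is why linewidth resolution limits how large a Q can be characterized.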
|
103 |
Data structures and algorithms for real-time ray tracing at the University of Texas at Austin / Hunt, Warren Andrew, 1983- 27 September 2012 (has links)
Modern rendering systems require fast and efficient acceleration structures in order to compute visibility in real time. I present several novel data structures and algorithms for computing visibility with high performance. In particular, I present two algorithms for improving heuristic-based acceleration structure builds. These algorithms, when used in a demand-driven way, have been shown to improve build performance by up to two orders of magnitude. Additionally, I introduce ray tracing in perspective-transformed space. I demonstrate that ray tracing in this space can significantly improve visibility performance for near-common-origin rays such as eye and shadow rays. I use these data structures and algorithms to support a key hypothesis of this dissertation: "There is no silver bullet for solving the visibility problem; many different acceleration structures will be required to achieve the highest performance." Specialized acceleration structures provide significantly better performance than generic ones, and building many specialized structures requires high-performance build techniques. Additionally, I present an optimization-based taxonomy for classifying acceleration structures and algorithms in order to identify which optimizations provide the largest improvement in performance. This taxonomy also provides context for the algorithms I present. Finally, I present several novel cost metrics (and a correction to an existing cost metric) to improve visibility performance when using metric-based acceleration structures.
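A standard instance of the heuristic-based builds and cost metrics discussed above is the surface area heuristic (SAH). The sketch below evaluates the conventional SAH split cost for axis-aligned bounding boxes; the geometry and cost constants are illustrative, not values from the dissertation:

```python
def surface_area(lo, hi):
    """Surface area of an axis-aligned bounding box given min/max corners."""
    dx, dy, dz = (hi[i] - lo[i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, right, n_left, n_right, c_trav=1.0, c_isect=1.0):
    """Expected cost of a split under the surface area heuristic:
    C = c_trav + (SA(L)/SA(P)) * nL * c_isect + (SA(R)/SA(P)) * nR * c_isect.
    The area ratios approximate the probability a random ray hits each child."""
    sa_p = surface_area(*parent)
    return (c_trav
            + surface_area(*left) / sa_p * n_left * c_isect
            + surface_area(*right) / sa_p * n_right * c_isect)

# Hypothetical split of a unit cube into two equal halves along x,
# with 4 primitives falling on each side.
parent = ((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
left = ((0.0, 0.0, 0.0), (0.5, 1.0, 1.0))
right = ((0.5, 0.0, 0.0), (1.0, 1.0, 1.0))
print(sah_cost(parent, left, right, 4, 4))  # 1 + (4/6)*4 + (4/6)*4 ≈ 6.33
```

A builder typically sweeps candidate split planes and keeps the one minimizing this cost, which is why fast approximate evaluation of the metric dominates build time.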
|
104 |
Constructing a language model based on data mining techniques for a Chinese character recognition system / Chen, Yong, 陳勇 January 2004 (has links)
Published or final version / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
|
105 |
Low-temperature-grown InGaAs quantum wells for optical device applications / Juodawlkis, Paul W. 05 1900 (has links)
No description available.
|
106 |
Optical music recognition using projections / Fujinaga, Ichiro January 1988 (has links)
No description available.
|
107 |
A prototype investigation of a multi-GHz multi-channel analog transient recorder / Kohnen, William. January 1986 (has links)
No description available.
|
108 |
Aircraft position estimation using lenticular sheet generated optical patterns / Barbieri, Nicholas P. 24 January 2008 (has links)
Lenticular sheets can be used with machine vision to determine the relative position between two objects. If a lenticular sheet of a given period is mounted above periodically spaced lines sharing the same period, lines appear on the lenticular sheet that translate along it in a direction perpendicular to observer motion. This behavior is modeled theoretically and tested experimentally, and is found to be linear within a finite range.
By arranging two lenticular sheets, configured as described above, in a mutually orthogonal configuration on a flat surface, the lines that appear on the sheets can be used by a camera to estimate its position relative to them. Two such devices were constructed to test the principle, and machine vision code was developed to ascertain position using these devices. In experimental testing, the machine vision code was found to reliably provide the angular position of a camera to within 1.4°.
The optical patterns that appear on the lenticular sheet surfaces are monitored using a digital camera. The resulting images are analyzed using Visual C++ in conjunction with the OpenCV library and the appropriate camera device drivers. The system is able to estimate height, yaw, and position relative to the optical target in real time and without the need for a prior reference.
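The linear displacement-to-angle behavior described above can be sketched as a one-dimensional calibration: fit a line to (displacement, angle) pairs within the linear range, then invert it at run time. The measurements below are hypothetical, not data from the thesis:

```python
import numpy as np

# Hypothetical calibration: measured line displacement (px) on the lenticular
# sheet at known camera angles (deg), within the device's linear range.
angles_deg = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
shift_px = np.array([-41.8, -20.6, 0.3, 20.9, 41.2])

# Fit the linear displacement-to-angle model.
slope, intercept = np.polyfit(shift_px, angles_deg, 1)

def estimate_angle(observed_shift_px):
    """Map an observed line displacement to an angular position estimate."""
    return slope * observed_shift_px + intercept

print(f"{estimate_angle(10.4):.1f} deg")  # roughly 5 deg for a mid-range shift
```

Outside the linear range of the sheet, the fitted model no longer applies, so a real system would need to detect when the observed shift leaves the calibrated interval.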
|
109 |
Simultaneous object detection and segmentation using top-down and bottom-up processing / Sharma, Vinay, January 2008 (has links)
Thesis (Ph. D.)--Ohio State University, 2008. / Title from first page of PDF file. Includes bibliographical references (p. 299-207).
|
110 |
A biologically inspired optical flow system for motion detection and object identification / Rijhwani, Vishal. January 2007 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2007. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 7, 2008). Includes bibliographical references.
|