  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Face recognition from video

Harguess, Joshua David 30 January 2012 (has links)
While the area of face recognition has been extensively studied in recent years, it remains a largely open problem, despite what movie and television studios would lead you to believe. Frontal, still face recognition research has seen much success in recent years from many different researchers. However, the accuracy of such systems can be greatly diminished by factors such as increasing the variability of the database, occluding the face, and varying the illumination of the face. Varying the pose of the face (yaw, pitch, and roll) and the facial expression (smile, frown, etc.) adds even more complexity to the face recognition task, as in the case of face recognition from video. In a more realistic video surveillance setting, a face recognition system should be robust to scale, pose, resolution, and occlusion, and should successfully track the face between frames. A more advanced face recognition system should also be able to improve the recognition result by utilizing the information present in multiple video cameras. We approach the problem of face recognition from video in the following manner. We assume that the training data for the system consists of only still image data, such as passport photos or mugshots in a real-world system. We then transform the problem of face recognition from video into a still face recognition problem. Our research focuses on solutions for detecting, tracking, and extracting face information from video frames so that it may be utilized effectively in a still face recognition system. We have developed four novel methods that assist in face recognition from video and multiple cameras. The first uses a patch-based method to handle the face recognition task when only patches, or parts, of the face are seen in a video, such as when the face is frequently occluded. The second fuses the recognition results of multiple cameras to improve recognition accuracy.
In the third solution, we utilize multiple overlapping video cameras to improve the face tracking result, which in turn improves the face recognition accuracy of the system. We additionally implement a methodology to detect and handle occlusion so that unwanted information is not used in the tracking algorithm. Finally, we introduce the average-half-face, which is shown to improve the results of still face recognition by exploiting the symmetry of the face. To better understand the use of the average-half-face in face recognition, an analysis of the effect of face symmetry on recognition results is presented.
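The average-half-face idea above can be sketched in a few lines: average the face image with its horizontal mirror, then keep one half, halving the feature dimension while exploiting facial symmetry. This is an illustrative reconstruction from the abstract's description, not the thesis's exact implementation; it assumes a roughly centered face image.

```python
import numpy as np

def average_half_face(face: np.ndarray) -> np.ndarray:
    """Average a (roughly centered) face image with its horizontal
    mirror, then keep only the left half.  Exploits facial symmetry
    to halve the input dimension for a still-image recognizer."""
    mirrored = np.fliplr(face)                    # reflect about the vertical axis
    averaged = (face.astype(float) + mirrored) / 2.0
    half_width = face.shape[1] // 2
    return averaged[:, :half_width]               # left half of the averaged face

# Toy usage on a 4x4 "face"; a real input would be a cropped grayscale face image
face = np.arange(16, dtype=float).reshape(4, 4)
half = average_half_face(face)
print(half.shape)                                 # (4, 2)
```

The resulting half-face can then be fed to any still-image recognizer (e.g. an eigenface-style classifier) in place of the full face.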
2

HIGH-PERFORMANCE FULL-VIEW VISION SYSTEM WITH GUIDANCE SUPPORT OF ACOUSTIC AND MICROWAVE ARRAYS

Clark, Nicholas, Dunne, Fiona, Lee, Michael, Lee, Hua 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / This paper describes the concept of a wide-angle-coverage optical vision system integrated with guidance support from microwave or acoustical imaging arrays. The objective is to provide the capability of effective high-resolution full-view monitoring and sensing. The optical component, formed by a multi-camera array, is responsible for the main interface with human users. The acoustical and microwave arrays are integrated, allowing the system to function in an event-triggered modality for optimal efficiency. In this paper, the arrays discussed are in circular configurations. With minor modification, the system can also function with linear array configurations.
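The event-triggered guidance described above can be illustrated with a small sketch: a bearing estimated by the acoustic or microwave array is mapped to the camera in the circular optical array that covers that direction. The uniform angular layout and camera count are assumptions for illustration, not taken from the paper.

```python
def select_camera(bearing_deg: float, num_cameras: int) -> int:
    """Map a target bearing (e.g. from the acoustic/microwave array) to the
    index of the camera in a circular multi-camera ring that covers it.
    Assumes cameras are evenly spaced, camera 0 covering [0, 360/n) degrees."""
    sector = 360.0 / num_cameras                  # angular coverage per camera
    return int((bearing_deg % 360.0) // sector)

# An acoustic detection at 95 degrees triggers camera 2 of an 8-camera ring
print(select_camera(95.0, 8))                     # 2
```

In the event-triggered modality, only the selected camera's feed would be brought to the operator's attention, keeping the high-resolution optical processing focused where the arrays detect activity.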
3

Real-Time Object Motion and 3D Localization from Geometry

Lee, Young Jin January 2014 (has links)
No description available.
4

CMOS IMAGE SENSORS WITH COMPRESSIVE SENSING ACQUISITION

Dadkhah, Mohammadreza January 2013 (has links)
The compressive sensing (CS) paradigm provides an efficient image acquisition technique through simultaneous sensing and compression. Since the imaging philosophy in CS imagers differs from that of conventional imaging systems, new physical structures are required to design cameras suitable for CS imaging.

While this work is focused on the hardware implementation of CS encoding for CMOS sensors, the image reconstruction problem of CS is also studied. The energy compaction properties of the image in different domains are exploited to modify conventional reconstruction problems. Experimental results show that the modified methods outperform the 1-norm and TV (total variation) reconstruction algorithms by up to 2.5 dB in PSNR.

We have also designed, fabricated, and measured the performance of two real-time and area-efficient implementations of CS encoding for CMOS imagers. In the first implementation, the idea of an active pixel sensor (APS) with an integrator and in-pixel current switches is used to develop a compact, current-mode implementation of CS encoding in the analog domain. In the second implementation, the conventional three-transistor APS structure and switched-capacitor (SC) circuits are exploited to develop an analog, voltage-mode implementation of CS encoding. With the analog, block-based implementation, sensing and encoding are performed in the same time interval, making the encoding process real-time. The proposed structures are designed and fabricated in 130 nm technology. The experimental results confirm the scalability, the functionality of the block read-out, and the validity of the design in producing monotonic and appropriate CS measurements.

This work also discusses CS-CMOS sensors for high-frame-rate CS video coding. A multiple-camera, coded-exposure video coding method is discussed, and a new pixel and array structure for its hardware implementation is presented. / Doctor of Philosophy (PhD)
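The block-based CS encoding described above amounts to projecting each N-pixel block onto m ≪ N pseudo-random patterns, y = Φx. A minimal numerical sketch follows; the Bernoulli ±1 sensing matrix mimics in-pixel switching in spirit only, and the block and measurement sizes are illustrative assumptions, not the fabricated chip's parameters.

```python
import numpy as np

def cs_encode_block(block: np.ndarray, m: int, rng: np.random.Generator):
    """Compressively sense one image block: flatten the N-pixel block and
    take m << N random +/-1 projections (y = Phi @ x).  Each row of Phi
    corresponds to one measurement pattern applied across the block."""
    x = block.reshape(-1).astype(float)                # N pixels as a vector
    phi = rng.choice([-1.0, 1.0], size=(m, x.size))    # Bernoulli sensing matrix
    y = phi @ x                                        # m CS measurements
    return y, phi

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8))              # one 8x8 sensor block
y, phi = cs_encode_block(block, m=16, rng=rng)         # 16 measurements for 64 pixels
print(y.shape)                                         # (16,)
```

A reconstruction stage would then recover the block from (y, phi) by solving a sparsity-regularized inverse problem, e.g. 1-norm or TV minimization as compared in the thesis.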
5

Optical Synchronization of Time-of-Flight Cameras

Wermke, Felix 23 November 2023 (has links)
Time-of-Flight (ToF) cameras produce depth images (three-dimensional images) by measuring the time between the emission of infrared light and the reception of its reflection. A setup of multiple ToF cameras may be used to overcome their comparatively low resolution, increase the field of view, and reduce occlusion.
However, the simultaneous operation of multiple ToF cameras introduces the possibility of interference, resulting in erroneous depth measurements. The problem of interference is not only related to a collaborative multi-camera setup but also to multiple ToF cameras operating independently. In this work, a new optical synchronization for ToF cameras is presented, requiring no additional hardware or infrastructure to utilize a time-division multiple access (TDMA) scheme to mitigate interference. It effectively enables a camera to sense the acquisition process of other ToF cameras and rapidly synchronize its acquisition times to operate without interference. Instead of requiring cables for synchronization, only the existing hardware is utilized to enable an optical synchronization. To achieve this, the camera's firmware is extended with the synchronization procedure. The optical synchronization has been conceptualized, implemented, and verified with an experimental setup deploying three ToF cameras. The measurements show the efficacy of the proposed optical synchronization. During the experiments, the frame rate was reduced by only about 1% due to the synchronization procedure.
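The TDMA scheme underlying the synchronization above can be sketched as non-overlapping acquisition slots: each camera illuminates and integrates only in its own slot, so the emissions never interfere. The slot and guard durations below are illustrative assumptions, not the timings used in the thesis.

```python
def tdma_schedule(num_cameras: int, exposure_ms: float, guard_ms: float):
    """Assign each ToF camera a non-overlapping acquisition slot within a
    repeating cycle.  Returns (camera, slot_start, slot_end) tuples and the
    total cycle length; the guard interval separates adjacent slots."""
    slot = exposure_ms + guard_ms                  # one camera's share of the cycle
    cycle = slot * num_cameras                     # full TDMA cycle length
    slots = [(i, i * slot, i * slot + exposure_ms) for i in range(num_cameras)]
    return slots, cycle

# Three cameras, as in the experimental setup; timings are hypothetical
slots, cycle = tdma_schedule(3, exposure_ms=4.0, guard_ms=0.5)
for cam, start, end in slots:
    print(f"camera {cam}: acquires {start:.1f}-{end:.1f} ms of each {cycle:.1f} ms cycle")
```

The optical part of the method is that each camera detects the others' infrared emissions with its own sensor and aligns itself to a free slot, rather than receiving the schedule over a sync cable.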
