1. Acoustic Analysis of R.E.E.L. Semi-Reverberant Sound Chamber

Elliston, Sean David (May 2012)
The Riverside Energy Efficiency Laboratory (REEL) at Texas A&M University conducts sound quality testing for the Home Ventilating Institute (HVI). When HVI initially established its sound quality test, a semi-reverberant sound chamber was built at REEL to conduct the tests. HVI created a standard specifying the procedure for sound quality testing, with an emphasis on performance, reliability, and accuracy; it was based on several ANSI standards for sound testing procedures, setup and equipment, and sound rating calculations. REEL continues to perform sound quality testing for HVI using the semi-reverberant sound chamber, and the standard has been revised and updated as better ways of representing sound quality test results have been developed. The chamber's own acoustic characteristics provide valuable data for guiding such developments.

The purpose of this thesis was to analyze the performance of the semi-reverberant sound chamber. The chamber's sound transmission loss was determined using a fan source with known sound power across the 24 tested 1/3 octave frequency bands, 50 Hz - 10,000 Hz. Sound pressure was recorded inside the chamber and outside the chamber at the sound source, with the source placed at three different locations around the chamber. In addition, sound pressure was measured in real time to study the amount of fluctuation and the maximum amplitude, and background noise was measured inside the chamber for these tests.

The sound transmission loss profiles were identical for each source location. The lowest two 1/3 octave bands, 50 Hz and 63 Hz, have low transmission losses; the profile jumps up at the following band and increases to a peak around 1600 Hz before slightly decreasing. The time-domain sound pressure profiles showed similar behavior: the most fluctuation and the greatest peaks occurred in the lower 1/3 octave bands, diminishing toward the higher bands. With these results, sound sources around the chamber can be evaluated to determine whether they could affect the sound quality tests, and the transmission loss values can help predict the expected performance gain from modifications to the chamber.
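As a rough illustration of the measurement arithmetic described in this abstract, the following Python sketch computes per-band transmission loss as the drop from the outside (source-side) level to the background-corrected level measured inside the chamber. The band list, variable names, and the simple level-difference definition of transmission loss are illustrative assumptions, not the thesis's exact procedure.

```python
import math

# Standard 1/3-octave band centers from 50 Hz to 10 kHz (24 bands),
# matching the range cited in the abstract.
BANDS_HZ = [50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630,
            800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300,
            8000, 10000]

def correct_for_background(level_db: float, background_db: float) -> float:
    """Subtract background-noise energy from a measured level (both in dB)."""
    signal = 10 ** (level_db / 10) - 10 ** (background_db / 10)
    return 10 * math.log10(signal) if signal > 0 else float("-inf")

def transmission_loss(outside_db, inside_db, background_db):
    """Per-band TL: outside (source-side) level minus the
    background-corrected level measured inside the chamber."""
    return [lo - correct_for_background(li, lb)
            for lo, li, lb in zip(outside_db, inside_db, background_db)]
```

Averaging such per-band values over the three source locations would mirror the multi-location measurement described above.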
2. REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

Su, Po-Chang (01 January 2017)
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical spaces can enhance immersive experiences for users. To maximize coverage and minimize cost, practical applications often use a small number of RGB-D cameras sparsely placed around the environment. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline.

In this dissertation, I propose novel solutions to these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras in a network. Second, I propose a novel rendering pipeline that can capture and render dynamic scenes in real time in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment.

To provide fast and robust calibration for a sparse RGB-D camera network, first, correspondences between different camera views are established using a spherical calibration object; we show that this approach outperforms techniques based on planar calibration objects. Second, instead of modeling camera extrinsics with a rigid transformation, which is optimal only for pinhole cameras, different view transformation functions, including rigid transformation, polynomial transformation, and manifold regression, are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates.

To achieve realistic mirror rendering, a robust eye detector identifies the viewer's 3D location and the reflective scene is rendered accordingly. The limited field of view of a single camera is overcome by our calibrated RGB-D camera network, which is scalable to capture an arbitrarily large environment. Rendering is accomplished by raytracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
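As a minimal sketch of the rigid-transformation option mentioned in this abstract (not the dissertation's actual code), the snippet below estimates a rotation and translation between two camera frames from corresponding 3D points, such as sphere centers seen by both cameras, using the standard SVD-based (Kabsch) least-squares alignment. The function name and the use of NumPy are assumptions.

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) such that dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. sphere
    centers observed by two RGB-D cameras in their own frames.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation with det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Solving this between each camera and a reference camera would give initial extrinsic estimates, which a bundle-adjustment-style refinement over the global 3D projection error, as described above, could then fine-tune.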
