
Model based 3D vision synthesis and analysis for production audit of installations

One of the challenging problems in the aerospace industry is to design an automated 3D vision system that can sense installation components in an assembly environment and check that certain safety constraints are duly respected. This thesis describes a concept application to aid a safety engineer in auditing a production aircraft against safety-driven installation requirements such as segregation, proximity, orientation and trajectory. The capability is achieved in the following steps. The first step is to capture images of a product and measure distances between datum points within the product, with or without reference to a planar surface; this gives the safety engineer a means to perform measurements on a set of captured images of the equipment of interest. The next step is to reconstruct a digital model of the fabricated product from multiple captured images, repositioning parts to match the actual build. The safety-related installation constraints are then projected onto the 3D digital reconstruction, respecting the original intent of the constraints as defined in the digital mock-up. Differences between the 3D reconstruction of the actual product and the design-time digital mock-up are identified, and finally those differences, or non-conformances, that are relevant to the safety-driven installation requirements are reported with reference to the original safety requirement intent. Together, these steps give the safety engineer the ability to overlay a digital reconstruction that is as true to the fabricated product as possible, so that they can see where the product conforms, or fails to conform, to the safety-driven installation requirements. The work has produced a concept demonstrator that will be further developed in future work to address accuracy, workflow and process efficiency.

A new depth-based segmentation technique, GrabcutD, is also proposed as an improvement to Grabcut, an existing graph-cut-based segmentation method. Conventional Grabcut relies only on color information to achieve segmentation. In stereo or multiview analysis, however, additional information is available that can also be used to improve segmentation: depth-based cues carry the discriminative power to ascertain whether an object is nearer or farther. We show the usefulness of the approach when stereo information is available and evaluate it against state-of-the-art results on standard datasets.
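As an illustrative aside, not part of the abstract itself, the point-to-point and point-to-plane measurements described in the first step can be reduced to simple 3D geometry once the datum points have been triangulated into a common coordinate frame from the captured images. The following minimal sketch assumes exactly that; the function names and inputs are hypothetical and chosen only for illustration.

```python
import numpy as np

def datum_distance(p, q):
    """Euclidean distance between two 3D datum points
    (e.g. triangulated from a calibrated stereo pair)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.linalg.norm(p - q))

def datum_to_plane_distance(p, plane_point, plane_normal):
    """Perpendicular distance from a datum point to a reference planar
    surface, the plane being given by a point on it and its normal."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                       # normalise the plane normal
    d = np.asarray(p, dtype=float) - np.asarray(plane_point, dtype=float)
    return float(abs(np.dot(d, n)))

# Example: two datum points and a horizontal reference plane through the origin.
print(datum_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))                # 5.0
print(datum_to_plane_distance((1.0, 2.0, 7.5), (0, 0, 0), (0, 0, 1)))  # 7.5
```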
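The abstract does not spell out the GrabcutD formulation, so the sketch below is only an assumed illustration of the general idea of combining color-based GrabCut with a depth cue: OpenCV's standard grabCut is run from a bounding box, the resulting seed mask is edited using a depth interval assumed to contain the object, and GrabCut is re-run in mask mode. The function name, the depth-interval prior and the parameters are assumptions made for illustration, not the method proposed in the thesis.

```python
import numpy as np
import cv2

def grabcut_with_depth_prior(image_bgr, depth_map, rect, depth_range, iters=5):
    """Colour GrabCut followed by a depth-based refinement of the seed mask.

    image_bgr   : H x W x 3 colour image (uint8)
    depth_map   : H x W depth values aligned to the image (e.g. from stereo)
    rect        : (x, y, w, h) bounding box around the object of interest
    depth_range : (near, far) depth interval assumed to contain the object
    """
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)

    # Standard colour-only GrabCut initialised from the rectangle.
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_RECT)

    # Depth prior: pixels whose depth falls outside the assumed object
    # interval become "probable background", pixels inside it that were
    # labelled "probable background" are promoted to "probable foreground".
    near, far = depth_range
    outside = (depth_map < near) | (depth_map > far)
    inside = ~outside
    mask[outside] = cv2.GC_PR_BGD
    mask[inside & (mask == cv2.GC_PR_BGD)] = cv2.GC_PR_FGD

    # Re-run GrabCut from the edited mask so that colour and depth
    # evidence are combined in the final labelling.
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_MASK)

    # Collapse the four GrabCut labels into a binary foreground mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```

A prior of this kind only reweights the GrabCut seed labels with depth, whereas GrabcutD as described in the abstract incorporates depth into the segmentation itself, so the sketch should be read as a baseline approximation of the idea rather than the thesis method.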

Identifier oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:667210
Date January 2013
Creators Vaiapury, Karthikeyan
Publisher Queen Mary, University of London
Source Sets Ethos UK
Detected Language English
Type Electronic Thesis or Dissertation
Source http://qmro.qmul.ac.uk/xmlui/handle/123456789/8721
