About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1. Improved monocular videogrammetry for generating 3D dense point clouds of built infrastructure

Rashidi, Abbas (27 August 2014)
Videogrammetry is an affordable and easy-to-use technology for spatial 3D scene recovery. When applied to the civil engineering domain, several issues have to be taken into account.

First, videotaping large-scale civil infrastructure scenes usually produces large video files filled with blurry, noisy, or simply redundant frames. This is often due to a higher-than-necessary ratio of frame rate to camera speed, camera and lens imperfections, and uncontrolled camera motion that results in motion blur. Only a small percentage of the collected video frames are required to achieve robust results, but choosing the right frames is a tough challenge.

Second, the point cloud generated by a monocular videogrammetric pipeline is only defined up to scale: the user has to know at least one dimension of an object in the scene to scale the entire scene. This significantly narrows the applications of generated point clouds in the civil engineering domain, since measurement is an essential part of every as-built documentation technology.

Finally, for various reasons, including insufficient coverage while videotaping the scene and the texture-less areas common in most indoor/outdoor civil engineering scenes, the quality of the generated point clouds is sometimes poor. This deficiency appears in the form of outliers or of holes and gaps on the surfaces of point clouds. Several researchers have focused on this particular problem; however, the major issue with all currently existing algorithms is that they treat holes and gaps as part of a smooth surface. This approach is not robust at the intersections of different surfaces or at corners where there are sharp edges. A robust hole/gap-filling algorithm should maintain sharp edges and corners, since these usually carry useful information, especially for applications in the civil and infrastructure engineering domain.

To tackle these issues, this research presents and validates an improved videogrammetric pipeline for as-built documentation of indoor/outdoor applications in civil engineering. The research consists of three main components (sketches of the first two follow below):

1. Optimized selection of key frames for processing. It is necessary to choose a number of informative key frames to get the best results from the videogrammetric pipeline. This step is particularly important for outdoor environments, as it is impossible to process every frame of a large video clip.
2. Automated calculation of the absolute scale of the scene. A novel approach for obtaining the absolute scale of the point cloud by using 2D and 3D patterns is proposed and validated.
3. Point cloud cleaning and filling of holes on the surfaces of generated point clouds. The proposed algorithm fills holes and gaps on the surfaces of point cloud data while maintaining sharp edges.

To narrow the scope of the research, the main focus is on two specific applications: (1) as-built documentation of bridges and buildings as outdoor case studies, and (2) as-built documentation of offices and rooms as indoor case studies. Other potential applications of monocular videogrammetry in the civil engineering domain are out of the scope of this research. Three metrics, i.e. accuracy, completeness, and processing time, are used to evaluate the proposed algorithms.
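The abstract does not spell out the key-frame selection criteria, but the blur problem it describes suggests a sharpness filter over temporally subsampled frames. The following is a minimal, hypothetical sketch in Python/OpenCV, assuming variance of the Laplacian as the sharpness proxy; the `step` and `blur_threshold` parameters are illustrative values, not taken from the thesis.

```python
# Hypothetical sketch of blur-aware key-frame selection; the thesis's
# actual selection criteria are not specified in this abstract.
# Assumes OpenCV (cv2) is installed; thresholds are illustrative.
import cv2

def select_key_frames(video_path, step=15, blur_threshold=100.0):
    """Sample every `step`-th frame, keeping only frames whose
    variance-of-Laplacian (a common sharpness proxy) exceeds the
    blur threshold. Returns the indices of the retained frames."""
    cap = cv2.VideoCapture(video_path)
    key_frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:
                key_frames.append(index)
        index += 1
    cap.release()
    return key_frames
```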
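Recovering absolute scale, as described above, reduces to a single scalar once one true dimension in the scene is known. A minimal sketch under that assumption follows; function and variable names are hypothetical, and the thesis's 2D/3D pattern-detection step is not reproduced here.

```python
# Minimal sketch of absolute-scale recovery for an up-to-scale point
# cloud. Assumes two reconstructed points whose true metric separation
# is known (e.g. from a pattern of known size placed in the scene).
import numpy as np

def apply_absolute_scale(points, p_a, p_b, true_distance):
    """Scale an (N, 3) point cloud so that the reconstructed distance
    between points p_a and p_b matches the known true_distance."""
    reconstructed = np.linalg.norm(np.asarray(p_a) - np.asarray(p_b))
    scale = true_distance / reconstructed
    return np.asarray(points) * scale

# Example: if the first two points correspond to a pattern edge known
# to be 0.5 m long, every coordinate is scaled accordingly.
cloud = np.random.rand(1000, 3)
scaled = apply_absolute_scale(cloud, cloud[0], cloud[1],
                              true_distance=0.5)
```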
2. Evaluation Techniques and Graph-Based Algorithms for Automatic Summarization and Keyphrase Extraction

Hamid, Fahmida (08 1900)
Automatic text summarization and keyphrase extraction are two interesting areas of research that span natural language processing and information retrieval. They have recently become very popular because of their wide applicability. Devising generic techniques for these tasks is challenging due to several issues, yet a good number of intelligent systems perform them. As different systems are designed from different perspectives, evaluating their performance with a generic strategy is crucial, and it has become immensely important to do so with minimal human effort.

In our work, we focus on designing a relativized scale for evaluating different algorithms. This is our major contribution, which challenges the traditional approach of working with an absolute scale. We consider the impact of several environment variables (the lengths of the document, the references, and the system-generated outputs) on performance; instead of fixing rigid lengths, we show how to adjust to their variations. We derive a mathematically sound baseline that should work for all kinds of documents. We emphasize automatically determining the syntactic well-formedness of the structures (sentences). We also propose defining an equivalence class for each unit (e.g. word) instead of the exact string-matching strategy, and we present an evaluation approach that considers the weighted relatedness of multiple references to adjust to the degree of disagreement among the gold standards. We publish the proposed approach as a free tool so that other systems can use it, and we have also assembled a dataset of scientific articles with a reference summary and keyphrases for each document.

Our approach is applicable not only to single-document tasks but also to multi-document tasks. We have tested the evaluation method on three intrinsic tasks (taken from the DUC 2004 conference), and in all three cases it correlates positively with ROUGE. Based on our experiments on the DUC 2004 question-answering task, it correlates with the human decision (an extrinsic task) with 36.008% accuracy. In general, the proposed relativized scale performs as well as the popular technique (ROUGE) while remaining flexible with respect to output length.

As part of the evaluation, we have also devised a new graph-based algorithm focused on sentiment analysis. The proposed model extracts units (e.g. words or sentences) from the original text belonging either to the positive sentiment pole or to the negative sentiment pole. It embeds both types of sentiment flow into a single text-graph, composed of words or phrases as nodes and their relations as edges. By recursively applying two mutually exclusive relations, the model builds a final ranking of the nodes and, based on that ranking, extracts two segments from the article: one with highly positive sentiment and the other with highly negative sentiment. The output of this model was compared against the non-polar TextRank output to quantify how much of the polar summaries cover the facts along with the sentiment.
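The equivalence-class idea above can be illustrated with a small sketch. The abstract does not say which relation defines a class, so Porter stemming is assumed here purely for illustration:

```python
# Illustrative sketch of unit matching via equivalence classes rather
# than exact string matching; the abstract does not specify how the
# classes are defined, so Porter stemming (via NLTK) is assumed.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def equivalence_class(word):
    """Map a word to its class representative (here, its stem)."""
    return stemmer.stem(word.lower())

def class_overlap(system_words, reference_words):
    """Fraction of reference units matched by the system output,
    where two units match if they share an equivalence class."""
    sys_classes = {equivalence_class(w) for w in system_words}
    ref_classes = {equivalence_class(w) for w in reference_words}
    return len(sys_classes & ref_classes) / max(len(ref_classes), 1)

# "summaries" and "summary" now count as a match:
print(class_overlap(["automatic", "summaries"], ["summary", "evaluation"]))
```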
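The two mutually exclusive relations of the sentiment-flow model are not detailed in the abstract, so the following sketch substitutes a single personalized, PageRank-style recursion seeded from one sentiment pole. It illustrates the general shape of such a polarity-biased ranking, not the thesis's exact algorithm:

```python
# Hedged sketch of a polarity-seeded, TextRank-style node ranking over
# a word graph; the seed set and damping factor are assumptions, and
# the thesis's dual-relation recursion is simplified to one recursion
# per sentiment pole.

def polar_rank(graph, seeds, damping=0.85, iterations=50):
    """graph: {node: [neighbor, ...]}; seeds: polar seed words.
    Returns a score per node; higher means closer to the seed pole."""
    nodes = list(graph)
    teleport = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {}
        for n in nodes:
            # Mass flowing in from every node that links to n.
            incoming = sum(score[m] / len(graph[m])
                           for m in nodes if n in graph[m])
            new[n] = (1 - damping) * teleport[n] + damping * incoming
        score = new
    return score

# Toy co-occurrence graph ranked against a positive seed word;
# running it twice (positive seeds, then negative seeds) would yield
# the two polar segments described above.
g = {"good": ["service"], "service": ["good", "slow"], "slow": ["service"]}
print(polar_rank(g, seeds={"good"}))
```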
