21

Incremental free-space carving for real-time 3D reconstruction

Lovi, David Israel Unknown Date
No description available.
22

3D RECONSTRUCTION USING MULTI-VIEW IMAGING SYSTEM

Huang, Conglin 01 January 2009 (has links)
This thesis presents a new system that reconstructs a 3D representation of dental casts. To maintain the integrity of the 3D representation, a standard model is built to cover the blind spots that the camera cannot reach. The standard model is obtained by scanning a real human mouth model with a laser scanner and is then simplified by an algorithm based on iterative contraction of vertex pairs; a local parametrization method is applied to the simplified standard model to obtain curvature information. The system uses a digital camera with a square tube mirror in front of it to capture multi-view images. The mirror is made of stainless steel to avoid double reflections. The reflected areas of the image are treated as images taken by virtual cameras, so only one camera calibration is needed: the virtual cameras share the intrinsic parameters of the real camera. Once corresponding points are identified, depth is computed by a simple and accurate geometry-based method. Correspondences are selected by a feature-point-based stereo matching process that combines fast normalized cross-correlation and simulated annealing.
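As an illustration of the matching step named in this abstract, the sketch below scores candidate correspondences along a rectified scanline with normalized cross-correlation (NCC); the patch size, search range and the assumption of an equal-sized, rectified image pair are mine, not details taken from the thesis.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized grayscale patches."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(left, right, row, col, half=7, max_disp=64):
    """Search along the same row of `right` for the disparity whose patch best
    matches the (2*half+1)^2 patch centred at (row, col) in `left`."""
    h, w = left.shape
    if not (half <= row < h - half and half <= col < w - half):
        return None  # reference patch would fall outside the image
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append((ncc(ref, cand), d))
    return max(scores)[1] if scores else None
```

In a full pipeline this greedy per-pixel search would be refined by a global step such as the simulated annealing the abstract mentions.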
23

Procedural reconstruction of buildings : towards large scale automatic 3D modeling of urban environments

Simon, Loïc 25 July 2011 (has links) (PDF)
This thesis is devoted to 2D and 3D modeling of urban environments using structured representations and grammars. Our approach introduces a semantic representation for buildings that encodes expected architectural constraints and can derive complex instances from fairly simple grammars. Furthermore, we propose two novel inference algorithms for parsing images with such grammars. First, a steepest-ascent hill-climbing scheme is used to derive the grammar and the corresponding parameters from a single facade view; it combines the grammar constraints with the expected visual properties of the different architectural elements. To address more complex scenarios and incorporate 3D information, a second inference strategy based on evolutionary algorithms optimizes a two-component objective function that introduces depth cues. The proposed framework was evaluated qualitatively and quantitatively on a benchmark of annotated facades, demonstrating robustness to challenging situations. A substantial improvement due to the strong grammatical context was shown in comparison with the same appearance models coupled with local priors. Our approach therefore provides powerful techniques in response to the increasing demand for large-scale 3D modeling of real environments through compact, structured and semantic representations, while opening new perspectives for image understanding.
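The steepest-ascent idea mentioned above can be sketched generically as follows; the state representation, neighbour generator and scoring function here are hypothetical stand-ins, not the thesis's facade-grammar machinery.

```python
import random

def hill_climb(initial_state, neighbours, score, max_iters=1000):
    """Greedily move to the best-scoring neighbour until no improvement."""
    current = initial_state
    current_score = score(current)
    for _ in range(max_iters):
        candidates = neighbours(current)
        if not candidates:
            break
        best = max(candidates, key=score)
        best_score = score(best)
        if best_score <= current_score:
            break  # local optimum reached
        current, current_score = best, best_score
    return current, current_score

# Toy usage: tune a single split-position parameter against a made-up objective.
target = 0.37
state0 = random.random()
result, _ = hill_climb(
    state0,
    neighbours=lambda x: [x + 0.01, x - 0.01],
    score=lambda x: -abs(x - target),
)
```

In the grammar-parsing setting the "state" would be a derivation tree plus its parameters, and the score would mix grammar constraints with image evidence.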
24

Shape Estimation under General Reflectance and Transparency

Morris, Nigel Jed Wesley 31 August 2011 (has links)
In recent years there has been significant progress in increasing the scope, accuracy and flexibility of 3D photography methods. However, there are still significant open problems where the complex optical properties of mirroring or transparent objects cause many assumptions of traditional algorithms to break down. In this work we present three approaches that address some of these challenges using a few camera views and simple illumination. First, we consider the problem of reconstructing the 3D position and surface normal of points on a time-varying refractive surface. We show that two viewpoints are sufficient to solve this problem in the general case, even if the refractive index is unknown. We introduce a novel "stereo matching" criterion called refractive disparity, appropriate for refractive scenes, and develop an optimization-based algorithm for individually reconstructing the position and normal of each point projecting to a pixel in the input views. Second, we present a new method for reconstructing the exterior surface of a complex transparent scene with an inhomogeneous interior. We capture images from each viewpoint while moving a proximal light source over a 2D or 3D set of positions, giving a 2D (or 3D) dataset per pixel, called the scatter trace. The key observation is that while light transport within a transparent scene's interior can be exceedingly complex, a pixel's scatter trace has a highly constrained geometry that reveals the direct surface reflection and leads to a simple "scatter-trace stereo" algorithm for computing the exterior surface geometry. Finally, we develop a reconstruction system for scenes with reflectance properties ranging from diffuse to specular. We capture images of the scene as it is illuminated by a planar, spatially non-uniform light source. We then show that if the source is translated to a parallel position farther from the scene, a particular scene point integrates a magnified region of light from the plane. We observe this magnification at each pixel and show how it relates to the source-relative depth of the surface. Next we show how a calibration relating the camera and source planes allows for robustness to specular objects and recovery of 3D surface points.
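For readers unfamiliar with the underlying geometry, the sketch below shows standard two-view linear (DLT) triangulation, the conventional machinery whose assumptions methods such as refractive disparity and scatter-trace stereo deliberately relax; it is not the thesis's algorithm.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Recover a 3D point from pixel coordinates x1, x2 observed by two
    cameras with 3x4 projection matrices P1, P2 (linear DLT method)."""
    u1, v1 = x1
    u2, v2 = x2
    # Each observation gives two linear constraints on the homogeneous point.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

The thesis's contribution lies precisely in the cases where this pinhole-plus-direct-reflection model no longer holds (refraction, scattering, mirror-like reflectance).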
25

Sensitivity Analysis of Virtual Terrain Accuracy for Vision Based Algorithms

Marc, Róbert January 2012 (has links)
A number of three-dimensional virtual environments are available for developing vision-based robotic capabilities. They have the advantage of allowing repeated trials at low cost compared to field testing; however, they still suffer from a lack of realism and credibility for validation and verification. This work consists of the creation and validation of state-of-the-art virtual terrains for research on vision-based navigation algorithms for Martian rovers. This Master's thesis focuses on the creation of virtual environments that are exact imitations of the planetary terrain testbed at the European Space Agency's ESTEC site. Two different techniques are used to recreate the Martian-like site in a simulator: the first uses a novel multi-view stereo reconstruction technique, and the second uses a high-precision laser scanning system to accurately map the terrain. The real environment is compared to the virtual environments at exactly the same locations using captured stereo camera images, and the differences are characterized with the main known feature detectors (e.g., SURF and SIFT). The present work led to the creation and validation of a database of highly realistic virtual terrains, resembling those found on Mars, for the verification of vision-based control algorithms. / Validated; 20120821 (anonymous)
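A minimal sketch of the kind of feature-based comparison described above, using OpenCV's SIFT implementation to detect and match keypoints between a real image and its virtual counterpart; the file names and the ratio-test threshold are assumptions, not values from the thesis.

```python
import cv2

# Assumed file names for one real/virtual image pair captured at the same pose.
real = cv2.imread("real_view.png", cv2.IMREAD_GRAYSCALE)
virtual = cv2.imread("virtual_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_r, des_r = sift.detectAndCompute(real, None)
kp_v, des_v = sift.detectAndCompute(virtual, None)

# Lowe's ratio test on brute-force k-NN matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_r, des_v, k=2)
        if m.distance < 0.75 * n.distance]

print(f"real: {len(kp_r)} keypoints, virtual: {len(kp_v)} keypoints, "
      f"{len(good)} ratio-test matches")
```

Comparing keypoint counts and match rates across many such pairs gives a simple, repeatable indicator of how closely the virtual terrain reproduces the real testbed for vision algorithms.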
26

Sentiment Analysis on Multi-view Social Data

Niu, Teng January 2016 (has links)
With the proliferation of social networks, people are likely to share their opinions about news, social events and products on the Web. There is increasing interest in understanding users' attitudes or sentiment from the large repository of opinion-rich data on the Web, which can benefit many commercial and political applications. Researchers initially concentrated on text, such as users' comments on purchased products. Recent work shows that visual appearance also conveys rich human affect that can be predicted. While great effort has been devoted to single media, either text or images, few attempts have been made at the joint analysis of multi-view data, which is becoming a prevalent form in social media. For example, paired with the textual messages they post on Twitter, users are likely to upload images and videos which may carry their affective states. One common obstacle is the lack of sufficient manually annotated instances for model learning and performance evaluation. To promote research on this problem, we introduce a multi-view sentiment analysis dataset (MVSA) consisting of manually annotated image-text pairs collected from Twitter. The dataset can be used as a valuable benchmark for both single-view and multi-view sentiment analysis. In this thesis, we further conduct a comprehensive study on the computational analysis of sentiment from multi-view data. State-of-the-art approaches on single-view (image or text) and multi-view (image and text) data are introduced and compared through extensive experiments on our constructed dataset and other public datasets. More importantly, the effectiveness of the correlation between different views is also studied using widely used fusion strategies and advanced multi-view feature extraction methods.
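As a toy illustration of late fusion, one of the simpler multi-view strategies alluded to above, the snippet below combines independent per-view sentiment scores with a weighted average; the scores, weights and thresholds are invented and are not the MVSA baselines.

```python
def fuse_sentiment(text_score: float, image_score: float,
                   w_text: float = 0.6, w_image: float = 0.4) -> str:
    """Late fusion of per-view sentiment scores in [-1, 1] into a coarse label."""
    fused = w_text * text_score + w_image * image_score
    if fused > 0.1:
        return "positive"
    if fused < -0.1:
        return "negative"
    return "neutral"

# Example: text strongly positive, image mildly negative.
print(fuse_sentiment(0.8, -0.2))
```

Early fusion (concatenating text and image features before classification) and learned joint representations are the obvious alternatives that the thesis compares against this kind of decision-level combination.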
27

Multi-view and three-dimensional (3D) images in wear debris analysis (WDA)

Mat Dan, Reduan January 2013 (has links)
Wear debris found in gear lubricating oil provides extremely valuable information on the nature and severity of gear faults as well as remaining gear life. The conventional off-line process of taking oil samples for wear debris testing is a hindrance because it is laborious, expensive, delays information collection, and requires expert interpretation. In view of these limitations, the development of automated wear debris particle analysis using various approaches has been ongoing for years. However, existing online technology does not encourage widespread use of wear debris analysis (WDA) in industry: high costs, coupled with expert and labour requirements, have led users to adopt other types of condition-based maintenance, such as vibration monitoring. There is a need for a WDA technique that is relatively cheap, works online, requires little expertise to operate, and provides more information for maintenance decision-making. This PhD thesis proposes a WDA technique which uses image processing and three-dimensional image reconstruction to diagnose the health of machinery. Its emphasis is on using the thickness and volume of the particles generated over time to predict the onset of gearbox failure, so that maintenance action can be taken before the gears reach catastrophic failure.
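The thickness-and-volume trending described in the last sentence could, in principle, start from something like the sketch below, which labels debris particles in a per-pixel thickness map and integrates thickness over each one; the threshold and pixel size are assumptions, not values from the thesis.

```python
import numpy as np
from scipy import ndimage

def particle_volumes(thickness_map: np.ndarray,
                     pixel_area_mm2: float = 0.01,
                     min_thickness_mm: float = 0.02):
    """Label connected debris particles in a thickness map (mm per pixel)
    and integrate thickness over each particle to estimate its volume."""
    mask = thickness_map > min_thickness_mm          # crude segmentation
    labels, n = ndimage.label(mask)                  # connected components
    volumes = ndimage.sum(thickness_map * pixel_area_mm2,
                          labels, range(1, n + 1))   # mm^3 per particle
    return list(volumes)
```

Tracking the distribution of such per-particle volumes over successive oil samples is the kind of trend that would feed the failure-onset prediction.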
28

Aprendizado de máquina parcialmente supervisionado multidescrição para realimentação de relevância em recuperação de informação na WEB / Partially supervised multi-view machine learning for relevance feedback in WEB information retrieval

Matheus Victor Brum Soares 28 May 2009 (has links)
As the Web is nowadays the most common source of information, it is very important to find reliable and efficient methods to retrieve it. The Web, however, is a highly volatile and heterogeneous information source, so keyword-based querying may not be the best approach when little information is given: different users with different needs may want distinct, although related, information for the same keyword query. The process of relevance feedback makes it possible for the user to interact actively with the search engine. The main idea is that, after an initial Web search, the user indicates, among the retrieved sites, a small number considered relevant or irrelevant to his or her information need. The user's preferences can then be used to rearrange the sites returned by the initial search, so that relevant sites are ranked first. Since a search usually returns a large number of Web sites matching the keyword query, of which the user labels only a few as relevant or irrelevant, this is an ideal situation for partially supervised machine learning algorithms, which require a small number of labeled examples and a large number of unlabeled examples. Thus, based on the assumption that partially supervised learning is appropriate for inducing a classifier that can be used as a relevance-feedback filter for Web searches, the aim of this work is to explore partially supervised machine learning algorithms, more specifically those that use multi-view (multi-description) data, to assist Web search. To this end, a computational tool called C-SEARCH, which reorders the search results using the user's feedback, has been implemented. Experimental results show that in cases where the keyword query is generic and there is a clear distinction between relevant and irrelevant sites, recognized by the user, the system can achieve good results.
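A minimal co-training-style sketch of the partially supervised, multi-view idea described above (a few labelled examples, many unlabelled ones, two feature views); the synthetic data, view split and single-example pseudo-labelling schedule are invented, and this is not the C-SEARCH implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in data: 200 "sites", two 5-dimensional feature views.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y_true = (X[:, 0] + X[:, 5] > 0).astype(int)

y_work = np.full(200, -1)            # -1 means "unlabelled"
labelled = list(range(20))           # pretend the user labelled 20 results
y_work[labelled] = y_true[labelled]
unlabelled = list(range(20, 200))
views = [slice(0, 5), slice(5, 10)]  # the two feature "descriptions"
clfs = [GaussianNB(), GaussianNB()]

for _ in range(10):                  # a few co-training rounds
    for v, clf in zip(views, clfs):
        clf.fit(X[labelled, v], y_work[labelled])
    if not unlabelled:
        break
    # Each view pseudo-labels its single most confident unlabelled example.
    newly = set()
    for v, clf in zip(views, clfs):
        proba = clf.predict_proba(X[unlabelled, v])
        i = int(np.argmax(proba.max(axis=1)))
        idx = unlabelled[i]
        y_work[idx] = clf.classes_[int(np.argmax(proba[i]))]
        newly.add(idx)
    labelled += list(newly)
    unlabelled = [i for i in unlabelled if i not in newly]

print("examples pseudo-labelled so far:", len(labelled) - 20)
```

The enlarged labelled set can then train the final classifier that re-ranks the retrieved sites for the user.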
29

Studies on Neural Network-Based Graph Embedding and Its Extensions / ニューラルネットワークに基づくグラフ埋め込みとその拡張に関する研究

Okuno, Akifumi 23 September 2020 (has links)
Kyoto University / 0048 / Doctoral program (new system) / Doctor of Informatics / Degree No. 22807 (Kō) / Informatics Doctorate No. 737 / 新制||情||126 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Hidetoshi Shimodaira, Professor Toshiyuki Tanaka, Professor Hisashi Kashima / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
30

Graph-based Multi-view Clustering for Continuous Pattern Mining

Åleskog, Christoffer January 2021 (has links)
Background. In many smart monitoring applications, such as smart healthcare, smart buildings and autonomous cars, data are collected from multiple sources and contain information about different perspectives or views of the monitored phenomenon, physical object or system. In addition, in many of these applications the availability of relevant labelled data is low or even non-existent. Inspired by this, in this thesis we propose a novel algorithm for multi-view stream clustering. The algorithm can be applied for continuous pattern mining and labeling of streaming data.
Objectives. The main objective of this thesis is to develop and implement a novel multi-view stream clustering algorithm. In addition, the potential of the proposed algorithm is studied and evaluated on two datasets, one synthetic and one real-world. The conducted experiments compare the new algorithm's performance with that of a single-view clustering algorithm and of a variant that does not transfer knowledge between chunks. Finally, the obtained results are analyzed, discussed and interpreted.
Methods. Initially, we study state-of-the-art multi-view (stream) clustering algorithms. We then develop our multi-view clustering algorithm for streaming data by implementing a knowledge-transfer feature. We present and explain the developed algorithm in detail, motivating each choice made during the algorithm design phase. Finally, the algorithm configuration, the experimental setup and the datasets chosen for the experiments are presented and motivated.
Results. Different configurations of the proposed algorithm have been studied and evaluated under different experimental scenarios on the two datasets. The proposed multi-view clustering algorithm demonstrated higher performance on the synthetic data than on the real-world dataset, mainly because of the lower quality of the real-world data used.
Conclusions. The proposed algorithm demonstrated higher performance on the synthetic dataset than on the real-world dataset. It can generate high-quality clustering solutions with respect to the evaluation metrics used. In addition, the knowledge-transfer feature was shown to have a positive effect on the algorithm's performance. A further study of the proposed algorithm on other, richer and more suitable datasets, e.g. data collected from numerous sensors monitoring some phenomenon, is planned as future work.
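A small sketch of the chunk-to-chunk knowledge transfer discussed above: each incoming chunk is clustered with k-means seeded by the previous chunk's centroids. The chunk size, number of clusters and synthetic data are assumptions, and this shows a single view only, not the full multi-view algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def stream_cluster(chunks, k=3):
    """Cluster each chunk, seeding k-means with the previous chunk's centroids."""
    prev_centroids = None
    for chunk in chunks:
        if prev_centroids is None:
            km = KMeans(n_clusters=k, n_init=10, random_state=0)
        else:
            # Transfer knowledge: start from where the last chunk's clusters were.
            km = KMeans(n_clusters=k, init=prev_centroids, n_init=1)
        km.fit(chunk)
        prev_centroids = km.cluster_centers_
        yield km.labels_

# Toy stream: five chunks of 4-dimensional data drifting slowly over time.
rng = np.random.default_rng(1)
chunks = [rng.normal(loc=t * 0.1, size=(100, 4)) for t in range(5)]
for t, labels in enumerate(stream_cluster(chunks)):
    print(f"chunk {t}: cluster sizes {np.bincount(labels, minlength=3)}")
```

Seeding from the previous chunk keeps cluster identities roughly stable across chunks, which is what makes the per-chunk labels usable for continuous pattern mining.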
