171

Webové rozhraní pro zpracování obrazu / Web Interface for Image Processing

Beran, Milan January 2010 (has links)
This paper concerns the design and implementation of a system that provides easier control of console applications for digital image processing. The work draws on three information technology domains: distributed systems, image processing and web technologies. The system consists of a number of separate components that communicate with each other to process the requested tasks. The control interface and the task daemon are implemented in PHP. The image processing programs are implemented in C using the OpenCV graphics library. The system is controlled through a graphical web interface with dynamic controls built in JavaScript using the jQuery library and the jQueryUI interface. The work also describes practical deployment of the system in two environments, experiments on system performance, and user acceptance testing of the web interface.
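As a rough illustration of the daemon-to-console-program handoff described above (the thesis itself uses a PHP daemon driving C/OpenCV binaries), a task runner might look like the following Python sketch; the program name "edge_detect" and its arguments are hypothetical placeholders.

```python
# Illustrative sketch only: a queued task is executed by invoking a console
# image-processing program; "edge_detect" is a hypothetical binary name.
import subprocess
from pathlib import Path

def run_task(task_id: str, input_image: Path, output_dir: Path) -> int:
    """Run one image-processing task and return the program's exit code."""
    output_dir.mkdir(parents=True, exist_ok=True)
    result = subprocess.run(
        ["edge_detect", str(input_image), str(output_dir / f"{task_id}.png")],
        capture_output=True, text=True,
    )
    # A real daemon would store stdout/stderr and the exit code with the task record.
    return result.returncode
```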
172

Rozpoznávání topologických informací z plánu křižovatky / Topology Recognition from Crossroad Plan

Huták, Petr January 2016 (has links)
This master's thesis describes the research, design and development of a system for topology recognition from a crossroad plan. It explains the methods used for image processing, image segmentation and object recognition. It describes approaches to processing maps represented by raster images, and the target software into which the final product of the practical part of the project will be integrated. The thesis focuses mainly on comparing different approaches to extracting features from raster maps and determining their semantic meaning. The practical part of the project is implemented in C# with the OpenCV library.
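A minimal sketch of one way such feature extraction from a raster plan can start, using OpenCV's Python bindings rather than the thesis's C# implementation; the file name and the OpenCV 4 call signatures are assumptions.

```python
import cv2

# Binarise the drawing (dark strokes on a light background) and extract contours;
# each contour is a candidate map feature for later semantic classification.
img = cv2.imread("crossroad_plan.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"feature at ({x},{y}), size {w}x{h}, area {cv2.contourArea(c):.0f}")
```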
173

Detekce a identifikace typu obratle v CT datech onkologických pacientů / Vertebra detection and identification in CT oncological data

Věžníková, Romana January 2017 (has links)
Automated spine or vertebra detection and segmentation from CT images is a difficult task for several reasons. One reason is unclear vertebra boundaries and indistinct boundaries between adjacent vertebrae; others are image artifacts and a high degree of anatomical complexity. This paper describes the design and implementation of vertebra detection and classification in CT images of cancer patients, which adds to the complexity because some of the vertebrae are deformed. Otsu's method is used for vertebra segmentation. Vertebra detection is based on searching for borders between individual vertebrae in the sagittal planes. Decision trees or the generalized Hough transform are applied for identification, while the vertebra search is based on the similarity between each vertebra model shape and the planes of the CT scans.
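Otsu's method mentioned above picks the threshold that minimises intra-class intensity variance, so bright bone separates from darker soft tissue without a hand-tuned value. A minimal OpenCV sketch on a single exported slice (the file name is a placeholder; the thesis works on full CT volumes):

```python
import cv2

slice_img = cv2.imread("slice_sagittal.png", cv2.IMREAD_GRAYSCALE)
# The threshold value 0 is ignored when THRESH_OTSU is set; Otsu picks it automatically.
threshold, bone_mask = cv2.threshold(slice_img, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("Otsu threshold:", threshold)
```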
174

Detekce poznávací značky v obraze / Image-Based Licence Plate Recognition

Vacek, Michal January 2009 (has links)
The first part of the thesis surveys known methods of license plate detection: preprocessing-based methods, AdaBoost-based methods and extremal region detection methods. The thesis then describes and implements the author's own approach, which uses local detectors to build a visual vocabulary that is then used for plate recognition. All measurements are summarized at the end.
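The "local detectors to visual vocabulary" idea can be sketched as detecting local features and clustering their descriptors into a codebook. The following Python illustration uses ORB and k-means, which is an assumption on my part and not the author's exact detectors or implementation; the file names are placeholders.

```python
import cv2
import numpy as np

images = ["plate_001.png", "plate_002.png"]  # placeholder file names
orb = cv2.ORB_create()
descriptors = []
for name in images:
    img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    if des is not None:
        descriptors.append(des.astype(np.float32))

# Cluster all descriptors into 50 "visual words"; each image is then represented
# as a histogram of word occurrences for plate/non-plate classification.
all_des = np.vstack(descriptors)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, vocabulary = cv2.kmeans(all_des, 50, None, criteria, 3,
                                   cv2.KMEANS_RANDOM_CENTERS)
print("vocabulary shape:", vocabulary.shape)
```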
175

Rekonstrukce 3D objektu z obrazových dat / 3D Objects Reconstruction from Image Data

Cír, Filip January 2008 (has links)
This paper deals with 3D reconstruction of objects from image data. It describes the theoretical basis of 3D optical scanning and a handheld 3D optical scanner setup composed of a single camera and a line laser whose position is fixed with respect to the camera. A set of image markers and a simple real-time detection algorithm are proposed; the detected markers are used to estimate the position and orientation of the camera. Finally, laser detection and triangulation of points lying on the object surface are discussed.
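The triangulation step can be summarised as a ray-plane intersection: a detected laser pixel defines a viewing ray, and intersecting it with the known laser plane yields a surface point. A small numpy sketch, with made-up intrinsics and plane parameters:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # placeholder camera intrinsics
plane_n = np.array([0.8, 0.0, -0.6])        # laser plane normal (camera frame)
plane_d = 0.3                               # plane equation: n . X + d = 0

def triangulate(u: float, v: float) -> np.ndarray:
    """Back-project pixel (u, v) and intersect the ray with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = -plane_d / (plane_n @ ray)
    return t * ray                          # 3D point in the camera frame

print(triangulate(350.0, 260.0))
```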
176

Využití GPU pro algoritmy grafiky a zpracování obrazu / Exploitation of GPU in graphics and image processing algorithms

Jošth, Radovan January 2015 (has links)
This thesis describes several selected algorithms that were originally developed for CPUs; given the high demand for their improvement, we decided to adapt them for GPGPU (general-purpose computing on graphics processors). Modifying these algorithms was also the goal of our research, which was carried out using the CUDA interface. The thesis is organized around the three groups of algorithms we addressed: real-time object detection, spectral image analysis and real-time line detection. For real-time object detection we chose the LRD and LRP features. The research on spectral image analysis was carried out using the PCA and NTF algorithms. For real-time line detection we used two different modified accumulation schemes of the Hough transform. Before the part of the thesis devoted to the individual algorithms and the subject of study, the introductory chapters, following the chapter explaining the motivation for the chosen topics, give a brief overview of GPU and GPGPU architecture. The final chapters specify the author's own contribution, its focus, the results achieved and the approach chosen to achieve them. The results include several developed products.
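For reference, the standard CPU formulation of the Hough line accumulation that the thesis's GPU schemes modify can be run with OpenCV in a few lines; the input image and the thresholds below are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
# Each edge pixel votes for all (rho, theta) lines passing through it.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```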
177

Schädigungsprognose mittels Homogenisierung und mikromechanischer Materialcharakterisierung / Damage prediction by means of homogenization and micromechanical material characterization

Goldmann, Joseph 01 October 2018 (has links)
In der vorliegenden Arbeit wird die Frage untersucht, ob effektive Eigenschaften von Verbunden auch nach dem Auftreten einer Dehnungslokalisierung aufgrund von entfestigendem Materialverhalten noch durch numerische Homogenisierungsmethoden berechnet werden können. Ihr Nutzen für diesen Anwendungsfall wird in der Literatur kritisch beurteilt. Aus diesem Grund werden hier systematisch alle Teilaufgaben betrachtet, die zu diesem Zweck gelöst werden müssen. Die erste dieser Aufgaben ist die Charakterisierung der einzelnen Verbundbestandteile. Zur Demonstration einer experimentell gestützten Charakterisierung wird ein glasfaserverstärktes Epoxidharz als Beispielmaterial gewählt. Neben der Beschreibung von Faser- und Matrixmaterial wird besonderes Augenmerk auf die Charakterisierung der Grenzschicht zwischen beiden gelegt. Die für die Glasfasern vorliegenden Festigkeitsmessungen entsprechen nicht der Kettenhypothese. Daher werden zahlreiche Verallgemeinerungen der Weibull-Verteilung untersucht, um störende Effekte zu erfassen. Schließlich werden Wahrscheinlichkeitsverteilungen hergeleitet, die Faserbrüche im Bereich der Einspannung einbeziehen. Die Messwerte können von diesen Verteilungen gut wiedergegeben werden. Zusätzlich macht ihre Anwendung das aufwändige Aussortieren und Wiederholen jener Experimente unnötig, bei denen der Faserbruch im Klemmbereich auftritt. Zur Modellierung der Grenzfläche wird ein Kohäsivzonengesetz entwickelt. Die Bestimmung seiner Parameter erfolgt anhand von Daten aus Pullout- und Einzelfaserfragmentierungsversuchen. Aus diesen ermittelte Festigkeiten und Energiefreisetzungsraten weisen eine sehr gute Übereinstimmung zwischen beiden Versuchen auf. Dabei erfolgt die Parameteridentifikation mithilfe von Finite-Elemente-Modellen anstatt der häufig genutzten vereinfachten analytischen Modelle, welche üblicherweise eine schlechtere Übereinstimmung erreichen. Sobald eine Dehnungslokalisierung auftritt, ist neben der Materialmodellierung auch das Homogenisierungsschema zu verallgemeinern. Zu diesem gehören die Generierung repräsentativer Volumenelemente, Randbedingungen (RB) und ein Mittelungsoperator. Anhand des aktuellen Standes der Literatur werden die Randbedingungen als ein signifikanter Schwachpunkt von Homogenisierungsverfahren erkannt. Daher erfolgt die Untersuchung periodischer RB, linearer Verschiebungsrandbedingungen und minimal kinematischer RB sowie zweier adaptiver RB, nämlich Lokalisierungspfad-ausgerichteter RB und generalisiert periodischer RB. Unter der Bezeichnung Tesselationsrandbedingungen wird ein weiterer Typ adaptiver RB vorgeschlagen. Zunächst erfolgt der Beweis, dass alle drei adaptiven RB die Hill-Mandel-Bedingung erfüllen. Des Weiteren wird mittels einer Modifikation der Hough-Transformation ein systematischer Fehler derselben bei der Bestimmung der Richtung von Lokalisierungszonen eliminiert. Schließlich werden die Eigenschaften aller Randbedingungen an verschiedenen Beispielen demonstriert. Dabei zeigt sich, dass nur Tesselationsrandbedingungen sowohl beliebige Richtungen von Lokalisierungszonen erlauben als auch fehlerhafte Lokalisierungen in Eckbereichen ausschließen. Zusammengefasst können in der Literatur geäußerte grundlegende Einschränkungen hinsichtlich der Anwendbarkeit numerischer Homogenisierungsverfahren beim Auftreten von Dehnungslokalisierungen aufgehoben werden. Homogenisierungsmethoden sind somit auch für entfestigendes Materialverhalten anwendbar. 
/ The thesis at hand is concerned with the question if numerical homogenization schemes can be of use in deriving effective material properties of composite materials after the onset of strain localization due to strain softening. In this case, the usefulness of computational homogenization methods has been questioned in the literature. Hence, all the subtasks to be solved in order to provide a successful homogenization scheme are investigated herein. The first of those tasks is the characterization of the constituents, which form the composite. To allow for an experimentally based characterization an exemplary composite has to be chosen, which herein is a glass fiber reinforced epoxy. Hence the constituents to be characterized are the epoxy and the glass fibers. Furthermore, special attention is paid to the characterization of the interface between both materials. In case of the glass fibers, the measured strength values do not comply with the weakest link hypothesis. Numerous generalizations of the Weibull distribution are investigated, to account for interfering effects. Finally, distributions are derived, that incorporate the possibility of failure inside the clamped fiber length. Application of such a distribution may represent the measured data quite well. Additionally, it renders the cumbersome process of sorting out and repeating those tests unnecessary, where the fiber fails inside the clamps. Identifying the interface parameters of the proposed cohesive zone model relies on data from pullout and single fiber fragmentation tests. The agreement of both experiments in terms of interface strength and energy release rate is very good, where the parameters are identified by means of an evaluation based on finite element models. Also, the agreement achieved is much better than the one typically reached by an evaluation based on simplified analytical models. Beside the derivation of parameterized material models as an input, the homogenization scheme itself needs to be generalized after the onset of strain localization. In an assessment of the current state of the literature, prior to the generation of representative volume elements and the averaging operator, the boundary conditions (BC) are identified as a significant issue of such a homogenization scheme. Hence, periodic BC, linear displacement BC and minimal kinematic BC as well as two adaptive BC, namely percolation path aligned BC and generalized periodic BC are investigated. Furthermore, a third type of adaptive BC is proposed, which is called tesselation BC. Firstly, the three adaptive BC are proven to fulfill the Hill-Mandel condition. Secondly, by modifying the Hough transformation an unbiased criterion to determine the direction of the localization zone is given, which is necessary for adaptive BC. Thirdly, the properties of all the BC are demonstrated in several examples. These show that tesselation BC are the only type, that allows for arbitrary directions of localization zones, yet is totally unsusceptible to spurious localization zones in corners of representative volume elements. Altogether, fundamental objections, that have been raised in the literature against the application of homogenization in situations with strain localization, are rebutted in this thesis. Hence, the basic feasibility of homogenization schemes even in case of strain softening material behavior is shown.
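Two textbook relations behind the steps described above, given here in their standard forms (the thesis's own notation and generalizations may differ): the weakest-link Weibull distribution for fiber strength, and the Hill-Mandel macro-homogeneity condition that the proposed boundary conditions are shown to satisfy.

$$P_f(\sigma) = 1 - \exp\!\left[-\frac{L}{L_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right], \qquad \bar{\boldsymbol{\sigma}} : \delta\bar{\boldsymbol{\varepsilon}} = \frac{1}{V}\int_{V} \boldsymbol{\sigma} : \delta\boldsymbol{\varepsilon}\,\mathrm{d}V$$

Here $L$ is the tested fiber length, $L_0$ and $\sigma_0$ are a reference length and strength, and $m$ is the Weibull modulus; the second relation states that the macroscopic virtual work density equals the volume average of the microscopic one over the representative volume element.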
178

由地面光達資料自動重建建物模型之研究 / Automatic Generation of Building Model from Ground-Based LIDAR Data

詹凱軒, Kai-Hsuan Chan Unknown Date (has links)
地面光達系統可以快速獲取大量且高精度之點雲資料，這些點雲資料不但記錄了被掃描物體之三維資訊，還包含其色彩訊息。但因光達點雲資料量過於龐大，若要直接於電腦上展示其三維模型，必須配合有效的資料處理技術，才能迅速且即時地將資料顯示於螢幕上。 我們針對地面光達系統獲取之建物點雲，提出一套處理方法，期盼透過少數關鍵點雲，就足以表示整個建物的模型。研究流程主要分為三階段，首先採用三維網格資料結構，從地面光達系統獲取之建物點雲中，萃取出關鍵點雲，並利用三維不規則三角網建模方式，進行模型建構工作，產生建物大略模型。其次再逐點判斷是否將剩餘之點加入此模型中，持續更新模型細微之部分。最後將點雲中的色彩資訊轉成影像，敷貼在模型表面上，讓整個模型更為逼真。 我們以政大綜合大樓進行實驗，成功地減少大量冗餘的點雲資料，只需要約原始點雲的1%，就足以將綜合大樓模型建構完成。為了達到可以從不同視角即時瀏覽建物模型，我們採用虛擬實境語言(VRML)來描述處理後的三維模型，遠端使用者只需透過一般網頁瀏覽器，即可即時顯示處理過的三維建物模型。 / Ground-based LIDAR systems can be used to capture the surfaces of buildings and, in general, produce large amounts of high-precision point cloud data. These data include not only three-dimensional spatial information but also color information. However, the number of points is huge and difficult to display efficiently, so effective data processing techniques are necessary to display the point cloud data in real time. In this research, we construct a three-dimensional building model using key points selected from a given set of point cloud data. Our scheme consists of three parts. In the first part, we extract key points from the given point cloud data with the help of a three-dimensional grid; these key points are used to construct a primitive model of the building. Then, we check all the remaining points and decide whether they are essential to the final building model. Finally, we transform the color information into images and use the transformed images to represent the generic surface material of the three-dimensional model of the building, which makes the model more realistic. In the experiments, we used the twin-tower building of our university as the target. We successfully reduced the data required to display the building model: only about one percent of the original point cloud data is used in the final model, so one can view the twin tower from various viewpoints in real time. In addition, we use VRML to describe the model, and users can browse the results in real time over the internet.
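One simple way such a grid-based key-point reduction can work (a numpy sketch with random placeholder data, not the thesis's exact selection criterion): bucket the cloud into voxels and keep one representative point per occupied voxel, which already removes most of the redundancy before any mesh is built.

```python
import numpy as np

def voxel_keypoints(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Return the centroid of the points falling inside each occupied voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

cloud = np.random.rand(100_000, 3) * 10.0      # stand-in for a scanned facade
print(len(voxel_keypoints(cloud, voxel_size=0.5)), "key points kept")
```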
179

Reconstruction of trees from 3D point clouds

Stålberg, Martin January 2017 (has links)
The geometrical structure of a tree can consist of thousands, even millions, of branches, twigs and leaves in complex arrangements. The structure contains a lot of useful information and can be used, for example, to assess a tree's health or calculate parameters such as total wood volume or branch size distribution. Because of the complexity, capturing the structure of an entire tree used to be nearly impossible, but the increasing availability and quality of digital cameras and, in particular, Light Detection and Ranging (LIDAR) instruments are making it increasingly possible. A set of digital images of a tree, or a point cloud of a tree from a LIDAR scan, contains a lot of data, but the information about the tree structure has to be extracted from these data through analysis. This work presents a method of reconstructing 3D models of trees from point clouds. The model is constructed from cylindrical segments which are added one by one. Bayesian inference is used to determine how to optimize the parameters of model segment candidates and whether or not to accept them as part of the model. A Hough transform for finding cylinders in point clouds is presented and used as a heuristic to guide the proposals of model segment candidates. Previous related works have mainly focused on high density point clouds of sparse trees, whereas the objective of this work was to analyze low resolution point clouds of dense almond trees. The method is evaluated on artificial and real datasets and works rather well on high quality data, but performs poorly on low resolution data with gaps and occlusions.
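The accept/reject step for a single cylinder-segment candidate can be sketched as scoring the radial residuals of nearby points under a Gaussian noise model; the parameters and toy data below are illustrative and not the thesis's actual Bayesian model.

```python
import numpy as np

def cylinder_log_likelihood(points, axis_point, axis_dir, radius, sigma=0.01):
    """Log-likelihood of points lying on a cylinder with Gaussian radial noise."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    radial = np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)  # distance to axis
    residual = radial - radius
    return -0.5 * np.sum((residual / sigma) ** 2) - points.shape[0] * np.log(sigma)

# Toy data: noisy points around a vertical cylinder of radius 0.1.
angles = np.random.rand(200) * 2 * np.pi
pts = np.c_[0.1 * np.cos(angles), 0.1 * np.sin(angles), np.random.rand(200)]
pts += np.random.normal(scale=0.005, size=pts.shape)

ll = cylinder_log_likelihood(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.1)
print("log-likelihood of the candidate segment:", round(ll, 1))
```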
180

Rozpoznávání ručně psaného písma pomocí neuronových sítí / Handwritten Character Recognition Using Artificial Neural Networks

Horký, Vladimír January 2012 (has links)
This work presents neural networks trained with the back-propagation algorithm and explains the theoretical background of the algorithm, including the problems that arise when training neural networks. The work discusses techniques of image preprocessing and feature extraction, which is one of the main parts of classification, and reports several experiments with neural networks using the chosen image features.
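A bare-bones numpy sketch of the back-propagation update for a one-hidden-layer network, to make the training loop concrete; the shapes, learning rate and toy data are placeholders, and a real OCR network would take the extracted image features as input.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((32, 64))                    # 32 samples, 64 features (e.g. 8x8 glyphs)
T = np.eye(10)[rng.integers(0, 10, 32)]     # one-hot targets for 10 classes
W1 = rng.normal(0.0, 0.1, (64, 30))
W2 = rng.normal(0.0, 0.1, (30, 10))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    H = sigmoid(X @ W1)                     # forward pass
    Y = sigmoid(H @ W2)
    dY = (Y - T) * Y * (1 - Y)              # backward pass: output layer error
    dH = (dY @ W2.T) * H * (1 - H)          # propagated to the hidden layer
    W2 -= 0.5 * H.T @ dY                    # gradient-descent weight updates
    W1 -= 0.5 * X.T @ dH

print("final squared error:", float(np.sum((Y - T) ** 2)))
```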
