About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Carotid plaque vulnerability assessment by microscopic morphology analysis, ultrasound and 3D model reconstruction

Choudhury, Ahsan Raza January 2012 (has links)
Research suggests that plaque morphology plays a crucial role in determining plaque vulnerability. However, the relationship between plaque morphology and rupture is still not clearly understood, owing to the limited information available on plaque morphology. The aim of this study is to improve our understanding of the relationship between plaque morphology and rupture, and to use this understanding to predict the risk of plaque rupture from morphology at the molecular level. This can enable the identification of culprit lesions in clinical situations for assessing plaque rupture risk. Histological assessments were carried out on 18 carotid plaque specimens. The 3D collagen, lipid and macrophage distributions along the entire length of the plaque were analysed in both ruptured and non-ruptured symptomatic plaques. In addition, plaque morphology at the rupture sites was examined and compared with the surrounding regions. It was found that ruptured plaques had thinner fibrous caps and larger lipid cores than non-ruptured plaques. Ruptured plaques also had lower collagen content than non-ruptured plaques, and higher collagen content upstream of the plaque throat than downstream. At the rupture site, collagen content was lower and the lipid core behind the thin fibrous cap was thicker than the mean of the longitudinally adjacent and circumferential regions. Macrophages were located nearer to the luminal wall boundary in ruptured plaques. For both groups, the area occupied by macrophages was greater at the upstream shoulder of the plaque. For both plaque groups there was a positive correlation between macrophage area and lipid core area, a negative correlation between macrophage area and collagen content, and a negative correlation between lipid core size and collagen content. 3D reconstruction of ex-vivo carotid plaque specimens was carried out by a combined analysis of ultrasound (US) imaging and histology. To reconstruct accurate 3D plaque morphology, the non-linear tissue distortion in histological images caused by specimen preparation was corrected by a finite element (FE) based deformable registration procedure. This study shows that it is possible to generate a 3D patient-specific plaque model using this method. In addition, the study quantitatively assesses the tissue distortion caused by histological procedures, showing that at least 30% tissue shrinkage is to be expected for plaque tissues. The histology analysis was also used to evaluate the accuracy of US tissue characterization. An ex-vivo 2D ultrasound scan set-up was used to obtain serial transverse images through an atherosclerotic plaque. The plaque component regions obtained from the ultrasound images were compared with the corresponding histology results and photographs of the sections. Plaque tissue characterisation using ex-vivo US can be performed qualitatively, whereas lipid core assessment from ultrasound scans can be semi-quantitative. This finding, combined with the negative correlation between lipid core size and collagen content, suggests that US can indirectly quantify plaque collagen content. This study may serve as a platform for future studies on improving ultrasound tissue characterization, and may also potentially be used in risk assessment of plaque rupture.
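The correlation findings reported above (macrophage area vs. lipid core area, and the negative correlations with collagen content) correspond to a standard Pearson analysis. A minimal sketch in Python, using hypothetical per-section measurements rather than the study's data, could look like this:

```python
# A minimal sketch of the kind of correlation analysis described above.
# The measurement arrays are hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import pearsonr

# One value per histological section (arbitrary illustrative units)
macrophage_area = np.array([1.2, 2.5, 3.1, 0.8, 4.0, 2.2])
lipid_core_area = np.array([10.5, 14.2, 18.9, 8.1, 21.3, 13.7])
collagen_content = np.array([35.0, 28.4, 22.1, 41.2, 18.5, 30.0])

for name, values in [("lipid core area", lipid_core_area),
                     ("collagen content", collagen_content)]:
    r, p = pearsonr(macrophage_area, values)
    print(f"macrophage area vs {name}: r={r:+.2f}, p={p:.3f}")
```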
2

Automatic image to model alignment for photo-realistic urban model reconstruction

Partington, Mike 01 January 2001 (has links)
We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters, and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing over a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping.

Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing point based calibration refinement and video stream based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post-processing steps such as automatic model enhancement and fly-through model visualization.

Traditionally, photo-realistic urban reconstruction has been approached from purely image-based or model-based approaches. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
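The alignment step described above amounts to minimizing a multivariate objective over the six external camera parameters. The sketch below illustrates that structure with a generic reprojection-error objective and scipy.optimize.minimize; the model points, image points, and intrinsics are hypothetical, and the thesis's actual facade-based objective is not reproduced here.

```python
# Sketch of pose refinement by minimizing a multivariate objective,
# in the spirit of the facade-alignment step described above.
# Camera model, points, and initial pose are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

model_pts = np.array([[0, 0, 10], [5, 0, 10], [5, 8, 10], [0, 8, 10]], float)  # facade corners (world)
image_pts = np.array([[320, 240], [420, 238], [418, 90], [322, 92]], float)    # detected corners (pixels)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)                 # intrinsics (assumed known)

def reprojection_error(pose):
    """pose = [rx, ry, rz, tx, ty, tz]; returns summed squared pixel error."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = (R @ model_pts.T).T + pose[3:]          # world -> camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]             # perspective divide
    return np.sum((proj - image_pts) ** 2)

x0 = np.zeros(6)                                   # approximate external parameters
result = minimize(reprojection_error, x0, method="Nelder-Mead")
print("refined pose:", result.x, "residual:", result.fun)
```

In the thesis the objective is built from facade-to-model alignment rather than known point correspondences, but the optimization structure is the same.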
3

Quantitative image based modelling of food on a plate

M. Fard, Farhad January 2012 (has links)
The main purpose of this work is to reconstruct a 3D model of an entire scene using two ordinary cameras. We develop a mobile phone application, based on stereo vision and image analysis algorithms, executed either locally or on a remote host, to calculate dietary intake using the current questionnaire and the mobile phone photographs. The information from the segmented 3D models is used to calculate the volume, and then the calories, of a person's daily food intake. The method is tested on different solid food samples in different camera arrangements. The results show that the method successfully reconstructs 3D models of the different food samples in high detail.
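A rough sketch of such a stereo pipeline, from disparity to depth to a volume and calorie estimate, is given below. All names and constants (image paths, focal length, baseline, energy density) are illustrative assumptions, not values from the thesis:

```python
# Sketch of a stereo-based food volume / calorie estimate, assuming
# rectified grayscale images and known camera geometry. Image paths and
# all constants (focal length, baseline, density) are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point

focal_px, baseline_m = 800.0, 0.06           # hypothetical calibration
with np.errstate(divide="ignore"):
    depth = focal_px * baseline_m / disparity  # metres; invalid where disparity <= 0

# Segment the food region (here: a placeholder threshold mask, standing in
# for the thesis's image segmentation) and integrate the height of the food
# above the plate over each pixel's footprint.
mask = disparity > 1.0
plate_depth = np.median(depth[mask])          # approximate plate distance
height = np.clip(plate_depth - depth, 0, None)
pixel_area_m2 = (plate_depth / focal_px) ** 2
volume_m3 = float(np.sum(height[mask]) * pixel_area_m2)

kcal_per_ml = 1.5                             # hypothetical energy density of the sample
print(f"volume ~ {volume_m3 * 1e6:.0f} ml, energy ~ {volume_m3 * 1e6 * kcal_per_ml:.0f} kcal")
```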
4

PyMORESANE: A Pythonic and CUDA-accelerated implementation of the MORESANE deconvolution algorithm

Kenyon, Jonathan January 2015 (has links)
The inadequacies of the current generation of deconvolution algorithms are rapidly becoming apparent as new, more sensitive radio interferometers are constructed. In light of these inadequacies, there is renewed interest in the field of deconvolution. Many new algorithms are being developed using the mathematical framework of compressed sensing. One such technique, MORESANE, has recently been shown to be a powerful tool for the recovery of faint diffuse emission from synthetic and simulated data. However, the original implementation is not well-suited to large problem sizes due to its computational complexity. Additionally, its use of proprietary software prevents it from being freely distributed and used. This has motivated the development of a freely available Python implementation, PyMORESANE. This thesis describes the implementation of PyMORESANE as well as its subsequent augmentation with MPU and GPGPU code. These additions accelerate the algorithm and thus make it competitive with its legacy counterparts. The acceleration of the algorithm is verified by means of benchmarking tests for varying image size and complexity. Additionally, PyMORESANE is shown to work not only on synthetic data, but also on real observational data. This verification means that the MORESANE algorithm, and consequently the PyMORESANE implementation, can be added to the current arsenal of deconvolution tools.
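Benchmarking of the kind described, runtime as a function of image size, can be set up with a plain timing harness. The sketch below times an FFT-based convolution (a core cost in deconvolution) as a stand-in workload; it does not use PyMORESANE's actual API:

```python
# Generic benchmark harness of the kind used to compare runtimes across
# image sizes. The FFT convolution is only a stand-in workload, not
# PyMORESANE's implementation.
import time
import numpy as np

def fft_convolve(image, psf):
    """Convolve an image with a PSF via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

rng = np.random.default_rng(0)
for size in (256, 512, 1024, 2048):
    image = rng.standard_normal((size, size))
    psf = rng.standard_normal((size, size))
    start = time.perf_counter()
    for _ in range(5):
        fft_convolve(image, psf)
    elapsed = (time.perf_counter() - start) / 5
    print(f"{size}x{size}: {elapsed * 1e3:.1f} ms per convolution")
```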
5

A Study on Field Work Support in Nuclear Power Plants Utilizing 3D Reconstruction Model and Tagging / 3次元再構成モデルとタギングを活用した原子力発電プラントの現場作業支援に関する研究

Harazono, Yuki 23 May 2022 (has links)
Kyoto University / New-system doctorate by coursework / Doctor of Energy Science / Kō No. 24115 / Ene-Haku No. 448 / Shinsei||Ene||84 (University Library) / Department of Socio-Environmental Energy Science, Graduate School of Energy Science, Kyoto University / (Chief examiner) Prof. Hiroshi Shimoda; Prof. Hironobu Unesaki; Prof. Ken Kurosaki / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Energy Science / Kyoto University / DGAM
6

Binary level static analysis / Analyse statique au niveau binaire

Djoudi, Adel 02 December 2016 (has links)
Automatic software verification methods have seen increasing success since the early 2000s, following several industrial successes (Microsoft, Airbus, etc.). Static program analysis aims to automatically infer verified properties of programs from their descriptions. Standard static analysis techniques operate on the software source code, written for instance in C or Java. However, access to source code is not possible for many security-related applications, either because the source code is not available (mobile code, computer viruses) or because the developer does not disclose it (off-the-shelf components, third-party certification). This dissertation is concerned with the design and development of a static binary-code analysis platform for security analysis. Our contributions are made at three levels: semantics, implementation and static analysis.

First, the semantics of the analyzed binary programs is based on a generic, simple and concise formalism called DBA, extended in this dissertation with specification and abstraction mechanisms. A well-defined semantics for binary programs also requires an adequate memory model. We propose a new memory model adapted to binary-level requirements and inspired by recent work on low-level C. This new model enjoys the abstraction of the region-based memory model while keeping the expressiveness of the flat model.

Second, our binary code analysis platform BinSec offers three basic services: disassembly, simulation and static analysis. Each machine instruction is translated into a block of semantically equivalent DBA instructions. The platform handles a large part of the x86 instruction set. A simplification step eliminates useless intermediate calculations in order to ease further analyses; in particular, our simplifications eliminate up to 75% of flag updates.

Finally, we developed a static analysis engine for binary programs based on abstract interpretation. Besides abstract domains specifically adapted to binary analysis, we focused on user control of the trade-off between precision/correctness and efficiency. In addition, we propose an original approach for recovering high-level conditions from low-level conditions in order to enhance analysis precision. The approach is sound, efficient, platform-independent, and achieves a very high recovery ratio.
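The flag-update simplification can be pictured as a backward liveness pass over straight-line code: a flag assignment is dead if every flag it defines is overwritten before being read. The toy IR below is invented for illustration and is not BinSec's DBA:

```python
# Toy backward liveness pass eliminating dead flag updates in straight-line
# code, illustrating the kind of simplification described above. The tiny
# IR (lists of def/use sets) is invented for the example.
FLAGS = {"ZF", "CF", "SF", "OF"}

def eliminate_dead_flag_updates(block):
    """block: list of (defs, uses) set pairs; returns indices of kept instructions."""
    live = set()          # names possibly read later (flags assumed dead at block exit)
    kept = []
    for i in reversed(range(len(block))):
        defs, uses = block[i]
        flag_defs = defs & FLAGS
        if flag_defs and not defs - FLAGS and not flag_defs & live:
            continue      # pure flag update, never read before redefinition: drop it
        kept.append(i)
        live -= defs
        live |= uses
    return sorted(kept)

# add eax, ebx; cmp eax, 0; jz -> only the flags set by cmp are read by jz
block = [
    ({"eax"}, {"eax", "ebx"}),            # add eax, ebx (register result)
    ({"ZF", "CF", "SF", "OF"}, {"eax"}),  # flag updates from add (modeled separately)
    ({"ZF", "CF", "SF", "OF"}, {"eax"}),  # flag updates from cmp eax, 0
    (set(), {"ZF"}),                      # jz reads ZF
]
print(eliminate_dead_flag_updates(block))  # add's flag update is dropped -> [0, 2, 3]
```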
7

從多視角影像萃取密集影像對應 / Dense image matching from multi-view images

蔡瑞陽, Tsai, Jui Yang Unknown Date (has links)
In the construction of three-dimensional models, the selection and refinement of correspondences plays a very important role, since the accuracy of the corresponding points affects the quality of the entire reconstruction. In this thesis we propose a new approach that filters the visible images and refines the corresponding points in multi-view images by an epipolar transfer method. First, initial corresponding points are selected in one of two ways: by constructing three-dimensional patches with Furukawa's method and applying rotations and displacements to them, or by simply moving the corresponding points in the two-dimensional images. The proposed epipolar transfer method is then used to find appropriately located corresponding points. Next, for each 3D point, the visible images are checked again via epipolar transfer to verify that the position of the corresponding point in each visible image is appropriate, and a threshold is used to filter out unsuitable correspondences. The aim of this further refinement and screening is to locate the most accurate corresponding points through epipolar geometry. Experimental results show that the proposed refinement improves the accuracy of the corresponding points by nearly 15 percent.
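The epipolar consistency check at the heart of this filtering step can be sketched directly: a candidate correspondence is kept only if the point in the second image lies close to the epipolar line induced by its partner. The fundamental matrix F is assumed to be estimated beforehand:

```python
# Minimal sketch of filtering correspondences by epipolar consistency,
# in the spirit of the epipolar-transfer check described above. F is a
# pre-estimated fundamental matrix with the convention x2^T F x1 = 0.
import numpy as np

def epipolar_distance(F, pt1, pt2):
    """Distance (pixels) from pt2 to the epipolar line of pt1 in image 2."""
    x1 = np.array([pt1[0], pt1[1], 1.0])
    a, b, c = F @ x1                              # epipolar line [a, b, c] in image 2
    return abs(a * pt2[0] + b * pt2[1] + c) / np.hypot(a, b)

def filter_matches(F, matches, threshold=1.5):
    """Keep only (pt1, pt2) pairs within `threshold` pixels of the epipolar line."""
    return [(p1, p2) for p1, p2 in matches
            if epipolar_distance(F, p1, p2) < threshold]
```

The same distance test supports the visibility filtering described above: a visible image whose correspondence falls outside the threshold is discarded for that 3D point.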
8

Rechnergestützte Planung und Rekonstruktion für individuelle Langzeit-Knochenimplantate am Beispiel des Unterkiefers / Computer-aided planning and reconstruction of individual long-term bone implants, using the mandible as an example

Sembdner, Philipp 29 March 2017 (has links) (PDF)
This thesis deals with the development and implementation of methods and tools for providing models and boundary conditions for the design of individual long-term bone implants (design preparation). The underlying premise is that planning from the medical point of view (e.g. by a surgeon) and design under technical aspects (e.g. by a design engineer) are carried out separately. To this end, a planning concept is presented that comprises both the planned geometric features and further metadata (boundary conditions). These planning data are handed over to the design stage via a format specification developed for this purpose, in the context of the interface between physician and engineer. Furthermore, the need for special functions for the design of individual implants within the designer's working environment (e.g. a CAD modelling system) is discussed, using contour-line-based model reconstruction as an example. The overall basis is a fully digital process chain for data preparation, design and manufacturing. The applicability of the methods and of two implemented demonstrators was confirmed on a real patient case within an interdisciplinary project.
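Contour-line-based model reconstruction, as named above, stitches stacked planar contours into a surface. The following much-simplified sketch lofts synthetic circular contours into a triangle mesh; the dissertation's CAD-integrated workflow is not reproduced here:

```python
# Much-simplified sketch of contour-line-based model reconstruction:
# stacked planar contours with equal point counts are stitched into a
# triangle mesh. The contour data is synthetic.
import numpy as np

def loft_contours(contours):
    """contours: (n_slices, n_pts, 3) array of closed rings; returns (verts, faces)."""
    n_slices, n_pts, _ = contours.shape
    verts = contours.reshape(-1, 3)
    faces = []
    for s in range(n_slices - 1):
        for i in range(n_pts):
            a = s * n_pts + i
            b = s * n_pts + (i + 1) % n_pts
            c = (s + 1) * n_pts + i
            d = (s + 1) * n_pts + (i + 1) % n_pts
            faces.append((a, b, d))   # two triangles per quad between adjacent rings
            faces.append((a, d, c))
    return verts, np.array(faces)

# Synthetic example: circular cross-sections of varying radius along z
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
slices = [np.column_stack((r * np.cos(theta), r * np.sin(theta), np.full(32, float(z))))
          for z, r in [(0, 10), (1, 11), (2, 9)]]
verts, faces = loft_contours(np.stack(slices))
print(verts.shape, faces.shape)  # (96, 3) (128, 3)
```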
9

基於多視角幾何萃取精確影像對應之研究 / Accurate image matching based on multiple view geometry

謝明龍, Hsieh, Ming Lung Unknown Date (has links)
Recently, many researchers have worked on obtaining accurate point cloud data from multi-view images and on using these data for 3D model reconstruction. However, the accuracy of the 3D information recovered from multi-view images still needs to be improved. The methods for extracting image correspondences and for computing 3D point information are the most critical components: they directly determine the quality of the resulting point cloud and of the 3D models constructed from it. In this thesis, we propose new approaches, based on multi-view geometry, to improve the accuracy of corresponding points and 3D points. Mutual support transformation, dynamic Gaussian filtering, and a composite similarity evaluation function are used to improve patch-based matching for multi-view image correspondence. These mechanisms increase the discriminative power and reliability of the similarity measure and hence the accuracy of the extracted corresponding points. We also use a K-means algorithm and linear interpolation to find better 3D point candidates, so that the computed 3D points lie closer to the surface of the actual 3D object. Experimental results show that our mechanism improves the accuracy both of the corresponding points and of the 3D point cloud: the generated point cloud contains tens of thousands of accurate 3D points with only a few outliers.
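The K-means refinement step can be sketched as clustering the candidate 3D points produced for one surface point and keeping the centroid of the most populated cluster. The candidates, cluster count, and selection rule below are illustrative assumptions, not the thesis's exact parameters:

```python
# Minimal sketch of refining a reconstructed 3D point by clustering its
# candidates, in the spirit of the K-means step described above.
# The candidate points are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
true_point = np.array([1.0, 2.0, 3.0])
inliers = true_point + 0.01 * rng.standard_normal((40, 3))   # tight cluster near the surface
outliers = rng.uniform(-5, 5, size=(6, 3))                   # stray triangulations
candidates = np.vstack([inliers, outliers])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(candidates)
labels, counts = np.unique(kmeans.labels_, return_counts=True)
best = labels[np.argmax(counts)]                             # keep the most populated cluster
refined = candidates[kmeans.labels_ == best].mean(axis=0)
print("refined 3D point:", np.round(refined, 3))
```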
