21

Received radiation dose assessment for nuclear plants personnel by video-based surveillance

Jorge, Carlos Alexandre Fructuoso 07 1900 (has links)
This work proposes the development of a system to evaluate the radiation dose received by nuclear plant personnel. The system is conceived to operate in a form complementary to the existing approaches for radiological protection, thus offering redundancy, which is desirable for critical plant operation. The proposed system must operate independently of the actions performed by the operators under evaluation; therefore, it was decided it would be based on methods used for video surveillance. The nuclear plant used as an example is the Argonauta Nuclear Research Reactor, belonging to Instituto de Engenharia Nuclear, Comissão Nacional de Energia Nuclear (Nuclear Engineering Institute, National Nuclear Energy Commission). During this thesis research, both radiation dose rate distribution and video databases were obtained. Methods available in the literature for target detection and/or tracking were evaluated on this database. From these results, a new system was proposed, with the purpose of meeting the requisites of this particular application. Given the tracked positions of each worker, the radiation dose received by each one during task execution is estimated, and may serve as part of a decision support system.
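To make the final dose-estimation step concrete, here is a minimal sketch that integrates a dose-rate map along a tracked worker trajectory; the grid lookup, units and all names are illustrative assumptions, not the implementation developed in the thesis.

```python
import numpy as np

def accumulated_dose(track_xy, dose_rate_map, cell_size_m, frame_dt_s):
    """Integrate a dose-rate map (e.g. mSv/h) along a tracked trajectory.

    track_xy      : (N, 2) worker positions in metres, one row per video frame
    dose_rate_map : 2D grid of dose rates measured in the plant
    cell_size_m   : grid spacing in metres
    frame_dt_s    : time between consecutive frames in seconds
    """
    dose = 0.0
    for x, y in track_xy:
        # nearest-cell lookup of the dose rate at the tracked position
        i = min(int(y / cell_size_m), dose_rate_map.shape[0] - 1)
        j = min(int(x / cell_size_m), dose_rate_map.shape[1] - 1)
        dose += dose_rate_map[i, j] * frame_dt_s / 3600.0  # rate is per hour, interval in seconds
    return dose

# Example: 10 s spent at a spot with a 2 mSv/h dose rate -> about 0.0056 mSv
track = np.tile([[1.0, 1.0]], (100, 1))
rates = np.full((50, 50), 2.0)
print(accumulated_dose(track, rates, cell_size_m=0.1, frame_dt_s=0.1))
```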
22

Image processing algorithms for the visualization of interventional devices in X-ray fluoroscopy / Algorithmes de traitement d'images pour la visualisation d'outils interventionnels dans des séquences de fluoroscopie par rayons X

Bismuth, Vincent 09 January 2012 (has links)
La pose de stent est l'option de traitement la plus courante de la maladie coronarienne, l'une des principales causes de mortalité dans le monde. Lors d'une procédure de pose de stent, le médecin insère des outils chirurgicaux dans le réseau vasculaire du patient. La progression de ces outils à l'intérieur du corps est suivie en temps réel sous fluoroscopie par rayons X. Trois outils, en particulier, jouent un rôle crucial dans la procédure : le guide, le ballon d'angioplastie et le stent. Le guide apparaît dans les images sous la forme d'une structure curviligne fine. Le ballon, monté sur le guide, est équipé de deux marqueurs radio-opaques à ses extrémités. Le stent est un maillage métallique qui se projette en une forme complexe dans les images fluoroscopiques. Le stent, dont le bon déploiement est essentiel au succès du geste médical, est souvent très difficilement visible dans les images. Les travaux présentés dans cette thèse poursuivent un double objectif. Il s'agit d'une part de concevoir, d'étudier et de valider des techniques de traitement d'image visant à améliorer la visualisation des stents. D'autre part, nous étudions le traitement des structures curvilignes (comme les guides) pour lesquelles nous proposons un nouvel outil. Nous présentons des algorithmes de traitement d'image dédiés à la visualisation 2D et 3D des stents. Nous sommes amenés, dans ce but, à détecter, suivre et recaler, de manière complètement automatique, les outils nécessaires à la pose de stent que sont le guide et le ballon. Le stent étant à peine visible dans les images, nous ne cherchons pas à le localiser directement à l'aide de techniques de traitement d'images. La position et le mouvement du stent sont déterminés par nos algorithmes […]. Nous avons évalué la performance de ces outils pour la visualisation des stents en 2D, sur une large base de près de 200 cas cliniques. Il en ressort que notre méthode surpasse les méthodes utilisées jusqu'ici sur le plan de la qualité d'image. La validation exhaustive que nous avons menée confirme que nous avions atteint un niveau compatible avec son introduction commerciale. Le logiciel qui en résulte est désormais installé sur un grand nombre de sites cliniques, où il est régulièrement utilisé. La méthode de visualisation 3D des stents que nous proposons utilise les marqueurs pour effectuer une reconstruction tomographique compensée en mouvement. Nous exposons des résultats préliminaires sur une base de 22 cas cliniques. Il semble que notre méthode surpasse les méthodes précédemment employées aussi bien du point de vue de la qualité d'image que de l'automatisation. Les méthodes de visualisation des stents que nous proposons s'appuient sur la segmentation de la portion du guide qui traverse le stent. Nous proposons un nouvel outil pour le traitement de telles structures curvilignes que nous appelons : l'Image de Chemins Polygonaux (acronyme PPI en anglais). Cet outil repose sur la notion de chemin localement optimal. L'un des principaux avantages du PPI est d'unir dans un même cadre différents concepts pré-existants. De plus, il permet de contrôler la régularité et la longueur des structures à traiter avec une paramétrisation simple et intuitive. Afin de tirer pleinement parti des performances du PPI, nous proposons un schéma algorithmique efficace pour le calculer.
Nous illustrons son utilisation pour la segmentation automatique du guide, où il surpasse les techniques existantes. / Stent implantation is the most common treatment of coronary heart disease, one of the major causes of death worldwide. During a stenting procedure, the clinician inserts interventional devices inside the patient's vasculature. The navigation of the devices inside the patient's anatomy is monitored in real time, under X-ray fluoroscopy. Three specific interventional devices play a key role in this procedure: the guide-wire, the angioplasty balloon and the stent. The guide-wire appears in the images as a thin curvilinear structure. The angioplasty balloon, which has two characteristic marker balls at its extremities, is mounted on the guide-wire. The stent is a 3D metallic mesh whose appearance is complex in the fluoroscopic images. Stents are barely visible, but the proper assessment of their deployment is key to the procedure. The objective of the work presented in this thesis is twofold. On the one hand, we aim at designing, studying and validating image processing techniques that improve the visualization of stents. On the other hand, we study the processing of curvilinear structures (like guide-wires), for which we propose a new image processing technique. We present algorithms dedicated to the 2D and 3D visualization of stents. Since the stent is hardly visible, we do not attempt to locate it directly in the images by image processing means. The position and motion of the stent are inferred from the location of two landmarks, the angioplasty balloon and the guide-wire, which have characteristic shapes. To this aim, we perform automated detection, tracking and registration of these landmarks. The cornerstone of our 2D stent visualization enhancement technique is the use of the landmarks to perform motion-compensated noise reduction. We evaluated the performance of this technique for 2D stent visualization over a large database of clinical data (nearly 200 cases). The results demonstrate that our method outperforms previous state-of-the-art techniques in terms of image quality. A comprehensive validation confirmed that we reached the level of performance required for the commercial introduction of our algorithm. It is currently deployed in a large number of clinical sites worldwide. The 3D stent visualization that we propose uses the landmarks to achieve motion-compensated tomographic reconstruction. We show preliminary results over 22 clinical cases. Our method seems to outperform previous state-of-the-art techniques both in terms of automation and image quality. The stent visualization methods above rely on the segmentation of the part of the guide-wire extending through the stent. We propose a generic tool to process such curvilinear structures that we call the Polygonal Path Image (PPI). The PPI relies on the concept of locally optimal paths. One of its main advantages is that it unifies the concepts of several previous state-of-the-art techniques in a single formalism. Moreover, the PPI makes it possible to control the smoothness and the length of the structures to segment. Its parametrization is simple and intuitive. In order to fully benefit from the PPI, we propose an efficient scheme to compute it. We demonstrate its applicability to the task of automated guide-wire segmentation, for which it outperforms previous state-of-the-art techniques.
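The cornerstone of the 2D enhancement described in this record is landmark-based motion-compensated noise reduction. The fragment below is a deliberately simplified, translation-only sketch of that idea (the actual method registers frames using both balloon marker balls and more general transforms); all names are illustrative assumptions.

```python
import numpy as np

def motion_compensated_average(frames, markers, ref_index=0):
    """Align frames on a tracked landmark, then average them.

    frames  : list of 2D grayscale arrays of identical shape
    markers : list of (row, col) landmark positions, one per frame
    Averaging frames after registration reduces noise while keeping the
    device (and the stent around it) sharp.
    """
    ref = np.asarray(markers[ref_index], dtype=float)
    acc = np.zeros_like(frames[0], dtype=float)
    for img, m in zip(frames, markers):
        dy, dx = np.round(ref - np.asarray(m, dtype=float)).astype(int)
        shifted = np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
        acc += shifted
    return acc / len(frames)
```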
23

Adaptive dim point target detection and tracking in infrared images

DeMars, Thomas V. 12 1900 (has links)
Approved for public release; distribution is unlimited / The thesis deals with the detection and tracking of dim point targets in infrared images. Research topics include image process modeling with adaptive two-dimensional Least Mean Square (LMS) and Recursive Least Squares (RLS) prediction filters. Target detection is performed by significance testing the prediction error residual. A pulse tracker is developed which may be adjusted to discriminate target dynamics. The methods are applicable to detection and tracking in other spectral bands. / http://archive.org/details/adaptivedimpoint00dema / Major, United States Marine Corps
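As a rough illustration of the prediction-filter idea in this abstract, the sketch below uses a normalized LMS variant (chosen here only for numerical stability) to predict each pixel from causal neighbours and then significance-tests the prediction-error residual; it is a schematic stand-in under stated assumptions, not the filters developed in the thesis.

```python
import numpy as np

def lms_prediction_residual(image, mu=0.5):
    """Adaptive 2D prediction: estimate each pixel from three causal
    neighbours (left, up, up-left) with normalized-LMS weight updates.
    A dim point target violates the local background model, so it leaves
    a large prediction-error residual."""
    img = image.astype(float)
    w = np.zeros(3)
    res = np.zeros_like(img)
    for r in range(1, img.shape[0]):
        for c in range(1, img.shape[1]):
            x = np.array([img[r, c - 1], img[r - 1, c], img[r - 1, c - 1]])
            e = img[r, c] - w @ x                 # prediction error
            w += mu * e * x / (x @ x + 1e-8)      # normalized LMS update
            res[r, c] = e
    return res

def detect_targets(residual, k=4.0):
    """Significance test: flag pixels whose residual exceeds k sigma."""
    return np.abs(residual) > k * residual.std()
```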
24

Using the organizational and narrative thread structures in an e-book to support comprehension

Sun, Yixing January 2007 (has links)
Stories, themes, concepts and references are organized structurally and purposefully in most books. A person reading a book needs to understand themes and concepts within the context. Schank’s Dynamic Memory theory suggested that building on existing memory structures is essential to cognition and learning. Pirolli and Card emphasized the need to provide people with an independent and improved ability to access and understand information in their information seeking activities. Through a review of users’ reading behaviours and of existing e-Book user interfaces, we found that current e-Book browsers provide minimal support for comprehending the content of large and complex books. Readers of an e-Book need user interfaces that present and relate the organizational and narrative structures, and moreover, reveal the thematic structures. This thesis addresses the problem of providing readers with effective scaffolding of multiple structures of an e-Book in the user interface to support reading for comprehension. Recognising a story or topic as the basic unit in a book, we developed novel story segmentation techniques for discovering narrative segments, and adapted story linking techniques for linking narrative threads in semi-structured linear texts of an e-Book. We then designed an e-Book user interface to present the complex structures of the e-Book, as well as to assist the reader in discovering these structures. We designed and developed evaluation methodologies to investigate reading and comprehension in e-Books, in order to assess the effectiveness of this user interface. We designed semi-directed reading tasks using a Story-Theme Map, and a set of corresponding measurements for the answers. We conducted user evaluations with book readers. Participants were asked to read stories, to browse and link related stories, and to identify major themes of stories in an e-Book. This thesis reports the experimental design and results in detail. The results confirmed that the e-Book interface helped readers perform reading tasks more effectively. The most important and interesting finding is that the interface proved to be more helpful to novice readers who had little background knowledge of the book. In addition, each component that supported the user interface was evaluated separately in a laboratory setting, and these results, too, are reported in the thesis.
25

Object Detection and Tracking Using Uncalibrated Cameras

Amara, Ashwini 14 May 2010 (has links)
This thesis considers the problem of tracking an object in world coordinates using measurements obtained from multiple uncalibrated cameras. A general approach to tracking the location of a target involves several phases, including calibrating the camera, detecting the object's feature points over frames, tracking the object over frames, and analyzing the object's motion and behavior. The approach developed here contains two stages. The first stage studies the problem of camera calibration using a calibration object: the camera parameters are retrieved from the known 3D locations of ground data and their corresponding image coordinates. The second stage develops an automated system to estimate the trajectory of the object in 3D from image sequences. This is achieved by combining, adapting and integrating several state-of-the-art algorithms. Synthetic data based on a nearly constant velocity object motion model is used to evaluate the performance of the camera calibration and state estimation algorithms.
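Calibration from known 3D points and their image coordinates is classically done with a Direct Linear Transform; the sketch below shows that standard construction as one plausible reading of the first stage described above (not necessarily the exact formulation used in the thesis).

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate the 3x4 camera projection matrix P from at least six known
    3D calibration points and their 2D image coordinates (Direct Linear
    Transform): stack two linear equations per correspondence and take the
    null-space vector of the resulting system."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = np.array([X, Y, Z, 1.0])
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Projecting a world point with the recovered matrix:
#   xh = P @ [Xw, Yw, Zw, 1];  u, v = xh[0] / xh[2], xh[1] / xh[2]
```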
26

Vehicle detection and tracking using wireless sensors and video cameras

Bandarupalli, Sowmya 06 August 2009 (has links)
This thesis presents the development of a surveillance testbed using wireless sensors and video cameras for vehicle detection and tracking. The experimental study includes testbed design and discusses some of the implementation issues in using wireless sensors and video cameras for a practical application. A group of sensor devices equipped with light sensors is used to detect and localize the position of a moving vehicle. A background subtraction method is used to detect the moving vehicle in the video sequences. The vehicle centroid is calculated in each frame. A non-linear minimization method is used to estimate the perspective transformation that projects 3D points to 2D image points. Vehicle location estimates from three cameras are fused to form a single trajectory representing the vehicle motion. Experimental results using both sensors and cameras are presented. The average error between vehicle location estimates from the cameras and the wireless sensors is around 0.5 ft.
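A minimal sketch of the detection step described above (background subtraction followed by centroid extraction) is given below; the threshold value and function names are assumptions for illustration.

```python
import numpy as np

def vehicle_centroid(frame, background, thresh=30):
    """Background subtraction and centroid extraction for one frame.

    frame, background : 2D grayscale arrays of the same shape
    Returns the (row, col) centroid of the foreground pixels, or None if
    no moving object is detected."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```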
27

Traitement d'images pour la ségrégation en transport de sédiments par charriage : morphologie et suivi d'objets / Image processing for segregation in bedload sediment transport : morphology and tracking

Lafaye de Micheaux, Hugo 04 May 2017 (has links)
Le transport de sédiments en rivières et torrents reste un phénomène mal compris en raison de la polydispersité des particules et de la ségrégation résultante. Il a été mené une étude expérimentale sur un canal permettant d’étudier la ségrégation en charriage d’un mélange de deux classes de billes. Le déplacement collectif des billes est enregistré sous la forme de séquences vidéos. Cette thèse traite des méthodes de traitement d’images développées pour analyser les données obtenues. Premièrement, nous avons développé une méthode de segmentation d’images pour étudier l’influence de l’infiltration de particules fines sur l’évolution d’un lit mobile. Avec cette méthode d’analyse, une étude expérimentale a permis de montrer que l’évolution de la pente du lit présente une décroissance exponentielle. Deuxièmement, nous avons optimisé les algorithmes déterministes de suivi de particules pour permettre l’étude des trajectoires sur l’intégralité du phénomène de ségrégation, ce qui n’était pas possible dans les travaux précédemment effectués à Irstea. Nous avons de plus mis en place des mesures d’évaluation et conçu des vérités terrains afin d’apprécier la qualité des résultats. Des gains de temps, cohérence, précision et mémoire ont été quantifiés. Troisièmement, nous avons développé un nouvel algorithme basé sur le filtrage particulaire à modèles multiples pour mieux gérer les dynamiques complexes des particules et gagner en robustesse. Cette approche permet de prendre en compte les erreurs du détecteur, les corriger et ainsi éviter des difficultés lors du suivi de trajectoires que nous rencontrons notamment avec l’algorithme déterministe / Sediment transport in rivers and mountain streams remains poorly understood partly due to the polydispersity of particles and resulting segregation. Experiments in a channel were carried out to study bedload transport of bimodal bead mixtures. The behavior of the beads is recorded through video sequences. This work is about the development of image processing methods to analyse the obtained data. Firstly, we developed a method of image segmentation to study the infiltration of fine particles and its influence on the evolution of bed mobility. Thanks to this method, an experimental study shows that the bed slope evolution follows an exponential decay. Secondly, we optimised deterministic tracking algorithms to enable the study of trajectories on long-duration phenomena of segregation, which was not possible with previous work done at Irstea. Moreover we set up relevant evaluation measures and elaborated ground truth sequences to quantify the results. We observed benefits in execution time, consistency, precision and memory. Thirdly, we developed a new algorithm based on multiple model particle filtering to better deal with complex dynamics of particles and to gain robustness. This approach allows taking unreliable detections into account, correcting them and thus avoiding difficulties in the target tracking as encountered with the deterministic algorithm
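To give a feel for the third contribution, the sketch below shows one predict/update/resample cycle of a bootstrap particle filter in which each particle randomly picks one of two motion models; it is a much-reduced, hypothetical stand-in for the multiple-model filter described above, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement, sigma_meas=2.0):
    """One cycle of a bootstrap particle filter with two motion models.

    particles   : (N, 4) states [x, y, vx, vy]
    weights     : (N,) particle weights
    measurement : (2,) detected bead position in the image
    """
    n = len(particles)
    moving = rng.random(n) < 0.5                         # per-particle model choice
    particles[moving, :2] += particles[moving, 2:]       # constant-velocity model
    particles += rng.normal(0.0, 0.5, particles.shape)   # process noise (both models)

    d2 = np.sum((particles[:, :2] - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / sigma_meas**2)  # measurement likelihood
    weights /= weights.sum()

    idx = rng.choice(n, size=n, p=weights)               # multinomial resampling
    return particles[idx].copy(), np.full(n, 1.0 / n)
```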
28

Observatoire de trajectoire de piétons à l'aide d'un réseau de télémètre laser à balayage : application à l'intérieur des bâtiments / Pedestrian path monitoring using a scanning laser rangefinder network : application inside buildings

Adiaviakoye, Ladji 10 September 2015 (has links)
Dans la vie de tous les jours, nous assistons à des chorégraphies surprenantes dans les déplacements de foules de piétons. Les mécanismes qui sont à la base de la dynamique des foules humaines restent peu connus. Un des modes d’observation des piétons consiste à réaliser des mesures en conditions réelles (exemple : aéroport, gare, etc.). La trajectoire empruntée, la vitesse et l’accélération sont les données de base pour une telle analyse. C’est dans ce contexte que se placent nos travaux qui combinent étroitement observations en milieu naturel et expérimentations contrôlées. Nous avons proposé un système pour le suivi de plusieurs piétons dans un environnement fermé, à l’aide d’un réseau de télémètres lasers à balayage. Nous avons fait avancer l’état de l’art sur quatre plans. Premièrement, nous avons introduit une méthode de fusion automatique des données, permettant de discriminer les objets statiques (murs, poteaux, etc.) et aussi d’augmenter le taux de détection. Deuxièmement, nous avons proposé une méthode de détection non paramétrique basée sur la modélisation de la marche. L’algorithme estime la position du piéton, que celui-ci soit immobile ou en mouvement. Finalement, notre suivi repose sur la méthode Rao-Blackwell Monte Carlo Association de Données, avec la particularité de suivre un nombre variable de piétons. L’algorithme a été évalué quantitativement par des expériences de comportement social à différents niveaux de densité. Ces expériences ont eu lieu dans une école ; près de 300 piétons ont été suivis, dont une trentaine simultanément. / In everyday life, we witness surprising choreographies in the movements of crowds of pedestrians. The mechanisms that underlie the dynamics of human crowds remain poorly understood. One way of observing pedestrians consists in taking measurements in real conditions (e.g. airport, railway station, etc.). The trajectory taken, the speed and the acceleration are the basic data for such an analysis. It is in this context that our work is placed, closely combining observations in natural settings with controlled experiments. We propose a system for tracking multiple pedestrians in a closed environment using a network of scanning laser rangefinders. We have advanced the state of the art on four levels: first, we introduced an automatic data fusion method to discriminate static objects (walls, poles, etc.) and also to increase the detection rate; second, we proposed a non-parametric detection method based on a model of walking, whose algorithm estimates the position of a pedestrian whether stationary or moving; and finally, our tracking is based on the Rao-Blackwellized Monte Carlo Data Association method, with the particularity of following a variable number of pedestrians. The algorithm was quantitatively evaluated through social-behaviour experiments at different density levels. These experiments took place in a school; nearly 300 pedestrians were tracked, about thirty of them simultaneously.
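As one way to picture the first contribution (discriminating static objects), the sketch below builds an occupancy-style background mask from a sequence of laser scans: cells hit in almost every scan are treated as walls or poles. The grid, threshold and names are illustrative assumptions rather than the thesis' fusion method.

```python
import numpy as np

def static_background_mask(scans, grid_shape, cell_size, min_fraction=0.9):
    """Flag grid cells occupied in nearly every laser scan as static objects.

    scans      : list of (M, 2) arrays of hit coordinates (x, y) in metres
    grid_shape : (rows, cols) of the occupancy grid
    cell_size  : grid resolution in metres
    """
    counts = np.zeros(grid_shape, dtype=int)
    for pts in scans:
        cols = np.clip((pts[:, 0] / cell_size).astype(int), 0, grid_shape[1] - 1)
        rows = np.clip((pts[:, 1] / cell_size).astype(int), 0, grid_shape[0] - 1)
        hit = np.zeros(grid_shape, dtype=bool)
        hit[rows, cols] = True          # count each cell at most once per scan
        counts += hit
    return counts >= min_fraction * len(scans)
```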
29

文件距離為基礎kNN分群技術與新聞事件偵測追蹤之研究 / A study of relative text-distance-based kNN clustering technique and news events detection and tracking

陳柏均, Chen, Po Chun Unknown Date (has links)
新聞事件可描述為「一個時間區間內、同一主題的相似新聞之集合」，而新聞大多僅是一完整事件的零碎片段，其內容也易受到媒體立場或撰寫角度不同有所差異；除此之外，龐大的新聞量亦使得想要瞭解事件全貌的困難度大增。因此，本研究將利用文字探勘技術群聚相關新聞為事件，以增進新聞所帶來的價值。 分類分群為文字探勘中很常見的步驟，亦是本研究將新聞群聚成事件所運用到的主要方法。最近鄰 (k-nearest neighbor, kNN)搜尋法可視為分類法中最常見的演算法之一，但由於kNN在分類上必須要每篇新聞兩兩比較並排序才得以選出最近鄰，這也產生了kNN在實作上的效能瓶頸。本研究提出了一個「建立距離參考基準點」的方法RTD-based kNN (Relative Text-Distance-based kNN)，透過在向量空間中建立一個基準點，讓所有文件利用與基準點的相對距離建立起遠近的關係，使得在選取前k個最近鄰之前，直接以相對關係篩選出較可能的候選文件，進而選出前k個最近鄰，透過相對距離的概念減少比較次數以改善效率。 本研究於Google News中抽取62個事件(共742篇新聞)，並依其分群結果作為測試與評估依據，以比較RTD-based kNN與kNN新聞事件分群時的績效。實驗結果呈現出RTD-based kNN的基準點以常用字字彙建立較佳，分群後的再合併則有助於改善結果，而在RTD-based kNN與kNN的F-measure並無顯著差距(α=0.05)的情況下，RTD-based kNN的運算時間低於kNN達28.13%。顯示RTD-based kNN能提供新聞事件分群時一個更好的方法。最後，本研究提供一些未來研究之方向。 / News events can be described as "the aggregation of many similar news articles that describe a particular incident within a specific time frame". Most news articles portray only a fragment of a complete event, and their content is easily biased by the media outlet's standpoint or the reporter's viewpoint; in addition, the sheer volume of news makes it much harder to grasp the full picture of an incident. Therefore, this research employs text mining techniques to cluster related news into events, increasing the value the news provides. Classification and clustering are frequently used techniques in text mining, and k-nearest neighbor (kNN) is one of the most common classification algorithms. However, kNN requires pairwise comparison and ranking of every article to select the nearest neighbors, which becomes its performance bottleneck. This research proposes Relative Text-Distance-based kNN (RTD-based kNN). The core concept is to establish a base, a distance reference point in the vector space; every document's nearness relationships can then be derived from its relative distance to this base, so that likely candidate documents are filtered out before the k nearest neighbors are selected. Through this notion of relative distance, the number of comparisons is reduced and efficiency is improved. This research draws a sample of 62 events (742 news articles in total) from Google News for testing and evaluation, comparing the performance of RTD-based kNN and kNN on news event clustering. The experimental results show that building the base from common vocabulary works better and that merging clusters after clustering helps improve the results. With no significant difference between RTD-based kNN and kNN in F-measure (α=0.05), RTD-based kNN reduces computation time by 28.13% compared with kNN. This confirms that RTD-based kNN is a better method for clustering news events. Finally, this research suggests some directions for future work.
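A minimal sketch of the base-point idea is given below: documents are pre-filtered by comparing their distance to a fixed reference vector with the query's, using the reverse triangle inequality, before exact k-nearest-neighbour selection. The margin heuristic and all names are illustrative assumptions, not the thesis' exact procedure.

```python
import numpy as np

def rtd_knn(query, docs, base, k=5, margin=None):
    """kNN search with a reference-point ("base") pre-filter.

    By the reverse triangle inequality,
        |d(doc, base) - d(query, base)| <= d(doc, query),
    so documents whose base-distance differs too much from the query's
    cannot be close and are skipped before the exact comparison.

    docs : (N, D) TF-IDF-style document vectors, base : (D,) reference vector
    """
    d_base = np.linalg.norm(docs - base, axis=1)      # can be precomputed offline
    q_base = np.linalg.norm(query - base)
    gaps = np.abs(d_base - q_base)
    if margin is None:
        margin = 2.0 * np.partition(gaps, k - 1)[k - 1]    # simple heuristic margin
    cand = np.flatnonzero(gaps <= margin)
    dist = np.linalg.norm(docs[cand] - query, axis=1)      # exact distances, candidates only
    return cand[np.argsort(dist)[:k]]
```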
30

Performance Improvement of a 3D Reconstruction Algorithm Using Single Camera Images

Kilic, Varlik 01 July 2005 (has links) (PDF)
This study aims to improve a set of image processing techniques used in a previously developed method for reconstructing the 3D parameters of a secondary passive target using single camera images. This 3D reconstruction method was developed and implemented on a setup consisting of a digital camera, a computer, and a positioning unit. Some automatic target recognition techniques were also included in the method. The passive secondary target used is a circle with two internal spots. In order to achieve real-time target detection, the existing binarization, edge detection, and ellipse detection algorithms are debugged, modified, or replaced to increase speed, eliminate run-time errors, and make them compatible with target tracking. An overall speed of 20 Hz is achieved for 640x480 pixel resolution 8-bit grayscale images on a 2.8 GHz computer. A novel target tracking method with various tracking strategies is introduced to reduce the search area for target detection and to achieve a detection and reconstruction speed at the maximum frame rate of the hardware. Based on the previously suggested lens distortion model, distortion measurement, distortion parameter determination, and distortion correction methods for both radial and tangential distortions are developed. The implementation of this distortion correction method enhances the accuracy of the 3D reconstruction. The overall 3D reconstruction method is implemented in an integrated software and hardware environment as a combination of the methods with the best performance among their alternatives. This autonomous and real-time system is able to detect the secondary passive target and reconstruct its 3D configuration parameters at a rate of 25 Hz. Even in extreme conditions, in which it is difficult or impossible to detect the target, no runtime failures are observed.
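Radial-plus-tangential correction of this kind is commonly written with the Brown–Conrady model; the sketch below shows that standard form and a fixed-point inversion, as an illustration rather than the specific distortion model used in the thesis.

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates."""
    r2 = xn**2 + yn**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn**2)
    yd = yn * radial + p1 * (r2 + 2.0 * yn**2) + 2.0 * p2 * xn * yn
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=10):
    """Correct distorted coordinates by fixed-point iteration of the model."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn**2 + yn**2
        radial = 1.0 + k1 * r2 + k2 * r2**2
        dx = 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn**2)
        dy = p1 * (r2 + 2.0 * yn**2) + 2.0 * p2 * xn * yn
        xn, yn = (xd - dx) / radial, (yd - dy) / radial
    return xn, yn
```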
