61

An intuitive motion-based input model for mobile devices

Richards, Mark Andrew January 2006 (has links)
Traditional methods of input on mobile devices are cumbersome and difficult to use. Devices have become smaller, while their operating systems have become more complex, to the extent that they are approaching the level of functionality found on desktop computer operating systems. The buttons and toggle-sticks currently employed by mobile devices are a relatively poor replacement for the keyboard and mouse style user interfaces used on their desktop computer counterparts. For example, when looking at a screen image on a device, we should be able to move the device to the left to indicate we wish the image to be panned in the same direction. This research investigates a new input model based on the natural hand motions and reactions of users. The model developed by this work uses the generic embedded video cameras available on almost all current-generation mobile devices to determine how the device is being moved and maps this movement to an appropriate action. Surveys using mobile devices were undertaken to determine both the appropriateness and efficacy of such a model as well as to collect the foundational data with which to build the model. Direct mappings between motions and inputs were achieved by analysing users' motions and reactions in response to different tasks. Once the framework was complete, a proof of concept was created on the Windows Mobile platform. This proof of concept leverages both DirectShow and Direct3D to track objects in the video stream, maps these objects to a three-dimensional plane, and determines device movements from this data. This input model holds the promise of being a simpler and more intuitive method for users to interact with their mobile devices, and has the added advantage that no hardware additions or modifications to existing mobile devices are required.
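The proof of concept described above tracked objects with DirectShow and Direct3D on Windows Mobile; as a rough illustration of the same idea on commodity hardware, the sketch below estimates device motion from the camera stream with OpenCV dense optical flow and maps the dominant apparent displacement to a pan command. The library calls, thresholds and sign convention are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: estimate device motion from the camera and map it to a pan command.
import cv2
import numpy as np

def pan_command(prev_gray, curr_gray, threshold=1.0):
    """Return a (dx, dy) pan command, or None if the device is roughly still."""
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = np.median(flow[..., 0]), np.median(flow[..., 1])   # robust global motion
    if abs(dx) < threshold and abs(dy) < threshold:
        return None
    # Moving the device left makes scene content appear to move right,
    # so the pan direction is the negative of the apparent image motion.
    return -dx, -dy

cap = cv2.VideoCapture(0)                       # embedded camera (assumed available)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cmd = pan_command(prev, curr)
    if cmd is not None:
        print("pan image by", cmd)
    prev = curr
```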
62

Détection et analyse du mouvement respiratoire à partir d'images fluoroscopiques en radiothérapie / Detection and analysis of respiratory motion from fluoroscopic images in radiotherapy

Grezes-Besset, Louise 09 December 2011 (has links)
The principle of radiotherapy is to deliver the maximum X-ray dose to the tumour while sparing the surrounding healthy tissue as much as possible. In lung cancer treatment, respiratory motion is a major source of uncertainty: the main risks are over-irradiation of healthy tissue and under-irradiation of the tumour. Four-dimensional computed tomography (4DCT) provides patient-specific motion information that can serve as the basis for respiratory motion models, and the availability of on-board imagers (cone-beam computed tomography, CBCT) mounted on the linear accelerator in the treatment room allows a more direct and therefore more accurate estimation of the motion. Besides its rotational 3D use, such an imaging system can also acquire fluoroscopic sequences: sets of 2D radiographic projections acquired over time from the same viewing angle. Our approach fits into gated radiotherapy, in which dose delivery is synchronised with respiration. Current gating techniques rely either on an external signal or on an internal signal derived from markers implanted around the tumour; our approach provides a gating signal obtained from internal data without implanted markers. In this context, we implemented, developed and evaluated three motion-detection methods operating on fluoroscopic sequences, based respectively on intensity variation in the lung, extraction of the diaphragm height, and block tracking; each yields a signal correlated with the respiratory motion. Using a block-matching algorithm, we studied the homogeneity of the apparent motion and determined, without geometric priors, regions in which the motion is uniform: individual point trajectories extracted in the lung region of interest were classified with the k-means++ clustering algorithm, and the apparent motion was then studied separately in each resulting region. We then studied the correlation between the internal signal extracted from the fluoroscopic sequences and a signal extracted from a video camera synchronised with them, which can be regarded as an external signal. In a final part, we propose to estimate the 3D motion of the tumour from an a priori motion model built in a pre-processing step from the planning 4DCT images and from the respiratory signal acquired in the treatment room. The interest of our approach is that it requires no implanted markers, which makes it less invasive than many other techniques; moreover, the tracking itself is 2D and therefore potentially fast, but it rests on an underlying 3D model that preserves as much information as possible. Clinically, the approach would allow daily adaptation to inter-session motion. One limitation is that it requires continuous ionising imaging; a hybrid system combining an internal and an external signal would limit the additional dose. Further work on reducing the computation time is still needed before such an approach can be used to guide a treatment.
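Of the three detection methods mentioned, the intensity-variation approach is the simplest to illustrate. The sketch below, under assumed interfaces (a fluoroscopic sequence as a NumPy array and a manually chosen lung region of interest), derives a markerless respiratory surrogate from the mean grey level in that region; it illustrates the principle only and is not the code evaluated in the thesis.

```python
# Hedged sketch of the intensity-variation idea on a fluoroscopic sequence.
import numpy as np

def respiratory_signal(frames, roi):
    """frames: (T, H, W) fluoroscopic sequence; roi: (y0, y1, x0, x1) lung window."""
    y0, y1, x0, x1 = roi
    raw = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))                     # one value per frame
    trend = np.convolve(raw, np.ones(31) / 31, mode="same")             # slow drift estimate
    detrended = raw - trend
    return (detrended - detrended.mean()) / (detrended.std() + 1e-9)    # normalised surrogate

# Example with synthetic data: 300 frames at ~5 Hz with a breathing-like modulation.
t = np.arange(300) / 5.0
frames = np.random.rand(300, 256, 256) * 5 + 100
frames += 10 * np.sin(2 * np.pi * 0.25 * t)[:, None, None]   # ~15 breaths per minute
signal = respiratory_signal(frames, roi=(60, 180, 40, 160))
print(signal[:10])
```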
63

Apprentissage statistique pour la personnalisation de modèles cardiaques à partir de données d’imagerie / Statistical learning for image-based personalization of cardiac models

Le Folgoc, Loïc 27 November 2015 (has links)
This thesis focuses on the calibration of a patient-specific electromechanical model of the heart from 3D+t medical image data, and on the upstream task of extracting the cardiac motion from 4D images. Long-term perspectives for personalised computer simulation of cardiac function include aid to diagnosis, aid to therapy planning and prevention of risks. To this end, we explore the tools and possibilities offered by statistical learning. To personalise cardiac mechanics, we introduce an efficient framework coupling machine learning with an original statistical description of shape and motion based on the representation of 3D+t currents. The method relies on a reduced statistical model mapping the space of mechanical parameters to the space of cardiac motion. The second focus of the thesis is cardiac motion tracking from medical images with quantification of uncertainty, a key step in the calibration pipeline. We develop a generic sparse Bayesian model of medical image registration with three main contributions: an extended image similarity term, automated tuning of the registration parameters, and quantification of uncertainty. We propose a fast greedy (approximate) inference scheme that is tractable on 4D clinical data. Finally, we examine the quality of the uncertainty estimates returned by the approximate scheme: we compare its predictions with those of an inference procedure faithful to the model, which we develop on the basis of reversible jump MCMC techniques, and we provide further insight into the theoretical properties of the sparse structured Bayesian model and into the empirical behaviour of both inference schemes.
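As a rough illustration of a reduced statistical mapping between cardiac motion and mechanical parameters, the sketch below substitutes generic motion descriptors and an off-the-shelf PCA-plus-ridge-regression pipeline for the 3D+t currents representation actually used in the thesis; all data, dimensions and noise levels are made up for the example.

```python
# Hedged sketch: learn a low-dimensional mapping from simulated motion descriptors
# to the mechanical parameters that generated them, then use it to estimate
# parameters from an "observed" motion.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_sim, n_params, n_motion = 200, 3, 500        # simulations, mechanical params, motion features

theta = rng.uniform(0.5, 2.0, size=(n_sim, n_params))            # e.g. stiffness / contractility
W = rng.normal(size=(n_params, n_motion))
motion = theta @ W + 0.01 * rng.normal(size=(n_sim, n_motion))    # surrogate forward model

# Reduced mapping: compress motion descriptors, then regress mechanical parameters.
model = make_pipeline(PCA(n_components=10), Ridge(alpha=1.0))
model.fit(motion, theta)

observed_motion = motion[:1]                    # stand-in for motion extracted from images
estimated_params = model.predict(observed_motion)
print("estimated mechanical parameters:", estimated_params)
```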
64

Multimodal high-resolution mapping of contracting intact Langendorff-perfused hearts

Schröder-Schetelig, Johannes 07 September 2020 (has links)
No description available.
65

Soluciones tecnologicas para procesos de rehabilitacion y evaluacion kinesica / Technological solutions for processes of rehabilitation and kinesic evaluation

Bailon Perfumo, Pedro Juan, Ortiz Reyes, Walter Eduardo 10 December 2019 (has links)
This project proposes a portfolio of technology projects based on current technological trends such as motion recognition (as used in video-game consoles), pressure and infrared sensors, and image processing, applied to kinesic rehabilitation and evaluation programmes. The proposed projects focus on the needs of kinesic evaluation in sport, postural evaluation, identification of foot pathologies, and follow-up of patients in rehabilitation. As a final result, the project team aims to make an information-technology contribution to the highly important field of kinesic rehabilitation and evaluation by defining a project portfolio that includes both the details of the rehabilitation programmes and kinesic processes studied and the possible technological solutions to be developed. / Tesis
66

Kalman Filter Based Approach : Real-time Control-based Human Motion Prediction in Teleoperation / Kalman Filter baserad metod : Realtids uppskattningar av Kontrollbaserad Mänsklig Rörelse i Teleoperationen

Fan, Zheyu Jerry January 2016 (has links)
This work investigates the performance of two Kalman filter algorithms, the Linear Kalman Filter and the Extended Kalman Filter, for control-based human motion prediction in real-time teleoperation. The Kalman filter has been widely used in research areas such as motion tracking and GPS navigation, but its potential for predicting human motion is rarely discussed. Motivated by a known problem, the delay in today's teleoperation services, the author built a prototype of a simple teleoperation model based on the Kalman filter, with the aim of eliminating the desynchronization between the user's inputs and the visual frames, where all data are transferred over the network. In the first part of the thesis, the two Kalman filter variants are applied to the prototype to predict the movement of a robotic arm from the user's motion applied to a haptic device, and their performance is compared. The second part focuses on optimizing the motion prediction produced by the Kalman filtering by means of a smoothing algorithm. The last part examines the limitations of the prototype, such as how much delay is acceptable and how fast the Phantom haptic device can be moved while still obtaining reasonable predictions with an acceptable error rate. The results show that the Extended Kalman Filter achieved better motion prediction than the Linear Kalman Filter in the experiments. The desynchronization issue was effectively reduced by applying the Kalman filter to both the state and measurement models when the latency is set below 200 milliseconds. The additional smoothing algorithm further increases the accuracy; more importantly, it also removes the jitter in the visual frames of the robotic arm caused by the oscillatory behaviour of the Kalman filter. Furthermore, the optimization method effectively synchronizes the moment at which the robotic arm touches the interactable object in the prediction. The method used in this research can serve as a reference for future work on control-based human motion tracking and prediction.
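For readers unfamiliar with the filtering step, the sketch below shows a minimal linear Kalman filter with a constant-velocity state that extrapolates the operator's hand position ahead by the network latency. The noise values, sampling rate and one-dimensional state are assumptions chosen for illustration; the thesis prototype also covers an Extended Kalman Filter and a smoothing stage, which are not reproduced here.

```python
# Hedged sketch: constant-velocity Kalman filter predicting ahead by the network delay.
import numpy as np

dt, delay = 0.01, 0.2                  # 100 Hz input, 200 ms latency (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [position, velocity]
H = np.array([[1.0, 0.0]])             # we only measure position
Q = np.diag([1e-4, 1e-2])              # process noise (assumed)
R = np.array([[1e-3]])                 # measurement noise (assumed)

x = np.zeros((2, 1))                   # state estimate
P = np.eye(2)                          # state covariance

def kf_step(z):
    """One predict/update cycle, then extrapolate ahead by the latency."""
    global x, P
    x = F @ x                                          # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ y                                      # update
    P = (np.eye(2) - K @ H) @ P
    steps = round(delay / dt)                          # look ahead over the delay
    return (np.linalg.matrix_power(F, steps) @ x)[0, 0]

for k in range(200):                                   # simulated haptic positions
    measured = np.sin(2 * np.pi * 0.5 * k * dt) + 0.01 * np.random.randn()
    predicted = kf_step(np.array([[measured]]))
print(f"predicted position {delay * 1000:.0f} ms ahead: {predicted:.3f}")
```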
67

Event-Driven Motion Compensation in Positron Emission Tomography: Development of a Clinically Applicable Method

Langner, Jens 11 August 2009 (has links) (PDF)
Positron emission tomography (PET) is a well-established functional imaging method used in nuclear medicine. It allows information about biochemical and physiological processes to be retrieved in vivo. The currently achievable spatial resolution of PET is about 5 mm for brain acquisitions and about 8 mm for whole-body acquisitions, while recent improvements in image reconstruction point to a resolution of 2 mm in the near future. Typical acquisition times range from minutes to hours due to the low signal-to-noise ratio of the measuring principle, as well as the need to monitor the metabolism of the patient over a certain time. Patient motion therefore increasingly limits the achievable spatial resolution of PET, and patient immobilisation is only of limited benefit in this context. Patient motion thus leads to a relevant degradation of resolution and to incorrect quantification of metabolic parameters. The present work describes the utilisation of a novel motion compensation method for clinical brain PET acquisitions. Using an external motion tracking system, information about the head motion of the patient is continuously acquired during a PET acquisition. Based on this motion information, a newly developed event-based motion compensation algorithm performs spatial transformations of all registered coincidence events, thus utilising the raw data of a PET system, the so-called 'list-mode' data. For routine acquisition of this raw data, methods have been developed which allow, for the first time, list-mode data to be acquired from an ECAT Exact HR+ PET scanner within an acceptable time frame. Furthermore, methods for acquiring the patient motion in clinical routine and for automatic analysis of the registered motion have been developed. The development of additional methods for integrating the motion compensation approach into clinical routine (e.g. graphical user interfaces) was also part of this work. After development, optimisation and integration of the event-based motion compensation in clinical use, analyses of example data sets were performed. Analysis of the qualitative and quantitative effects of the motion compensation demonstrated noticeable changes: qualitatively, image artefacts were eliminated, while quantitatively, a tracer kinetics analysis of an FDOPA acquisition showed relevant changes in the R0k3 rates of an irreversible two-compartment model with reference tissue. It could thus be shown that integrating a motion compensation method based on the raw data of a PET scanner together with an external motion tracking system is not only reasonable and feasible in clinical use, but also yields relevant qualitative and quantitative improvements in PET imaging.
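The geometric core of the event-based correction, applying the tracked rigid head transform to both endpoints of each coincidence line of response, can be sketched as follows. The data layout and the pose_at interface are hypothetical stand-ins; the actual work operates on ECAT HR+ list-mode data, which is not reproduced here.

```python
# Hedged sketch of event-based (list-mode) motion correction with a rigid transform.
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous matrix from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def correct_events(endpoints_a, endpoints_b, timestamps, pose_at):
    """endpoints_*: (N, 3) LOR endpoints; pose_at(t) -> 4x4 head pose at time t.
    Returns endpoints mapped back into the motion-free reference position."""
    out_a, out_b = np.empty_like(endpoints_a), np.empty_like(endpoints_b)
    for i, ts in enumerate(timestamps):
        T_inv = np.linalg.inv(pose_at(ts))             # undo the measured head motion
        for src, dst in ((endpoints_a, out_a), (endpoints_b, out_b)):
            p = np.append(src[i], 1.0)                 # homogeneous coordinates
            dst[i] = (T_inv @ p)[:3]
    return out_a, out_b

# Tiny example: events acquired during a 5 mm drift along x are mapped back to the start pose.
n = 4
a = np.random.rand(n, 3) * 100
b = np.random.rand(n, 3) * 100
ts = np.linspace(0.0, 10.0, n)
pose = lambda t: rigid_transform(np.eye(3), np.array([0.5 * t, 0.0, 0.0]))
a_corr, b_corr = correct_events(a, b, ts, pose)
print(a_corr)
```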
68

Self-Organizing Neural Visual Models to Learn Feature Detectors and Motion Tracking Behaviour by Exposure to Real-World Data

Yogeswaran, Arjun January 2018 (has links)
Advances in unsupervised learning and deep neural networks have led to increased performance in a number of domains, and to the ability to draw strong comparisons between the biological method of self-organization conducted by the brain and computational mechanisms. This thesis aims to use real-world data to tackle two areas in the domain of computer vision which have biological equivalents: feature detection and motion tracking. The aforementioned advances have allowed efficient learning of feature representations directly from large sets of unlabeled data instead of using traditional handcrafted features. The first part of this thesis evaluates such representations by comparing regularization and preprocessing methods which incorporate local neighbouring information during training on a single-layer neural network. The networks are trained and tested on the Hollywood2 video dataset, as well as the static CIFAR-10, STL-10, COIL-100, and MNIST image datasets. The induction of topography or simple image blurring via Gaussian filters during training produces better discriminative features, as evidenced by the consistent and notable increase in classification results that they produce. In the visual domain, invariant features are desirable so that objects can be classified despite transformations. Most of the compared methods are found to produce more invariant features; however, classification accuracy does not correlate with invariance. The second, and paramount, contribution of this thesis is a biologically-inspired model to explain the emergence of motion tracking behaviour in early development using unsupervised learning. The model's self-organization is biased by an original concept called retinal constancy, which measures how similar visual contents are between successive frames. In the proposed two-layer deep network, when exposed to real-world video, the first layer learns to encode visual motion, and the second layer learns to relate that motion to gaze movements, which it perceives and creates through bi-directional nodes. This is unique because it uses general machine learning algorithms, and their inherent generative properties, to learn from real-world data. It also implements a biological theory and learns in a fully unsupervised manner. An analysis of its parameters and limitations is conducted, and its tracking performance is evaluated. Results show that this model is able to successfully follow targets in real-world video, despite being trained without supervision on real-world video.
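One of the preprocessing variants compared in the first part, simple Gaussian blurring during training, can be illustrated with a generic single-layer feature learner. The sketch below uses random stand-in images and an off-the-shelf k-means dictionary; it is not the networks, datasets or training protocol of the thesis.

```python
# Hedged sketch: blur image patches with a Gaussian filter before learning a
# dictionary of single-layer feature detectors.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import MiniBatchKMeans

def extract_patches(images, patch=8, n_patches=5000, sigma=1.0, seed=0):
    """images: (N, H, W) grayscale; returns blurred, normalised, flattened patches."""
    rng = np.random.default_rng(seed)
    N, H, W = images.shape
    out = np.empty((n_patches, patch * patch))
    for i in range(n_patches):
        n = rng.integers(N)
        y, x = rng.integers(H - patch), rng.integers(W - patch)
        p = images[n, y:y + patch, x:x + patch]
        p = gaussian_filter(p, sigma=sigma)            # blur during training
        out[i] = (p - p.mean()).ravel() / (p.std() + 1e-8)
    return out

images = np.random.rand(100, 32, 32)                   # stand-in for CIFAR-10-sized frames
patches = extract_patches(images)
features = MiniBatchKMeans(n_clusters=64, n_init=3).fit(patches)
print("learned feature detectors:", features.cluster_centers_.shape)
```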
70

COMPARISON OF WRIST VELOCITY MEASUREMENT METHODS: IMU, GONIOMETER AND OPTICAL MOTION CAPTURE SYSTEM / JÄMFÖRELSE AV HANDLEDSMÄTNING METODER: IMU, GONIOMETER OCH OPTISKT RÖRELSEFÅNGNINGSSYSTEM

Manivasagam, Karnica January 2020 (has links)
Repetitive tasks, awkward hand/wrist postures and forceful exertions are known risk factors for work-related musculoskeletal disorders (WMSDs) of the hand and wrist. WMSDs are a major cause of long-term absence from work, productivity loss, loss of wages and individual suffering. Currently available methods for assessing hand/wrist motion are either inaccurate, e.g. self-reports or observations, or expensive and resource-demanding in the subsequent analysis, e.g. electrogoniometers. There is therefore a need for a risk assessment method that is easy to use and can be applied by both researchers and practitioners to measure wrist angular velocity over an 8-hour working day. Wearable inertial measurement units (IMUs) in combination with mobile phone applications offer the possibility of such a method. Before the IMU can be applied in the field to assess wrist velocity in different work tasks, the accuracy of the method needs to be examined. This laboratory experiment was therefore conducted to compare a new IMU-based method with a traditional goniometer and a standard optical motion capture system. The experiment was performed on twelve participants. Three standard hand movements, comprising hand/wrist flexion-extension (FE), deviation, and pronation-supination (PS) at 30, 60 and 90 beats per minute (bpm), and three simulated work tasks were performed. The angular velocities of the three methods at the 50th and 90th percentiles were calculated and compared, and the mean absolute error and the correlation coefficient were analysed. During the standard hand movements, the error increased with speed (bpm); the comparison between IMUbyaxis and the goniometer showed the smallest difference and the highest correlation coefficient. For the simulated work tasks, the difference between the goniometer and the optical system was the smallest, although the differences between the compared methods were in general much larger than for the standard hand movements. The IMU-based method thus shows potential compared with the traditional measurement methods, but it needs further improvement before it can be used for risk assessment in the field.
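The summary measures compared across methods, the 50th and 90th percentile wrist angular velocities, can be computed from a wrist-worn gyroscope roughly as sketched below. The sampling rate, smoothing window and simulated signal are assumptions for illustration, not the study's actual processing chain.

```python
# Hedged sketch: percentile summary of wrist angular velocity from gyroscope data.
import numpy as np

def angular_velocity_percentiles(gyro, fs=100.0, smooth_window=0.1):
    """gyro: (N, 3) angular rate in deg/s; returns (p50, p90) of the angular speed."""
    speed = np.linalg.norm(gyro, axis=1)               # magnitude of the rotation rate
    win = max(1, int(smooth_window * fs))              # simple moving-average smoothing
    smooth = np.convolve(speed, np.ones(win) / win, mode="same")
    return np.percentile(smooth, 50), np.percentile(smooth, 90)

# Example: 60 s of simulated flexion-extension at 60 beats per minute.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
gyro = np.column_stack([90 * np.sin(2 * np.pi * 1.0 * t),   # deg/s about the flexion axis
                        np.zeros_like(t), np.zeros_like(t)])
p50, p90 = angular_velocity_percentiles(gyro, fs=fs)
print(f"median {p50:.1f} deg/s, 90th percentile {p90:.1f} deg/s")
```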
