1

Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance

Tian, Xiaoyu January 2016 (has links)
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations, and CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, nearly 85 million of them in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase, mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-level radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely minimal; however, the comparatively large population-level radiation burden has prompted substantial efforts within the community to manage and optimize CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10], and it is thus a responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is determining the minimum dose that achieves the targeted image quality [11]. On this principle, dose optimization would benefit significantly from effective metrics that characterize the radiation dose and image quality of a CT exam. Moreover, if the radiation dose and image quality could be accurately predicted before the exam begins, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. A second aim of the study is to bring these theoretical models into clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for body CT examinations under constant tube current conditions. The study modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions, and further evaluated the dependence of organ dose coefficients on patient size and scanner model. Distinct from prior work, these studies use the largest number of patient models to date, with a representative range of age, weight percentile, and body mass index (BMI).

With organ dose effectively quantified under constant tube current conditions, Chapter 4 extends the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with dose estimates from Monte Carlo simulations in which the TCM function is explicitly modeled.

Chapter 5 implements the organ dose-estimation framework in clinical practice as an organ dose-monitoring program built on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). The first phase of the study focused on body CT examinations: the patient's major body landmarks were extracted from the scout image to match each clinical patient to a computational phantom in the library, the organ dose coefficients were estimated from the CT protocol and patient size as reported in Chapter 3, and the exam CTDIvol, DLP, and TCM profiles were extracted to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines a method to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence in the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of the simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum acceptable diagnostic performance, thereby achieving protocol optimization.

Chapter 8 concludes the thesis with a summary of the accomplished work and a discussion of future research. / Dissertation
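As a rough illustration of the size-dependent dose model summarized above, the sketch below (a minimal Python sketch, not the thesis code) expresses organ dose as a CTDIvol-normalized coefficient that decays exponentially with patient effective diameter, the size dependence commonly reported in the organ-dose literature. The fit constants `alpha` and `beta` are hypothetical placeholders for the per-organ, per-protocol values such a study would derive from Monte Carlo simulations on a phantom library.

```python
import numpy as np

def organ_dose_mGy(ctdi_vol_mGy, effective_diameter_cm, alpha, beta):
    """Predict organ dose as h(d) * CTDIvol, where the CTDIvol-normalized
    organ dose coefficient h(d) = alpha * exp(-beta * d) decays
    exponentially with patient effective diameter d. alpha and beta are
    organ- and protocol-specific fit constants (hypothetical here)."""
    h = alpha * np.exp(-beta * effective_diameter_cm)
    return h * ctdi_vol_mGy

# Illustration only: a 28 cm effective-diameter abdomen scanned at
# CTDIvol = 10 mGy, with made-up fit constants.
print(organ_dose_mGy(10.0, 28.0, alpha=3.0, beta=0.04))  # ~9.8 mGy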
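Similarly, the image-based noise addition named for Chapter 7 can be sketched from the fact that CT quantum noise variance scales inversely with dose. The snippet below is a simplified, assumed version that injects white Gaussian noise; a realistic implementation would also shape the injected noise by the scanner's noise power spectrum.

```python
import numpy as np

def simulate_reduced_dose(image, dose_fraction, sigma_full):
    """Return a synthetic image as if acquired at `dose_fraction` of the
    original dose. Since noise variance scales as 1/dose, the injected
    noise needs standard deviation sigma_full * sqrt(1/dose_fraction - 1)
    for the total noise to match the reduced-dose level. `sigma_full` is
    the noise (in HU) measured in a uniform region of the full-dose image."""
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + np.random.normal(0.0, sigma_add, image.shape)

# e.g., a simulated half-dose image from a full-dose slice with 12 HU noise:
# half_dose = simulate_reduced_dose(full_dose_slice, 0.5, sigma_full=12.0)
```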
2

Optimization of PTAH staining for visualization of ischemic changes in the myocardium / Optimization of PTAH staining protocol for myocardial infarction diagnosis

Persson, Jenny January 2023 (has links)
Acute myocardial infarction (MI) is a common cause of death globally. Infarctions do not always produce clear symptoms, so the cause of death may remain unknown until a clinical autopsy with histopathological examination of the myocardium is performed. Phosphotungstic acid haematoxylin (PTAH) is a staining method that can visualize myocardial infarction by staining myocardial cells, fibrin, contraction bands, and cell nuclei blue, while collagen stains pink to reddish brown. The aim of this degree project was to optimize a PTAH staining protocol to facilitate the diagnosis of myocardial infarction. Three staining protocols, two using PTAH with Mallory bleach (#1 and #2) and one using PTAH with refixation in Bouin's solution (#3), were compared with incubation overnight at room temperature and for 3-4 h in a heating cabinet at 56 °C. The stained tissues came from paraffin-embedded blocks from reported autopsy cases with confirmed myocardial infarction; these were sectioned and heat-mounted on uncharged as well as charged slides. After staining, the results were graded on two categories, tissue differentiation and cell components, using a scale from 0 (not assessable) to 3 (optimal). The highest-scoring protocol was then optimized to find the best incubation time in the PTAH solution, after which the protocol was implemented by confirming consistent staining across repeated stainings of myocardial tissue. Protocol #3 scored highest in the comparison with protocols #1 and #2, and during optimization a 4 h incubation in the heating cabinet at 56 °C proved most favorable, as this incubation time was also less sensitive to dehydration. In conclusion, PTAH staining according to protocol #3, with refixation in Bouin's solution and 4 h in the heating cabinet, gave the best results, although the color tone varies with the time between death and autopsy.
3

Study of the adaptation mechanisms of wireless sensor nodes to their context for ultra-low power consumption

Liendo Sanchez, Andreina 25 October 2018 (has links)
The Internet of Things (IoT) is announced as the next big technological revolution, in which billions of devices will interconnect using Internet technologies and let users interact with the physical world, enabling smart homes, smart cities, smart everything. Wireless sensor networks (WSNs) are crucial to turning the IoT vision into reality, but for this to come true, many of these devices need to be energy-autonomous. Hence, one major challenge is to provide multi-year lifetimes while nodes are powered by batteries or harvested energy. Bluetooth Low Energy (BLE) has shown higher energy efficiency and robustness than other well-known WSN protocols, making it a strong candidate for implementation in IoT scenarios. Additionally, BLE is present in almost every smartphone, turning it into a ubiquitous remote control for smart homes, buildings, or cities. Nevertheless, BLE performance still needs improvement for typical IoT use cases, where battery lifetime should reach many years.

In this work we evaluated BLE performance in terms of latency and energy consumption using analytical models, in order to optimize its performance and obtain its maximum level of energy efficiency without, in the first place, modifying the specification. For this purpose, we proposed a classification of scenarios as well as modes of operation for each scenario. Energy efficiency is achieved for each mode of operation by optimizing the parameters assigned to the BLE nodes during the neighbor discovery phase. This parameter optimization was based on an energy model taken from the state of the art. The model, in turn, was refined to estimate latency and energy consumption whatever the behavior of the nodes at the two levels, application and communication; a node can be the central device at one level while simultaneously being the peripheral device at the other, which affects its final performance.

In addition, a novel battery lifetime estimation model was presented to show the actual impact that energy consumption optimization has on node lifetime, in a fast (in terms of simulation time) and realistic (by taking empirical data into account) way. Performance results were obtained in our Matlab simulator, built on the object-oriented programming paradigm, through several IoT test cases. The latency model used in our investigation was experimentally validated, as was the proposed parameter optimization, showing high accuracy.

After obtaining the best performance possible from BLE without modifying the specification, we evaluated the protocol's performance when implementing the concept of a wake-up radio (WuR): an ultra-low-power receiver in charge of sensing the communication channel, waiting for a signal addressed to the node, and then waking the main radio up.
Thus the main radio, which consumes far more energy, can remain in sleep mode for long periods and switch to active mode only for packet reception, saving a considerable amount of energy. We demonstrated that BLE lifetime can be significantly increased by implementing a wake-up radio, and we propose a modification of the protocol to make it compatible with an operating mode that includes one. To this end, we studied the state of the art of wake-up radios and evaluated BLE device lifetime when a selected wake-up radio is implemented on the master side.
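As a back-of-the-envelope illustration of why a wake-up radio extends lifetime, the sketch below compares the average current of a duty-cycled BLE node against a node whose main radio stays off behind an always-on micro-power wake-up receiver. All figures are hypothetical round numbers for illustration, not measurements from this work.

```python
def battery_lifetime_years(capacity_mAh, i_avg_mA):
    """Ideal battery lifetime: capacity divided by average current draw."""
    return capacity_mAh / i_avg_mA / (24 * 365)

def duty_cycled_avg_current(i_sleep_mA, i_active_mA, t_active_s, period_s):
    """Average current of a radio that wakes every `period_s` seconds
    for `t_active_s` seconds of activity (an advertising or connection event)."""
    duty = t_active_s / period_s
    return i_active_mA * duty + i_sleep_mA * (1 - duty)

# Hypothetical numbers: 230 mAh coin cell; a 2 ms BLE event at 10 mA
# every 1 s with 1 uA sleep current, versus a 5 uA always-on wake-up
# receiver that lets the main radio wake only on demand (~every 60 s).
ble = duty_cycled_avg_current(0.001, 10.0, 0.002, 1.0)
wur = 0.005 + duty_cycled_avg_current(0.0, 10.0, 0.002, 60.0)
print(battery_lifetime_years(230, ble))  # ~1.25 years
print(battery_lifetime_years(230, wur))  # ~4.9 years
```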
