About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Supervision en ligne de propriétés temporelles dans les systèmes distribués temps-réel / Online monitoring of temporal properties in distributed real-time system

Baldellon, Olivier 07 November 2014
Les systèmes actuels deviennent chaque jour de plus en plus complexes ; à la distribution s’ajoutent les contraintes temps réel. Les méthodes classiques en charge de garantir la sûreté de fonctionnement, comme le test, l’injection de fautes ou les méthodes formelles, ne sont plus suffisantes à elles seules. Afin de pouvoir traiter les éventuelles erreurs lors de leur apparition dans un système distribué donné, nous désirons mettre en place un programme, surveillant ce système, capable de lancer une alerte lorsque ce dernier s’éloigne de ses spécifications ; un tel programme est appelé superviseur (ou moniteur). Le fonctionnement d’un superviseur consiste simplement à interpréter un ensemble d’informations provenant du système sous forme de messages, que l’on qualifiera d’évènements, et à en déduire un diagnostic. L’objectif de cette thèse est de mettre en place un superviseur distribué permettant de vérifier en temps réel des propriétés temporelles. En particulier, nous souhaitons que notre moniteur soit capable de vérifier un maximum de propriétés avec un minimum d’information. Ainsi, notre outil est spécialement conçu pour fonctionner parfaitement même si l’observation est imparfaite, c’est-à-dire même si certains évènements arrivent en retard ou ne sont jamais reçus. Nous avons de plus cherché à atteindre cet objectif de manière distribuée, pour des raisons évidentes de performance et de tolérance aux fautes. Nous avons ainsi proposé un protocole distribuable fondé sur l’exécution répartie d’un réseau de Petri temporisé. Pour vérifier la faisabilité et l’efficacité de notre approche, nous avons mis en place une implémentation appelée Minotor, qui s’est révélée avoir de très bonnes performances.
Enfin, pour montrer l’expressivité du formalisme utilisé pour exprimer les spécifications que l’on désire vérifier, nous avons détaillé un ensemble de propriétés sous forme de réseaux de Petri à double sémantique introduite dans cette thèse (l’ensemble des transitions étant partitionné en deux catégories, chacune ayant sa propre sémantique). / Current systems are becoming more and more complex every day, being both distributed and real-time. Conventional methods for guaranteeing dependability, such as testing, fault injection or formal methods, are no longer sufficient on their own. In order to handle errors as they appear in a given distributed system, we want to deploy a program that observes this system and raises an alert when the system no longer respects its specification. Such a program is called a monitor. A monitor interprets information received from the system as messages (called events) and produces a diagnosis. The objective of this thesis is to design a distributed monitor for the real-time verification of temporal properties. In particular, we want our monitor to be able to check as many properties as possible with as little information as possible. Thus, our tool is designed to work correctly even if the observation is imperfect, that is to say, even if some events arrive late or are never received. We also achieve this goal through a highly distributed protocol. To verify the feasibility and effectiveness of our approach, we built an implementation called Minotor, which proved to have very good performance. Finally, we detail a set of properties, expressed in our formalism, to show its expressiveness.
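The core monitoring idea — flag a violation as soon as observed events show the system drifting from its specification, even when events arrive late or out of order — can be illustrated with a small Python sketch. This is illustrative only: the thesis implements a distributed timed-Petri-net protocol, not this code, and the bounded-response property and event format here are assumptions.

```python
import heapq

def check_bounded_response(events, deadline):
    """Check the property 'every request is answered within `deadline` time
    units' over a trace of (timestamp, kind) events.  Late or reordered
    delivery is tolerated by sorting on source timestamps; requests still
    open when the trace ends are not reported, since the matching response
    may simply not have been observed yet."""
    pending = []      # min-heap of request timestamps awaiting a response
    violations = []
    for t, kind in sorted(events):
        # any pending request whose deadline expired before t is a violation
        while pending and pending[0] + deadline < t:
            violations.append(heapq.heappop(pending))
        if kind == "request":
            heapq.heappush(pending, t)
        elif kind == "response" and pending:
            heapq.heappop(pending)    # match the oldest open request
    return violations
```

Because the trace is sorted on source timestamps before evaluation, delivering the same events in a different order yields the same verdict, which is the robustness property the thesis aims for.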
22

A novel method for the Approximation of risk of Blackout in operational conditions / Une nouvelle méthode pour le rapprochement des risques de "Blackout" dans des conditions opérationnelles

Urrego Agudelo, Lilliam 04 November 2016
L’industrie de l’électricité peut être caractérisée par plusieurs risques : la réglementation, la capacité, l’erreur humaine, etc. L’un des aspects les plus remarquables, en raison de son impact, est lié à la demande non satisfaite (DNS). Pour éviter les défaillances en cascade, des critères déterministes comme le N-1 ont été appliqués, ce qui permet d’éviter la défaillance initiale. Après une défaillance en cascade, des efforts considérables doivent être faits pour analyser les défauts afin de minimiser la possibilité d’un événement similaire. En dépit de tous ces efforts, des blackouts peuvent encore se produire. En effet, obtenir une bonne prédiction d’une situation est un défi, en raison du grand nombre d’interactions possibles et de leur diversité et complexité. Dans notre travail, une nouvelle méthodologie est proposée pour estimer le risque de blackout en utilisant des modèles de systèmes complexes. Cette approche est basée sur l’utilisation de variables qui peuvent être des précurseurs d’un événement DNS. Elle repose sur l’étude de la dépendance ou de la corrélation entre les variables impliquées dans le risque de blackout, et sur la caractéristique d’auto-organisation critique (SOC) des systèmes complexes. La VaR est calculée en utilisant les données du système colombien et le coût du rationnement, en intégrant des variables économiques aux variables techniques. Traditionnellement, le risque augmente avec la racine carrée du temps, mais avec des séries de données présentant un comportement complexe, le taux de croissance est plus élevé. Une fois que les conditions de SOC sont déterminées, un modèle de flux de puissance statistique (SPFM) a été exécuté pour simuler le comportement du système et de ses variables.
Les simulations ont été comparées aux résultats du comportement réel du système de puissance électrique. Le flux de puissance DC est un modèle simplifié, qui représente le phénomène complexe de façon simple mais néglige certains aspects du fonctionnement du système pouvant se produire lors des blackouts. La représentation des défaillances en cascade et de l’évolution du réseau électrique dans un modèle simple permet l’analyse des relations temporelles dans l’exploitation des réseaux électriques, ainsi que de l’interaction entre la fiabilité à court terme et à long terme (avec amélioration du réseau). Cette méthodologie est axée sur la planification opérationnelle du lendemain (marché du jour d’avance), mais elle peut être appliquée à d’autres échelles de temps. Les résultats montrent un comportement complexe avec une loi de puissance et un indice de Hurst supérieur à 0,5. Les simulations basées sur notre modèle reproduisent le comportement réel du système. En utilisant la théorie de la complexité, les conditions SOC doivent être établies pour le marché du lendemain analysé. Ensuite, une simulation inverse est exécutée, où le point final de la simulation est la situation actuelle du système ; elle permet au système d’évoluer et de satisfaire les conditions requises par la caractéristique d’auto-organisation critique en un point de fonctionnement souhaité. Après avoir simulé le critère de fiabilité utilisé dans l’exploitation du système électrique pour les défaillances en cascade, les résultats sont validés par des défaillances historiques obtenues à partir du système électrique.
Ces résultats permettent l’identification des lignes ayant la plus grande probabilité de défaillance, de la séquence des événements associés, et des actions d’exploitation ou d’expansion qui, en simulation, peuvent réduire le risque de défaillance du réseau de transport. Les avantages attendus pour le réseau électrique sont une évaluation appropriée du risque du réseau, une augmentation de la fiabilité du système, et un progrès dans la planification du risque du lendemain et la connaissance de la situation. / The electricity industry is exposed to several risks: regulatory, adequacy, human error, etc. One of the most significant, because of its impact, is related to demand not supplied (DNS). To prevent cascading failures, particularly in reliability studies, deterministic criteria such as N-1 were applied, which avoids the initial failure event in the planning and operation of the system. In general, the analysis tools for these preventive actions are applied separately for the planning and for the operation of an electric power system. After a cascading failure, considerable effort must be spent analyzing the faults to minimize the possibility of a similar event. In spite of all these efforts, blackouts or large cascading failures still happen, although such events are rare thanks to the efforts of the industry. Indeed, obtaining a good prediction of a situation is challenging from the point of view of analysis and simulation, due to the large number of possible interactions and their diversity and complexity. In our work, a new methodology is proposed to estimate blackout risk using complex systems models. This approach is based on the use of variables that can be precursors of a DNS event.
In other terms, it is based on the study of the dependence or correlation between the variables involved in blackout risk, and on the self-organized criticality (SOC) property of complex systems. VaR is calculated using data from the Colombian system and the cost of rationing, in order to estimate the cost of a blackout by including economic variables alongside the technical ones. In addition, risk traditionally grows with the square root of time, but for data series with complex behavior the growth rate is higher. Once the SOC conditions are determined, a Statistical Power Flow Model (SPFM) was executed to simulate the behavior of the system and its variables. Simulation results were compared to the real operational behavior of the electrical power system. The DC power flow is a simplified model that represents the complex phenomenon in a simple way, but neglects some aspects of system operation that can occur during blackouts. Representing cascading failures and the evolution of the network in a simple model allows the analysis of temporal relations in the operation of electrical networks, as well as of the interaction between short-term and long-term reliability (with network improvements). This methodology is focused on the operational planning of the following day (day-ahead market), but it can be applied to other time scales. The results show complex behavior with a power law, and a Hurst index greater than 0.5. Simulations based on our model reproduce the real behavior of the system. To use complexity theory, the SOC conditions must be established for the analyzed day-ahead market.
Then an inverse simulation is executed, in which the endpoint of the simulation is the current situation of the system; it lets the system evolve and meet the self-organized criticality requisites at a desired operating point. After simulating the reliability criterion used in the operation of the electrical system for cascading failures, the results are validated against historical failures obtained from the electrical system. These results allow the identification of the lines with the highest probability of failure, the sequence of associated events, and which simulated operation or expansion actions can reduce the risk of failures of the transmission network. The expected benefits for the electrical network are an appropriate evaluation of network risk, an increase in system reliability (probabilistic analysis), and progress in day-ahead risk planning (holistic analysis) and situational awareness.
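The claim that the series exhibit a Hurst index greater than 0.5 (long-range dependence) can be checked with classic rescaled-range (R/S) analysis. The Python sketch below is a textbook R/S estimator under simplifying assumptions (dyadic window sizes, ordinary least-squares slope); it is not the thesis' own estimation procedure.

```python
import math
import random
import statistics

def hurst_rs(series, min_size=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis: the
    average R/S over windows of size n grows roughly as n**H, so H is the
    least-squares slope of log(R/S) against log(n).  H > 0.5 indicates
    long-range dependence."""
    n = len(series)
    xs, ys = [], []
    size = min_size
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            window = series[start:start + size]
            mean = sum(window) / size
            cum, s = [], 0.0
            for v in window:
                s += v - mean
                cum.append(s)
            r = max(cum) - min(cum)          # range of cumulative deviations
            sd = statistics.pstdev(window)
            if sd > 0:
                ratios.append(r / sd)
        if ratios:
            xs.append(math.log(size))
            ys.append(math.log(sum(ratios) / len(ratios)))
        size *= 2
    # least-squares slope of log(R/S) on log(n)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)
```

A strongly trending series yields H close to 1, while uncorrelated noise yields H near 0.5 (the plain R/S estimator is somewhat biased upward for short series).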
23

Group-EDF: A New Approach and an Efficient Non-Preemptive Algorithm for Soft Real-Time Systems

Li, Wenming 08 1900
Hard real-time systems in robotics, space and military missions, and control devices are specified with stringent and critical time constraints. On the other hand, soft real-time applications arising from multimedia, telecommunications, Internet web services, and games are specified with more lenient constraints. Real-time systems can also be distinguished, in terms of their implementation, into preemptive and non-preemptive systems. In preemptive systems, tasks are often preempted by higher priority tasks. Non-preemptive systems are gaining interest for implementing soft real-time applications on multithreaded platforms. In this dissertation, I propose a new algorithm that uses a two-level scheduling strategy for scheduling non-preemptive soft real-time tasks. Our goal is to improve the success ratios of the well-known earliest deadline first (EDF) approach when the load on the system is very high and to improve the overall performance in both underloaded and overloaded conditions. Our approach, known as group-EDF (gEDF), is based on dynamic grouping of tasks with deadlines that are very close to each other, and using a shortest job first (SJF) technique to schedule tasks within the group. I believe that grouping tasks dynamically with similar deadlines and utilizing secondary criteria, such as minimizing the total execution time, can lead to new and more efficient real-time scheduling algorithms. I present results comparing gEDF with other real-time algorithms, including EDF, best-effort, and a guarantee scheme, by using randomly generated tasks with varying execution times, release times, deadlines and tolerances to missing deadlines, under varying workloads. Furthermore, I implemented the gEDF algorithm in the Linux kernel and evaluated gEDF for scheduling real applications.
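The two-level strategy described above — EDF ordering first, then SJF within a group of near-equal deadlines — can be sketched in a few lines of Python. This is a minimal single-processor sketch under assumed task tuples `(release, exec_time, deadline)` and an assumed group-range parameter; the dissertation's own gEDF definition and evaluation are more general.

```python
def gedf(tasks, group_range=0.2):
    """Non-preemptive group-EDF sketch: among ready tasks, form a group of
    those whose absolute deadlines lie within `group_range` of the earliest
    deadline, and dispatch the shortest job in that group.  `tasks` are
    (release, exec_time, deadline) tuples; with group_range=0 this reduces
    to plain non-preemptive EDF."""
    time, done, missed = 0, [], []
    waiting = sorted(tasks)                 # by release time
    ready = []
    while waiting or ready:
        while waiting and waiting[0][0] <= time:
            ready.append(waiting.pop(0))
        if not ready:
            time = waiting[0][0]            # idle until the next release
            continue
        earliest = min(t[2] for t in ready)
        group = [t for t in ready if t[2] <= earliest * (1 + group_range)]
        job = min(group, key=lambda t: t[1])    # SJF within the group
        ready.remove(job)
        time += job[1]                      # non-preemptive: run to completion
        (done if time <= job[2] else missed).append(job)
    return done, missed
```

Running the short job first inside a deadline group finishes more jobs early without endangering the group's (similar) deadlines, which is the intuition behind gEDF's improved success ratio under overload.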
24

Reliability-Aware Thermal Management of Real-Time Multi-Core Systems

Xu, Shikang 18 March 2015
Continued scaling of CMOS technology has led to increasing working temperatures in VLSI circuits. High temperature brings a greater probability of permanent errors (failures) in VLSI circuits, which is a critical threat for real-time systems. As the multi-core architecture gains in popularity, this research proposes an adaptive workload assignment approach for multi-core real-time systems to balance thermal stress among cores. While previously developed scheduling algorithms use temperature as the criterion, the proposed algorithm uses the reliability of each core in the system to dynamically assign tasks to cores. The simulation results show that the proposed algorithm improves system reliability by as much as 10% compared with the commonly used static assignment, while algorithms using temperature as the criterion gain 4%. The reliability difference between cores, which indicates the imbalance of thermal stress across cores, is up to 25 times smaller when the proposed algorithm is applied.
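The idea of assigning work by per-core reliability rather than temperature can be sketched as a greedy loop in Python. The linear `wear` degradation model and the data shapes here are assumptions standing in for the thermal-stress/failure model of the thesis, not its actual algorithm.

```python
def reliability_aware_assign(task_loads, core_reliability, wear=0.01):
    """Sketch of reliability-driven assignment: each task goes to the core
    whose current reliability estimate is highest, and that core's estimate
    is then degraded in proportion to the added load (assumed linear wear
    model).  Heavier tasks are placed first."""
    cores = {c: {"reliability": r, "tasks": []}
             for c, r in core_reliability.items()}
    for load in sorted(task_loads, reverse=True):   # heaviest first
        best = max(cores, key=lambda c: cores[c]["reliability"])
        cores[best]["tasks"].append(load)
        cores[best]["reliability"] *= (1.0 - wear * load)
    return cores
```

Because every placement lowers the chosen core's reliability estimate, subsequent tasks naturally drift to the less stressed cores, evening out the wear — the balancing effect the abstract reports.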
25

An Analysis to Identify the Factors that Impact the Performance of Real-Time Software Systems: A Systematic Mapping Study and Case Study

Bejawada, Sravani January 2020
Background: Many organizations lack the time, resources, or experience to derive the myriad of input factors impacting performance. Instead, developers use a trial-and-error approach to analyze performance, which is difficult and time-consuming when working with complex systems. Many factors impact the performance of real-time software systems; this paper identifies the most important ones. Black-box (performance) testing focuses solely on the outputs generated in response to the supplied factors while neglecting the internal components of the software. Objectives: The objective of this research is to identify the most important factors that impact the performance of real-time software systems. Identifying these factors helps developers improve the performance of such systems. The context in which the objective is achieved is an online charging system, one of the software components of business support systems. In real-time systems, traffic changes within fractions of a second, so it is important to measure the performance of these systems. Latency is also one of the major factors impacting the performance of any real-time system. Additionally, another motivation for this research is to explore a few other major factors that impact performance. Methods: A systematic mapping study (SMS) and a case study were conducted to identify the important factors that impact the performance of real-time software systems. Both data collection methods, a survey and interviews, were designed and executed to collect qualitative data. The survey and interviews were conducted among 12 experienced experts with prior knowledge of system performance, to determine the most important factors that impact the performance of the online charging system.
The qualitative data collected from the case study were categorized using thematic analysis. The logs, i.e., quantitative data collected from industry, were analyzed using the random forest feature importance algorithm to identify the factors with the highest impact on the performance of the online charging system. Results: A systematic mapping study was conducted to review the existing literature; 22 factors were identified from 21 articles. 12 new factors, not previously identified in the literature study, were identified from the survey. From the available quantitative data, the factors were identified based on their performance impact on the system. Knowing these factors helps developers resolve performance issues, for example by allocating more virtual machines, thereby improving the performance of the system and making its behaviour better understood. All these results are based on the experts' opinions. Conclusions: This study identifies the most important factors that impact the performance of real-time software systems. The identified factors are mostly technical, such as CPU utilization, memory, and latency. The objectives are addressed by selecting suitable research methods.
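The feature-ranking step — measuring how much each logged factor contributes to predicted performance — can be illustrated with permutation importance, the idea underlying random forest feature importance. The pure-Python sketch below uses a hypothetical `predict` function and toy data; the thesis applied an actual random forest to industrial logs.

```python
import random

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Rank input factors by performance impact: shuffle one factor column
    at a time and measure how much the mean squared prediction error grows.
    A factor whose shuffling barely changes the error has little impact."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    base = mse(X)
    scores = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)            # break the factor/target link
            shuffled = [row[:j] + [v] + row[j + 1:]
                        for row, v in zip(X, col)]
            increases.append(mse(shuffled) - base)
        scores.append(sum(increases) / n_repeats)
    return scores
```

A factor the model never uses scores exactly zero, while the dominant factor's score grows with its influence — which is how a ranked list like "CPU utilization, memory, latency" can be derived from logs.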
26

Minimising shared resource contention when scheduling real-time applications on multi-core architectures / Minimiser l’impact des communications lors de l’ordonnancement d’application temps-réels sur des architectures multi-cœurs

Rouxel, Benjamin 19 December 2018
Les architectures multi-cœurs utilisant des mémoires bloc-notes sont des architectures attrayantes pour l'exécution des applications embarquées temps-réel, car elles offrent une grande capacité de calcul. Cependant, les systèmes temps-réel nécessitent de satisfaire des contraintes temporelles, ce qui peut être compliqué sur ce type d'architectures, à cause notamment des ressources matérielles physiquement partagées entre les cœurs. Plus précisément, les scénarios de pire cas de partage du bus de communication entre les cœurs et la mémoire externe sont trop pessimistes. Cette thèse propose des stratégies pour réduire ce pessimisme lors de l'ordonnancement d'applications sur des architectures multi-cœurs. Tout d'abord, la précision du pire cas des coûts de communication est accrue grâce aux informations disponibles sur l'application et l'état de l'ordonnancement en cours. Ensuite, les capacités de parallélisation du matériel sont exploitées afin de superposer les calculs et les communications. De plus, les possibilités de superposition sont accrues par le morcellement de ces communications. / Multi-core architectures using scratchpad memories are very attractive for executing embedded time-critical applications, because they offer a large computational power. However, ensuring that timing constraints are met on such platforms is challenging, because some hardware resources are shared between cores. When targeting the bus connecting cores and external memory, worst-case sharing scenarios are too pessimistic. This thesis proposes strategies to reduce this pessimism. These strategies both improve the accuracy of worst-case communication costs and exploit the hardware's parallel capabilities by overlapping computations and communications. Moreover, fragmenting the communications increases the overlapping opportunities.
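The payoff of overlapping communications with computations, and of fragmenting transfers, can be shown with a toy timeline model in Python. This is a deliberately simplified double-buffering view (one core, constant per-chunk costs), not the thesis' worst-case analysis.

```python
def makespan(n_chunks, comm, comp, overlap):
    """Toy timeline for one core fetching and processing n_chunks pieces of
    data: `comm` is the transfer time per chunk, `comp` the computation
    time.  Without overlap the core stalls on every transfer; with
    DMA-style double buffering the next chunk is prefetched during
    computation, so only the first transfer stays exposed."""
    if not overlap:
        return n_chunks * (comm + comp)
    return comm + (n_chunks - 1) * max(comm, comp) + comp
```

Splitting the same workload into more, smaller chunks shrinks the one exposed initial transfer, which is the intuition behind fragmenting communications to increase overlap.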
27

Enriching Enea OSE for Better Predictability Support

Ul Mustafa, Naveed January 2011
A real-time application is designed as a set of tasks with specific timing attributes and constraints. These tasks can be categorized as periodic, sporadic or aperiodic, based on the timing attributes specified for them, which in turn define their runtime behavior. To ensure correct execution and behavior of the task set at runtime, the scheduler of the underlying operating system should take into account the type of each task (i.e., periodic, sporadic, or aperiodic). This is important so that the scheduler can schedule the task set in a predictable way and allocate CPU time to each task appropriately in order to meet its timing constraints. Enea OSE is a real-time operating system with a fixed-priority preemptive scheduling policy, used heavily in embedded systems such as the telecommunication systems developed by Ericsson. While OSE allows the specification of priority levels for tasks and schedules them accordingly, it cannot distinguish between different types of tasks. This thesis investigates mechanisms to build a scheduler on top of OSE which can identify the three types of real-time tasks and schedule them in a more predictable way. The scheduler can also monitor the behavior of the task set at runtime and invoke violation handlers if the time constraints of a task are violated. The scheduler is implemented on the OSE 5.5 soft kernel. It identifies periodic, aperiodic and sporadic tasks. Sporadic and aperiodic tasks can be interrupt-driven or program-driven. The scheduler implements EDF and RMS as scheduling policies for periodic tasks. Sporadic and aperiodic tasks can be scheduled using a polling server or a background scheme. Schedules generated by the scheduler deviate from the expected timing behavior due to scheduling overhead. Approaches to reduce this deviation are suggested as future extensions of the thesis work.
The usability of the scheduler could be further increased by extending it to support other scheduling algorithms in addition to RMS and EDF. / CHESS
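For the two periodic policies the scheduler implements, the classic utilization tests give a quick schedulability check. The Python sketch below states the standard Liu & Layland sufficient bound for RMS and the exact utilization bound for EDF (independent tasks, deadlines equal to periods); it is background material, not code from the thesis.

```python
def schedulability(tasks):
    """Utilization tests for a periodic task set of (exec_time, period)
    pairs: the set passes the Liu & Layland sufficient test for RMS if
    U <= n * (2**(1/n) - 1), and is schedulable under EDF iff U <= 1."""
    n = len(tasks)
    u = sum(c / p for c, p in tasks)
    rms_bound = n * (2 ** (1.0 / n) - 1)
    return u, u <= rms_bound, u <= 1.0
```

Note the RMS test is only sufficient: a set that fails it (e.g. with utilization exactly 1) may still be feasible under RMS, but is always accepted by EDF.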
28

A Study Of Genetic Representation Schemes For Scheduling Soft Real-Time Systems

Bugde, Amit 13 May 2006
This research presents a hybrid algorithm that combines List Scheduling (LS) with a Genetic Algorithm (GA) for constructing non-preemptive schedules for soft real-time parallel applications represented as directed acyclic graphs (DAGs). The execution time requirements of the applications' tasks are assumed to be stochastic and are represented as probability distribution functions. The performance, in terms of schedule length, of three different genetic representation schemes is evaluated and compared for a number of different DAGs. The approaches presented in this research produce shorter schedules than HLFET, a popular LS approach, for all of the sample problems. Of the three genetic representation schemes investigated, PosCT, the technique that allows the GA to learn which tasks to delay in order to let other tasks complete, produced the shortest schedules for a majority of the sample DAGs.
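The coupling between the GA and list scheduling — a chromosome encodes task priorities, and list scheduling decodes it into a schedule whose makespan serves as fitness — can be sketched in Python. This simplified decoder uses scalar execution times and a hypothetical greedy dispatch; the thesis represents execution times as probability distributions and evaluates several representation schemes.

```python
def decode_makespan(chromosome, exec_time, preds, n_procs=2):
    """Decode one GA chromosome -- a task-priority permutation -- into a
    non-preemptive list schedule on n_procs processors: repeatedly start
    the highest-priority ready task on the earliest-free processor,
    respecting DAG precedence constraints.  Returns the makespan."""
    rank = {t: i for i, t in enumerate(chromosome)}
    free = [0.0] * n_procs              # next free instant per processor
    finish = {}
    todo = set(exec_time)
    while todo:
        ready = [t for t in todo
                 if all(p in finish for p in preds.get(t, ()))]
        task = min(ready, key=lambda t: rank[t])
        proc = min(range(n_procs), key=lambda i: free[i])
        start = max(free[proc],
                    max((finish[p] for p in preds.get(task, ())), default=0.0))
        finish[task] = start + exec_time[task]
        free[proc] = finish[task]
        todo.remove(task)
    return max(finish.values())
```

A GA then evolves the permutation to minimize this makespan, which is how the hybrid LS+GA search explores schedules HLFET's fixed heuristic cannot reach.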
29

Synchronized Communication Network for Real-Time Distributed Control Systems in Modular Power Converters

Rong, Yu 08 November 2022
Emerging large-scale modular power converters are pursuing high-performance distributed control systems. As opposed to the centralized control architecture, the distributed control architecture features shared computational burdens, pulse-width modulation (PWM) latency compensation, simplified fiber-optic cable connection, redundant data routes, and greatly enhanced local control capabilities. Modular multilevel converters (MMCs) with conventional control are subject to large capacitor voltage ripples, especially at low line frequencies. It is proved that with appropriate arm current shaping on the timescale of a switching period, referred to as switching-cycle control (SCC), such line-frequency dependence can be eliminated and MMCs are enabled to work even in dc-dc mode. Yet the SCC demands multiple arm current alternations in one switching period. To achieve the high-bandwidth current regulation, a hybrid modulation approach incorporating both carrier-based modulation and peak-current-mode (PCM) modulation is adopted. The combined digital and analog control and the extremely time-sensitive nature together pose great challenges for practical implementation that existing distributed control systems cannot cope with. This dissertation aims to develop an optimized distributed control system for SCC implementation. The critical analog PCM modulation is enabled by the intelligent gate driver with an integrated Rogowski coil and field-programmable gate array (FPGA). A novel distributed control architecture is proposed accordingly for SCC applications, where the hybrid modulation function is shifted to the gate driver. The proposed distributed control solution is verified in SCC-based converter operations. Accompanied by the growing availability of medium-voltage silicon carbide (SiC) devices, fast-switching-enabled novel control schemes raise a high synchronization requirement for the communication network.
Power electronics system network (PESNet) 3.0 is a proposed next-generation communication network designed and optimized for a distributed control system. This dissertation presents the development of PESNet 3.0 with a sub-nanosecond synchronization error (SE) and a gigabits-per-second data rate dedicated for large-scale high-frequency modular power converters. The White Rabbit Network technology, originally developed for the Large Hadron Collider accelerator chain at the European Organization for Nuclear Research (CERN), has been embedded in PESNet 3.0 and tailored to be suited for distributed power conversion systems. A simplified inter-node phase-locked loop (N2N-PLL) has been developed. Subsequently, stability analysis of the N2N-PLL is carried out with closed-loop transfer function measurement using a digital perturbation injection method. The experimental validation of the PESNet 3.0 is demonstrated at the controller level and converter level, respectively. The latter is on a 10 kV SiC-MOSFET-based modular converter prototype, verifying ±0.5 ns SE at 5 Gbps data rate for a new control scheme. The communication network has an impact on the converter control and operation. The synchronicity of the controllers has an influence on the converter harmonics and safe operation. A large synchronization error can lead to the malfunction of the converter operation. The communication latency poses a challenge to the converter control frequency and bandwidth. With the increased scale of the modular converter and control frequency, the low-latency requirement of communication network becomes more stringent. / Doctor of Philosophy / Emerging silicon carbide (SiC) power devices with 10 times higher switching frequencies than conventional Si devices have enabled high-frequency high-density medium-voltage converters. In the meantime, the power electronics building block (PEBB) concept has continually benefited the manufacturing and maintenance of modular power converters. 
This philosophy can be further extended from power stages to control systems, and the latter become more distributed with greatly enhanced local control capabilities. In the distributed control and communication system, each PEBB is equipped with a digital controller. In this dissertation, a real-time distributed control architecture is designed to take advantage of the powerful processing capability of all digital control units, achieving a minimized digital delay for the control system. In addition, pulse-width modulation (PWM) signals are modulated in each PEBB controller based on its own clock. Due to the uncontrollable latency among different PEBB controllers, synchronicity becomes a critical issue. It is necessary to ensure synchronous operation to follow the desired modulation scheme. This dissertation presents a synchronized communication network design with sub-ns synchronization error and a gigabits-per-second data rate. Finally, the impact of the communication network on converter operation is analyzed in terms of synchronicity, communication latency, and fault redundancy.
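The clock-discipline idea behind such synchronization schemes — a phase-locked servo that measures the offset to the master each round and corrects the local frequency — can be illustrated with a toy proportional-integral loop in Python. The gains and the single-drift model are illustrative assumptions; this is not the N2N-PLL design of PESNet 3.0.

```python
def pi_clock_servo(drift, rounds=50, kp=0.6, ki=0.2):
    """Toy PI servo pulling a local clock onto the master's: each round the
    measured offset drives a frequency correction whose integral term
    learns the constant frequency drift, so the residual offset decays to
    zero.  Returns the offset history round by round."""
    offset, freq_corr = 0.0, 0.0
    history = []
    for _ in range(rounds):
        offset += drift - freq_corr     # uncorrected drift accumulates
        freq_corr += ki * offset        # integral term adapts the frequency
        offset -= kp * offset           # proportional step shrinks the offset
        history.append(offset)
    return history
```

With these gains the loop is stable (the closed-loop poles lie inside the unit circle), so after a transient the integral term equals the drift and the offset converges to zero — the "lock" that, in hardware, White-Rabbit-style networks achieve with sub-nanosecond residuals.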
30

MR-guided thermotherapies of mobile organs : advances in real time correction of motion and MR-thermometry / Thermothérapies guidées par IRM sur organes mobiles : avancées sur la correction en temps réel du mouvement et de la thermométrie

Roujol, Sébastien 25 May 2011
L'ablation des tissus par hyperthermie locale guidée par IRM est une technique prometteuse pour le traitement du cancer et des arythmies cardiaques. L'IRM permet d'extraire en temps réel des informations anatomiques et thermiques des tissus. Cette thèse a pour objectif d'améliorer et d'étendre la méthodologie existante pour des interventions sur des organes mobiles comme le rein, le foie et le coeur. La première partie a été consacrée à l'introduction de l'imagerie rapide (jusqu'à 10-15 Hz) pour le guidage de l'intervention par IRM en temps réel. L'utilisation de cartes graphiques (GPGPU) a permis une accélération des calculs afin de satisfaire la contrainte de temps réel. Une précision, de l'ordre de 1°C dans les organes abdominaux et de 2-3°C dans le coeur, a été obtenue. Basé sur ces avancées, de nouveaux développements méthodologiques ont été proposés dans une seconde partie de cette thèse. L'estimation du mouvement basée sur une approche variationnelle a été améliorée pour gérer la présence de structures non persistantes et de fortes variations d'intensité dans les images. Un critère pour évaluer la qualité du mouvement estimé a été proposé et utilisé pour auto-calibrer notre algorithme d'estimation du mouvement. La méthode de correction des artefacts de thermométrie liés au mouvement, jusqu'ici restreinte aux mouvements périodiques, a été étendue à la gestion de mouvements spontanés. Enfin, un nouveau filtre temporel a été développé pour la réduction du bruit sur les cartographies de température. La procédure interventionnelle apparaît maintenant suffisamment mature pour le traitement des organes abdominaux et pour le transfert vers la clinique. Concernant le traitement des arythmies cardiaques, les méthodes ont été évaluées sur des sujets sains et dans le ventricule gauche. Par conséquent, la faisabilité de l'intervention dans les oreillettes mais aussi en présence d'arythmie devra être abordée. 
/ MR-guided thermal ablation is a promising technique for the treatment of cancer and atrial fibrillation. MRI provides both anatomical and temperature information. The objective of this thesis is to extend and improve existing techniques for such interventions in mobile organs such as the kidney, the liver and the heart. The first part of this work focuses on the use of fast MRI (up to 10-15 Hz) for guiding the intervention in real time. This study demonstrated the potential of GPGPU programming as a solution to guarantee the real-time condition for both MR reconstruction and MR thermometry. A precision in the range of 1°C in abdominal organs and 2-3°C in the heart was obtained. Based on these advances, new methodological developments were carried out in the second part of this thesis. New variational approaches are proposed to address the problem of motion estimation in the presence of transient structures and high intensity variations in the images. A novel quality criterion to assess the estimated motion is proposed and used to auto-calibrate our motion estimation algorithm. The correction of motion-related magnetic susceptibility variations was extended to treat the special case of spontaneous motion. Finally, a novel temporal filter is proposed to reduce the noise of MR thermometry measurements while controlling the bias introduced by the filtering process. In conclusion, the main obstacles to MR-guided HIFU ablation of abdominal organs have been addressed in in-vivo and ex-vivo studies, so clinical studies can now be undertaken. However, although promising results have been obtained for MR-guided RF ablation in the heart, its feasibility in the atrium and in the presence of arrhythmia remains to be investigated.
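The temporal filtering of temperature maps can be illustrated with plain exponential smoothing in Python. This is only a minimal stand-in for the thesis' filter, which additionally controls the bias that filtering introduces into the temperature estimates.

```python
def ema_filter(samples, alpha=0.2):
    """Exponential smoothing of a 1-D temperature time series: each output
    mixes the new sample with the previous estimate, damping measurement
    noise at the cost of some lag.  Smaller alpha means stronger smoothing
    (steady-state gain for alternating noise is alpha / (2 - alpha))."""
    smoothed, estimate = [], samples[0]
    for x in samples:
        estimate = alpha * x + (1.0 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed
```

The lag introduced by the smoothing is exactly the bias problem the thesis' filter is designed to control: a plain EMA trades noise for a delayed (biased) response to genuine temperature changes.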
