1. Enhancing Surgical Gesture Recognition Using Bidirectional LSTM and Evolutionary Computation: A Machine Learning Approach to Improving Robotic-Assisted Surgery / BiLSTM and Evolutionary Computation for Surgical Gesture Recognition
Zhang, Yifei, January 2024
The integration of artificial intelligence (AI) and machine learning in the medical field has led to significant advancements in surgical robotics, particularly in enhancing the precision and efficiency of surgical procedures. This thesis investigates the application of a single-layer bidirectional Long Short-Term Memory (BiLSTM) model to the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset, aiming to improve the recognition and classification of surgical gestures. The BiLSTM model, with its capability to process data in both forward and backward directions, offers a comprehensive analysis of temporal sequences, capturing intricate patterns within surgical motion data. This research explores the potential of BiLSTM models to outperform traditional unidirectional models in the context of robotic surgery.
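As a rough illustration of the architecture described above, the sketch below shows a single-layer BiLSTM that maps per-frame instrument kinematics to per-frame gesture logits in PyTorch. The feature count (assumed here to match the 76-variable JIGSAWS kinematic format), hidden size, and number of gesture classes are illustrative assumptions, not the configuration reported in the thesis.

```python
# Minimal sketch of a single-layer BiLSTM gesture classifier (PyTorch).
# Dimensions (76 kinematic features, 10 gesture classes, hidden size 128)
# are illustrative assumptions, not the thesis's exact configuration.
import torch
import torch.nn as nn

class BiLSTMGestureClassifier(nn.Module):
    def __init__(self, n_features=76, hidden_size=128, n_classes=10):
        super().__init__()
        # bidirectional=True runs one LSTM forward and one backward over the sequence
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            num_layers=1, batch_first=True, bidirectional=True)
        # forward and backward hidden states are concatenated -> 2 * hidden_size
        self.classifier = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) kinematic sequence
        outputs, _ = self.lstm(x)          # (batch, time, 2 * hidden_size)
        return self.classifier(outputs)    # per-frame gesture logits

model = BiLSTMGestureClassifier()
dummy = torch.randn(4, 200, 76)            # 4 sequences of 200 frames
logits = model(dummy)                      # shape: (4, 200, 10)
```

Per-frame logits of this kind can be trained with a standard cross-entropy loss against frame-level gesture annotations.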
In addition to the core model development, this study employs evolutionary computation techniques for hyperparameter tuning, systematically searching for optimal configurations to enhance model performance. The evaluation metrics include training and validation loss, accuracy, confusion matrices, prediction time, and model size. The results demonstrate that the BiLSTM model with evolutionary hyperparameter tuning achieves superior performance in recognizing surgical gestures compared to standard LSTM models.
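The evolutionary tuning step can be pictured as a small genetic loop over candidate hyperparameter sets, selected by validation accuracy. The sketch below is a generic, hand-rolled version under assumed search-space values and GA settings; the thesis's exact evolutionary algorithm, search space, and fitness definition may differ, and `train_and_validate_bilstm` is a hypothetical stand-in for the training routine.

```python
# Illustrative sketch of evolutionary hyperparameter search.
# The search space, fitness function, and GA settings below are assumptions
# for illustration only; the thesis's exact evolutionary setup may differ.
import random

SEARCH_SPACE = {
    "hidden_size":   [64, 128, 256],
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "dropout":       [0.0, 0.2, 0.4],
    "batch_size":    [16, 32, 64],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(ind, rate=0.3):
    # randomly resample some hyperparameters of a copy of the individual
    child = dict(ind)
    for k, values in SEARCH_SPACE.items():
        if random.random() < rate:
            child[k] = random.choice(values)
    return child

def crossover(a, b):
    # pick each hyperparameter from one of the two parents
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def evolve(fitness, pop_size=10, generations=5, elite=3):
    """fitness(hyperparams) -> validation accuracy; higher is better."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:elite]                       # keep the best configs
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

# best = evolve(lambda hp: train_and_validate_bilstm(hp))  # hypothetical trainer
```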
The findings of this thesis contribute to the broader field of surgical robotics and human-AI partnership by providing a robust method for accurate gesture recognition, which is crucial for assessing and training surgeons and advancing automated and assistive technologies in surgical procedures. The improved model performance underscores the importance of sophisticated hyperparameter optimization in developing high-performing deep learning models for complex sequential data analysis.
Thesis / Master of Applied Science (MASc)
Advancements in artificial intelligence (AI) are transforming medicine, particularly in robotic surgery. This thesis focuses on improving how robots recognize and classify surgeons' movements during operations. Using a special AI model called a bidirectional Long Short-Term Memory (BiLSTM) network, which looks at data both forwards and backwards, the study aims to better understand and predict surgical gestures.
By applying this model to a dataset of surgical tasks, specifically suturing, and optimizing its settings with advanced techniques, the research shows significant improvements in accuracy and efficiency over traditional methods. The enhanced model is not only more accurate but also smaller and faster.
These improvements can help train surgeons more effectively and advance robotic assistance in surgeries, leading to safer and more precise operations, ultimately benefiting both surgeons and patients.
2. Analyse, reconnaissance et réalisation des gestes pour l'entraînement en chirurgie laparoscopique robotisée / Gesture analysis, recognition and execution for surgical robotic training
Despinoy, Fabien, 14 December 2015
The integration of robotic systems into the operating room has changed the way certain procedures are performed, giving rise to practices that improve the medical benefit delivered to the patient even at the expense of conventional approaches. In this context, recent studies by the Haute Autorité de Santé (the French national health authority) have highlighted serious adverse events occurring during robotic surgical procedures. These errors, mostly attributable to the practitioner's technical skills, call into question current training and teaching methods for robotic surgery. Although simulators are widely used to support this learning through different types of training tasks, the feedback provided to the operator remains limited and does not help the surgeon understand gestural mistakes or progress under good conditions. We therefore aim to improve training conditions for robot-assisted laparoscopic surgery. The objectives of this thesis are twofold. First, we develop a method for segmenting and recognizing surgical gestures during training sessions based on an unsupervised approach. Using the kinematic data of the surgical instruments, we are able to recognize the gestures performed by the operator with 82% accuracy. This method is a first step toward skill assessment based on individual gestures rather than on the execution of the training task as a whole, as is done today.
The second objective is to make robotic surgical training more accessible and less expensive. To this end, we studied a new contactless human-machine interface for controlling surgical robots. In this work, the interface was coupled to the Raven-II, a teleoperation robot dedicated to surgical robotics research. We then evaluated the performance of the system through several studies, concluding that a surgical robot can be teleoperated with this type of device. It is therefore feasible to use such a contactless interface for simulator-based training, reducing training costs while improving access to, and acquisition of, the technical skills specific to robotic surgery.
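The thesis's specific unsupervised segmentation method is not detailed in this abstract. Purely as a generic illustration of the idea of discovering gesture-like segments from instrument kinematics without labels, the sketch below clusters sliding-window summaries of kinematic data with k-means; the window size, feature choices, and gesture count are all assumptions, and this is not the method developed in the thesis.

```python
# Generic illustration of unsupervised gesture segmentation from instrument
# kinematics (sliding-window features + k-means). This is NOT the specific
# method developed in the thesis; it only illustrates clustering kinematic
# data into gesture-like segments without labels.
import numpy as np
from sklearn.cluster import KMeans

def window_features(kinematics, window=30, step=10):
    """kinematics: (n_frames, n_vars) array of instrument poses/velocities."""
    feats = []
    for start in range(0, len(kinematics) - window + 1, step):
        w = kinematics[start:start + window]
        # simple per-window summary: mean and standard deviation of each variable
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

def segment_gestures(kinematics, n_gestures=10):
    feats = window_features(kinematics)
    labels = KMeans(n_clusters=n_gestures, n_init=10).fit_predict(feats)
    return labels  # one cluster id (candidate gesture) per window

# Example with synthetic data standing in for recorded instrument kinematics
trajectory = np.random.randn(600, 14)        # 600 frames, 14 kinematic variables
print(segment_gestures(trajectory)[:20])
```

In an illustration like this, the window-level cluster assignments would still need to be matched to a gesture vocabulary (for example, by comparison against a small labelled subset) before a recognition figure such as the 82% reported above could be computed.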