1.
Examining the Impact of a Video Review Guide on Robotic Surgical Skill Improvement
Soliman, Mary Mansour, 01 January 2024 (has links) (PDF)
Surgical education has the arduous task of providing effective and efficient methods of surgical skill acquisition and clinical judgment while staying abreast of the latest surgical technologies in an ever-changing field. Robotic surgery is one such technology. Many surgeons in practice today were either never taught or were not effectively taught robotic surgery during training, leaving them to navigate the robotic learning curve and reach mastery independently. This dissertation examines the impact of a video review guide on improving robotic surgical skills. Using Kolb's Experiential Learning Theory as a framework, the literature review argues that video review can serve as a catalyst for reflection, which can deepen learning and improve self-assessment. Reflection, however, is not an innate skill; it must be explicitly taught or guided. The researcher argues that a written video review guide can help novice surgeons develop reflective practice, resulting in improved surgical skills and a shorter robotic learning curve. A between-group, quasi-randomized experiment was conducted to test this theory. The participants performed a pre-test technical simulation, conducted an independent video review, and then repeated the same simulation as a post-test. During the video review, the intervention group received a surgical video review guide created by the researcher using Gibbs' Reflective Cycle and additional evidence-based strategies. The participants also completed an exit survey measuring the perceived usefulness of video review guides. Data analysis found that both groups significantly improved their surgical skills overall; however, there was no statistically significant difference between the two groups. The participants perceived both the surgical video review guide and video review guides in general as useful. Implications for practice and recommendations for future research were discussed.
This research underscores the potential of reflective guides as a low-cost and independent method to develop reflective practitioners further and improve surgical practice.
2.
Arcabouço para análise de eventos em vídeos (Framework for analyzing events in videos)
SILVA, Adson Diego Dionisio da, 07 May 2018 (has links)
Previous issue date: 2015-08-31

Automatic recognition of events of interest in videos, involving sets of actions or interactions between objects, can add value to surveillance systems, smart-city applications, the monitoring of people with physical or mental disabilities, and other areas. However, designing a framework that can be adapted to diverse situations without requiring an expert in the underlying technologies remains a challenge for the field. In this context, this research is based on the creation of a generic, rule-based framework for event detection in video. To create rules, users form logical expressions in first-order logic (FOL) and relate terms using Allen's interval algebra, thereby adding a temporal context to the rules. Because it is a framework, it is extensible and can receive additional modules that perform new detections and inferences. An experimental evaluation was performed using test videos available on YouTube, involving a traffic scenario with red-light-running events, and videos from a live camera on the Camerite website, containing car-parking events. The focus of the work was not to create object detectors (e.g., for cars or people) better than those in the state of the art, but to propose and develop a generic, reusable framework that integrates different computer vision techniques. Event-detection accuracy was in the range of 83.82% to 90.08% with 95% confidence. Maximum accuracy (100%) was obtained when the object detectors were replaced with manually assigned labels, which indicated the effectiveness of the inference engine developed for the framework.
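To illustrate the kind of temporal rule the abstract describes, the sketch below shows a few of Allen's interval relations applied to detected event intervals. This is a hypothetical minimal example, not the thesis's actual framework: the interval values, the `red_light` and `car_in_intersection` names, and the violation rule are all invented for illustration.

```python
# Hypothetical sketch: temporal rules over detected event intervals
# using three of Allen's thirteen interval relations.
# Intervals are (start, end) pairs in seconds, e.g. from object detectors.

def before(a, b):
    """Allen's 'before': a ends strictly before b starts."""
    return a[1] < b[0]

def overlaps(a, b):
    """Allen's 'overlaps': a starts first and ends inside b."""
    return a[0] < b[0] < a[1] < b[1]

def during(a, b):
    """Allen's 'during': a lies strictly inside b."""
    return b[0] < a[0] and a[1] < b[1]

# Invented example intervals for a traffic scenario.
red_light = (10.0, 40.0)            # red phase of the traffic light
car_in_intersection = (15.0, 22.0)  # car detected inside the intersection

# Example rule: a red-light violation occurs when the car is in the
# intersection during the red phase.
violation = during(car_in_intersection, red_light)
print(violation)  # True
```

A rule engine like the one the abstract describes would let users compose such relations with first-order-logic connectives (and, or, not, quantifiers) rather than hard-coding them as above.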