271 |
Semi-Automatic Segmentation of Normal Female Pelvic Floor Structures from Magnetic Resonance Images
Li, Xiaolong, 11 February 2010 (has links)
No description available.
|
272 |
SUBURBAN LIFESTYLES
NOVOSEL, BENJAMIN RYAN, 07 July 2003 (has links)
No description available.
|
273 |
Eigenimage-based Robust Image Segmentation Using Level Sets
Macenko, Marc D., 16 October 2006 (has links)
No description available.
|
274 |
Segmentation and clustering in neural networks for image recognition
Jan, Ying-Wei, January 1994 (has links)
No description available.
|
275 |
Compression and segmentation of three-dimensional echocardiography
Hang, Xiyi, 13 August 2004 (has links)
No description available.
|
276 |
THE USE OF ARTIFICIAL INTELLIGENCE FOR THE DEVELOPMENT AND VALIDATION OF A COMPUTER-AIDED ALGORITHM FOR THE SEGMENTATION OF LYMPH NODE FEATURES FROM THORACIC IMAGING
Churchill, Isabella, January 2020 (has links)
Background: Mediastinal staging is the rate-limiting step prior to initiation of lung cancer treatment and is essential for identifying the most appropriate treatment for the patient. However, this process is often complex, involving multiple imaging modalities, both invasive and non-invasive, for the assessment of mediastinal lymph nodes, all of which are error prone. Artificial Intelligence may provide more accurate and precise measurements and reduce the error associated with interpreting medical imaging.
Methods: This thesis was conducted in three parts. In Part 1, we synthesized and critically appraised the methodological quality of existing studies that use Artificial Intelligence to diagnose and stage lung cancer from thoracic imaging based on lymph node features. In Part 2, we determined the inter-rater reliability of ultrasonographic lymph node feature segmentation performed manually by an experienced endoscopist and automatically by NeuralSeg. In Part 3, we developed and validated a deep neural network through a clinical prediction model to determine whether NeuralSeg could learn and identify ultrasonographic lymph node features from endobronchial ultrasound images in patients undergoing lung cancer staging.
Results: In Part 1, few studies in the Artificial Intelligence literature provided a complete and detailed description of the study design, Artificial Intelligence architecture, validation strategies, and performance measures. In Part 2, NeuralSeg and the experienced endosonographer showed excellent inter-rater agreement (Intraclass Correlation Coefficient = 0.76, 95% CI: 0.70 to 0.80, p < 0.0001). In Part 3, NeuralSeg's algorithm had an accuracy of 73.78% (95% CI: 68.40% to 78.68%), a sensitivity of 18.37% (95% CI: 8.76% to 32.02%), and a specificity of 84.34% (95% CI: 79.22% to 88.62%).
Conclusions: Analysis of lung cancer staging modalities using Artificial Intelligence may be useful when results are inconclusive or uninterpretable by a human reader. NeuralSeg's high specificity may inform decision-making regarding biopsy when results are benign. Prospective external validation of algorithms and direct comparisons through cut-off thresholds are required to determine their true predictive capability. Future work with a larger dataset will be required to improve and refine the algorithm prior to trials in clinical practice. / Thesis / Master of Science (MSc) / Before deciding on treatment for patients with lung cancer, a critical step in the investigation is finding out whether the lymph nodes in the chest contain cancer cells. This is accomplished through medical imaging of the lymph nodes or by taking a biopsy of lymph node tissue with a needle, attached to a scope, that is passed through the airway wall. The purpose of these tests is to ensure that lung cancer patients receive the optimal treatment option. However, imaging of the lymph nodes relies heavily on human interpretation, which can be error prone. We aimed to critically analyze and investigate the use of Artificial Intelligence to enhance clinician performance in image interpretation. We searched the medical literature for uses of Artificial Intelligence to diagnose lung cancer from medical imaging. We also taught a computer program, known as NeuralSeg, to learn and identify cancerous lymph nodes from ultrasound imaging. This thesis provides a significant contribution to the Artificial Intelligence literature and provides recommendations for future research.
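The accuracy, sensitivity, and specificity figures in the results above all follow directly from a binary confusion matrix. The sketch below shows the arithmetic; the counts are hypothetical and do not come from the thesis:

```python
# Sensitivity, specificity and accuracy from a binary confusion matrix.
# The counts below are hypothetical and chosen only to illustrate the
# formulas behind the metrics quoted in the abstract.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for a malignant-vs-benign lymph node classifier:
# few malignant nodes are caught (low sensitivity), but benign calls
# are mostly correct (high specificity).
tp, fn, tn, fp = 9, 40, 221, 41

print(f"sensitivity = {sensitivity(tp, fn):.2%}")
print(f"specificity = {specificity(tn, fp):.2%}")
print(f"accuracy    = {accuracy(tp, tn, fp, fn):.2%}")
```

This pattern of low sensitivity with high specificity is exactly what makes the conclusion's point: a benign call is fairly reliable, while a negative screen cannot rule out disease.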
|
277 |
Advancing Chart Question Answering with Robust Chart Component Recognition
Zheng, Hanwen, 13 August 2024 (has links)
The task of comprehending charts [1, 2, 3] presents significant challenges for machine learning models due to the diverse and intricate shapes of charts. The chart extraction task ensures the precise identification of key components, while the chart question answering (ChartQA) task integrates visual and textual information, facilitating accurate responses to queries based on the chart's content. To approach ChartQA, this research focuses on two main aspects. Firstly, we introduce ChartFormer, an integrated framework that simultaneously identifies and classifies every chart element. ChartFormer extends beyond traditional data visualization by identifying descriptive components such as the chart title, legend, and axes, providing a comprehensive understanding of the chart's content. ChartFormer is particularly effective for complex instance segmentation tasks that involve a wide variety of class objects with unique visual structures. It utilizes an end-to-end transformer architecture, which enhances its ability to handle the intricacies of diverse and distinct object features. Secondly, we present Question-guided Deformable Co-Attention (QDCAt), which facilitates multimodal fusion by incorporating question information into a deformable offset network and enhancing visual representation from ChartFormer through a deformable co-attention block. / Master of Science / Real-world data often encompasses multimodal information, blending textual descriptions with visual representations. Charts, in particular, pose a significant challenge for machine learning models due to their condensed and complex structure. Existing multimodal methods often neglect these graphics, failing to integrate them effectively. To address this gap, we introduce ChartFormer, a unified framework designed to enhance chart understanding through instance segmentation, and a novel Question-guided Deformable Co-Attention (QDCAt) mechanism. 
This approach seamlessly integrates visual and textual features for chart question answering (ChartQA), allowing for more comprehensive reasoning. ChartFormer excels at identifying and classifying chart components such as bars, lines, pies, titles, legends, and axes. The QDCAt mechanism further enhances multimodal fusion by aligning textual information with visual cues, thereby improving answer accuracy. By dynamically adjusting attention based on the question context, QDCAt ensures that the model focuses on the most relevant parts of the chart. Extensive experiments demonstrate that ChartFormer and QDChart significantly outperform their baseline models in chart component recognition and ChartQA tasks by 3.2% in mAP and 15.4% in accuracy, respectively, providing a robust solution for detailed visual data interpretation across various applications.
These results highlight the efficacy of the approach and make it applicable to a wide range of domains, from scientific research to financial analysis and beyond.
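As a rough illustration of the multimodal fusion step described above, the sketch below implements a generic single-head cross-attention in which question tokens attend over chart-region features. It is a simplification under assumed shapes and random features, not the thesis's actual deformable co-attention block (which additionally learns sampling offsets for the visual features):

```python
import numpy as np

# Minimal single-head cross-attention sketch: question tokens pool
# chart-region features. Generic illustration only; dimensions and
# features below are invented for the example.

rng = np.random.default_rng(0)

d = 16     # embedding dimension (assumed)
n_q = 5    # question tokens
n_v = 12   # chart-region features (e.g., bars, axes, legend entries)

question = rng.normal(size=(n_q, d))   # textual features
regions = rng.normal(size=(n_v, d))    # visual features from a chart encoder

def cross_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """softmax(QK^T / sqrt(d)) V: each query row mixes the value rows."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

fused = cross_attention(question, regions, regions)
print(fused.shape)  # one visually-enriched vector per question token
```

The question-guided aspect of QDCAt goes further by letting the question condition where the visual features are sampled, which this fixed-grid sketch omits.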
|
278 |
Rôle de la syllabe dans l'intelligibilité de la parole en présentation alternée entre les oreilles
Joubert, Sylviane, January 1995 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
279 |
Deep Convolutional Neural Networks for Segmenting Unruptured Intracranial Aneurysms from 3D TOF-MRA Images
Boonaneksap, Surasith, 07 February 2022 (has links)
Despite facing technical issues (e.g., overfitting, vanishing and exploding gradients), deep neural networks have the potential to capture complex patterns in data. Understanding how depth impacts neural network performance is vital to the advancement of novel deep learning architectures. By varying hyperparameters on two sets of architectures with different depths, this thesis examines whether there are any potential benefits to developing deep networks for segmenting intracranial aneurysms from 3D TOF-MRA scans in the ADAM dataset. / Master of Science / With the technologies we have today, people are constantly generating data. In this pool of information, gaining insight into the data proves to be extremely valuable. Deep learning is one method that allows for automatic pattern recognition by iteratively reducing the disparity between a model's predictions and the ground truth. Complex models can learn complex patterns, but such models introduce challenges. This thesis explores whether deep neural networks stand to gain improvement despite these challenges. The models are trained to segment intracranial aneurysms from volumetric images.
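Segmentation quality in challenges like ADAM is commonly scored with overlap metrics. The sketch below computes the Dice similarity coefficient on toy binary volumes; the masks and shapes are hypothetical, chosen only to illustrate the formula:

```python
import numpy as np

# Dice similarity coefficient: 2|P ∩ T| / (|P| + |T|), in [0, 1].
# A standard overlap metric for volumetric segmentation; the toy
# masks below stand in for an aneurysm label and a model prediction.

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Return the Dice overlap between two binary masks (1 = perfect)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Hypothetical 8x8x8 volumes: a 27-voxel "aneurysm" and a prediction
# shifted by one voxel along the first axis.
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:5, 2:5, 2:5] = True
pred = np.zeros_like(truth)
pred[3:6, 2:5, 2:5] = True

print(f"Dice = {dice(pred, truth):.3f}")  # 18 shared voxels of 27 each: ≈ 0.667
```

Because aneurysms occupy a tiny fraction of a TOF-MRA volume, overlap metrics like Dice are far more informative than voxel-wise accuracy, which a model could maximize by predicting all background.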
|
280 |
Adaptability and extensibility of deep neural networks
Pagé Fortin, Mathieu, 28 June 2024 (has links)
L'apprentissage profond a considérablement gagné en popularité au cours de la dernière décennie grâce à sa capacité à développer des modèles puissants qui apprennent directement à partir de données non structurées. Cette approche a été appliquée avec succès à divers domaines tels que le traitement du langage naturel, la vision par ordinateur et le traitement des signaux, et le rythme des progrès réalisés par la recherche académique et industrielle ne cesse de s'accélérer. Cependant, la majorité des recherches suppose la disponibilité de grands ensembles de données d'entraînement statiques. Par exemple, de nombreuses techniques sont conçues pour améliorer les capacités de généralisation des modèles d'apprentissage profond en utilisant des bases de données comme MS-COCO qui contient environ 300K images, ImageNet avec environ 1,5M d'exemples, et Visual Genome avec environ 3,8M d'instances d'objets. Or, récolter et annoter de tels ensembles de données peut être trop coûteux pour de nombreuses applications réelles. De plus, il est généralement supposé que l'entraînement peut être effectué en une seule étape, considérant ainsi que toutes les classes sont disponibles simultanément. Cela diffère d'applications réelles où les cas d'utilisation peuvent évoluer pour inclure de nouvelles classes au fil du temps, induisant ainsi la nécessité d'adapter continuellement les modèles existants, et faisant ainsi de l'apprentissage continuel. Dans cette thèse, nous visons à contribuer à l'*adaptabilité* et à l'*extensibilité* des réseaux de neurones profonds par le biais de l'apprentissage à partir de peu d'exemples et de l'apprentissage continuel. Plus précisément, nous proposons une méthode d'apprentissage qui exploite des relations contextuelles et des représentations multimodales pour former de meilleurs prototypes de classe en se basant sur des connaissances préalables, permettant l'*adaptation* à de nouvelles tâches avec seulement quelques exemples. 
De plus, nous contribuons à l'apprentissage continuel de classes, qui vise à permettre aux modèles d'apprentissage profond d'*étendre* leurs connaissances en intégrant de nouveaux concepts sans perdre la capacité de résoudre les tâches précédemment apprises. Contrairement à la majorité des travaux précédents qui ont exploré l'apprentissage continuel dans un contexte de classification d'images sur des bases de données simples (p. ex. MNIST et CIFAR), nos méthodes contribuent à l'apprentissage continuel de la segmentation sémantique, la détection d'objets et la segmentation d'instances, qui sont des problèmes plus complexes mais aussi plus applicatifs. Pour la segmentation sémantique continuelle, nous proposons un module d'apprentissage faiblement supervisé afin d'aborder les problèmes de dérive de l'arrière-plan (*background shift*) et des coûts élevés d'annotation. Nous introduisons également deux variantes d'un mécanisme de répétition qui permet de rejouer des régions d'images ou des caractéristiques intermédiaires sous la forme d'une technique d'augmentation de données. Nous explorons ensuite l'apprentissage continuel de la détection d'objets et de la segmentation d'instances en développant une architecture dynamique et une nouvelle méthode de distillation des connaissances qui augmente la plasticité tout en préservant une bonne stabilité. Finalement, nous étudions l'apprentissage continuel de la détection d'objets dans le contexte d'applications agricoles telles que la détection de plantes et de maladies. Pour ce faire, nous adaptons deux bases de données publiques pour simuler des scénarios d'apprentissage continuel et nous comparons diverses méthodes, introduisant ainsi deux scénarios expérimentaux de référence pour étudier la vision numérique appliquée à des problèmes agricoles. 
Ensemble, ces contributions abordent plusieurs défis en lien avec l'apprentissage à partir de peu d'exemples et avec l'apprentissage continuel, faisant ainsi progresser le développement de modèles adaptables capables d'élargir progressivement leur base de connaissances au fil du temps. De plus, nous mettons un accent particulier sur l'étude de ces problèmes dans des configurations expérimentales impliquant des scènes complexes, qui sont plus représentatives des applications réelles déployées dans des environnements de production. / Deep learning has gained tremendous popularity in the last decade thanks to its ability to develop powerful models directly by learning from unstructured data. It has been successfully applied to various domains such as natural language processing, computer vision and signal processing, and the rate of progress made by academic and industrial research is still increasing. However, the majority of research assumes the availability of large, static training datasets. For instance, techniques are often designed to improve the generalization capabilities of deep learning models using datasets like MS-COCO with approximately 300K images, ImageNet with around 1.5M examples, and Visual Genome with roughly 3.8M object instances. Gathering and annotating such large datasets can be too costly for many real-world applications. Moreover, it is generally assumed that training is performed in a single step, thereby considering that all classes are available simultaneously. This differs from real applications where use cases can evolve to include novel classes, thus inducing the necessity to continuously adapt existing models and thereby performing continual learning. In this thesis, we aim to contribute to the *adaptability* and *extensibility* of deep neural networks through learning from few examples and continual learning. 
Specifically, we propose a few-shot learning method that leverages contextual relations and multimodal representations to learn better class prototypes, allowing the model to *adapt* to novel tasks with only a few examples. Moreover, we contribute to continual learning, aiming to allow deep learning models to *extend* their knowledge by learning new classes without losing the ability to solve previously learned tasks. Contrary to the majority of previous work, which explores continual image classification on simple datasets (e.g. MNIST and CIFAR), our methods contribute to semantic segmentation, object detection and instance segmentation, which are more complex and practical problems. For continual semantic segmentation, we propose a weakly-supervised learning module to address the problems of background shift and annotation costs. We also introduce two variants of a rehearsal mechanism that can replay image patches or intermediate features in the form of a data augmentation technique. We then explore continual object detection and continual instance segmentation by developing a dynamic architecture and a novel knowledge distillation method that increases plasticity while ensuring stability. Finally, we experiment with class-incremental object detection in the context of agricultural applications such as plant and disease detection. For that, we adapt two public datasets to simulate continual learning scenarios and compare various continual and non-continual learning methods, thereby introducing a novel benchmark for studying agricultural problems. Together, these contributions address several challenges of few-shot learning and continual learning, thus advancing the development of adaptable models capable of gradually expanding their knowledge base over time. Moreover, we place particular emphasis on studying these problems within experimental setups that involve complex scenes, which are more representative of real applications deployed in production environments.
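The class-prototype idea the few-shot contribution builds on can be sketched in a few lines: a prototype is the mean embedding of a class's support examples, and queries are assigned to the nearest prototype. The embeddings, dimensions, and class structure below are hypothetical, and the thesis's contextual and multimodal refinements are omitted:

```python
import numpy as np

# Prototype-based few-shot classification sketch. All data here is
# synthetic: three invented classes whose support embeddings cluster
# around distinct means.

rng = np.random.default_rng(1)
d = 8        # embedding dimension (assumed)
shots = 5    # support examples per class

# Hypothetical class means, well separated in embedding space, and
# noisy support embeddings drawn around each mean.
means = rng.normal(scale=3.0, size=(3, d))
support = means[:, None, :] + rng.normal(size=(3, shots, d))

# A prototype is simply the mean of a class's support embeddings.
prototypes = support.mean(axis=1)  # shape (3, d)

def classify(query: np.ndarray) -> int:
    """Assign the query embedding to the nearest prototype (Euclidean)."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(dists.argmin())

query = means[2] + rng.normal(scale=0.5, size=d)  # noisy sample of class 2
print(classify(query))
```

Adding a new class then only requires computing one more prototype from a handful of examples, which is what makes the scheme attractive for the adaptability and extensibility goals stated above.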
|