  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Breast Abnormality Diagnosis Using Transfer and Ensemble Learning

Azour, Farnoosh 02 June 2022
Breast cancer is the second most fatal cancer both in Canada and worldwide; however, early detection can substantially raise the survival rate. Researchers and scientists have therefore worked to develop Computer-Aided Diagnosis (CADx) systems. Traditional CAD systems depend on manual feature extraction, which has provided radiologists with poor detection and diagnosis tools. Recently, however, Convolutional Neural Networks (CNNs), one of the most effective deep learning methods, and in particular the technique of transfer learning, have revolutionized the performance and development of these systems. One issue in medical diagnosis is distinguishing between breast mass lesions and calcifications (small deposits of calcium). This work offers a solution using transfer learning and ensemble learning, first with majority (hard) voting and later replacing that strategy with soft voting. Moreover, regardless of the abnormality's type (mass or calcification), the severity of the abnormality plays a key role; accordingly, we went further and made an effort to build a CADx pathology diagnosis system. More specifically, after comparing multi-class classification results with a two-staged abnormality diagnosis system, we propose the two-staged binary classifier as our final model. We thus offer a novel breast cancer diagnosis system built from a wide range of pre-trained models. To the best of our knowledge, we are the first to integrate such a wide range of state-of-the-art pre-trained models, notably including EfficientNet, for the transfer learning stage and subsequently to employ ensemble learning. Using pre-trained CNN-based models, i.e., transfer learning, we are able to overcome the lack of large datasets.
Moreover, because the EfficientNet family offers better results with fewer parameters, we achieved promising results in terms of accuracy and AUC score, and ensemble learning was then applied to make the network more robust. After 10-fold cross-validation, our experiments yielded promising results: the breast abnormality classifier achieved 0.96 ± 0.03 accuracy and a 0.96 AUC score, while the pathology diagnosis stage achieved 0.85 ± 0.08 accuracy and a 0.81 AUC score.
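The abstract above contrasts majority (hard) voting with soft voting. A minimal sketch, not the thesis's code, can illustrate the difference; the three "models" and their class probabilities (e.g., mass vs. calcification) are purely hypothetical:

```python
import numpy as np

# Hypothetical per-model class probabilities for one image from three models,
# e.g. [P(mass), P(calcification)]. Values are illustrative only.
probs = np.array([
    [0.90, 0.10],   # model A: very confident in class 0
    [0.45, 0.55],   # model B: weakly prefers class 1
    [0.45, 0.55],   # model C: weakly prefers class 1
])

# Hard (majority) voting: each model casts one vote for its argmax class.
votes = probs.argmax(axis=1)                          # [0, 1, 1]
hard_pred = np.bincount(votes, minlength=2).argmax()  # class 1 wins 2-1

# Soft voting: average the probabilities, then take the argmax.
soft_pred = probs.mean(axis=0).argmax()  # mean = [0.60, 0.40] -> class 0

print(hard_pred, soft_pred)  # 1 0
```

Note how the two strategies can disagree: soft voting lets model A's high confidence outweigh the two weak votes for class 1, which is one reason it is often preferred over simple majority voting.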
2

Assessing Code Quality and Performance in AI-Generated Code for Test Automation

Silva, Rafael January 2024
Recent advancements in Artificial Intelligence (AI) have directly impacted and benefited many fields, such as education, healthcare, and entertainment. Computer science and software engineering have also benefited from these advances, and today AI-powered services such as OpenAI's ChatGPT, GitHub Copilot, and Hugging Face's HuggingChat are widely used as aids to write, compare, or analyze source code for different types of applications. One lingering question about these services is how good they are in terms of code quality, standardization, and readiness for use: in most cases, source code retrieved from these services requires modifications before it effectively fulfills its original purpose.
This work presents an experiment that analyzes how state-of-the-art Large Language Models (LLMs) perform when generating test scripts for a target application. More specifically, we set up a controlled environment with a backend application, developed in Python, and used ten different large language models to generate test scripts for it. We then evaluated the results using code metrics, as well as metrics related to test execution, to assess the quality of the generated test code. We used the following models: GPT3.5-turbo, GPT-4, GPT4.0-turbo, Codellama-70B, Google Gemma-7b-it, Llama2-13B, Llama2-70B, Mistral-7B, Mixtral8x7B, and NeuralHermes2.5-7B.
The results of the experiment revealed that GPT4.0-turbo outperformed the other models both when the target application was fully working and when we intentionally introduced bugs into it. Although the experiments in this work were performed on a simple backend application, they show how the selected models perform on specific code metrics in that scenario.
Our intention is that this work will serve as inspiration for further work and investigation, specifically into code metrics and coding standards within automated software testing.
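As one illustration of the kind of static code metrics such an evaluation can draw on, here is a minimal sketch, not the thesis's evaluation pipeline, that counts test functions and assertions in a hypothetical LLM-generated test script using Python's `ast` module; the `add` function and the test snippet are assumptions for the example:

```python
import ast

# Hypothetical snippet standing in for an LLM-generated test script.
generated_test = '''
def test_add():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
    assert add(-2, -3) == -5
'''

def simple_metrics(source: str) -> dict:
    """Count test functions and assert statements as crude quality signals."""
    tree = ast.parse(source)
    tests = [n for n in ast.walk(tree)
             if isinstance(n, ast.FunctionDef) and n.name.startswith("test_")]
    asserts = [n for n in ast.walk(tree) if isinstance(n, ast.Assert)]
    return {"test_functions": len(tests), "assertions": len(asserts)}

print(simple_metrics(generated_test))  # {'test_functions': 2, 'assertions': 3}
```

Static counts like these say nothing about whether the tests pass; in practice they would be combined with execution-based metrics such as pass rate, as the experiment above does.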
