  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Isar Imaging And Motion Compensation

Kucukkilic, Talip 01 December 2006 (has links) (PDF)
In Inverse Synthetic Aperture Radar (ISAR) systems, the motion of the target can be classified into two main categories: translational motion and rotational motion. A small degree of rotational motion is required in order to generate the synthetic aperture of an ISAR system. On the other hand, the remaining part of the target's motion, that is, any degree of translational motion and any large degree of rotational motion, degrades ISAR image quality. Motion compensation techniques focus on eliminating the effect of the target's motion on the ISAR images. In this thesis, ISAR image generation is discussed using both conventional Fourier-based and time-frequency-based techniques. The standard translational motion compensation steps, range tracking and Doppler tracking, are examined; the cross-correlation method and the Dominant Scatterer Algorithm are employed for range and Doppler tracking, respectively. Finally, time-frequency-based motion compensation is studied and compared with the conventional techniques. All of the motion compensation steps are examined using simulated data, with stepped-frequency waveforms used to generate the data required for the simulations. Not only successful results but also worst-case behavior and the limitations of the algorithms are discussed through examples.
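The range-tracking step described above can be illustrated by aligning successive range profiles with the shift that maximizes their cross-correlation. The sketch below is our own minimal toy version, not the thesis's implementation: it uses integer circular shifts on synthetic profiles, whereas practical ISAR processing works with sub-bin precision on real radar data.

```python
import numpy as np

def range_align(profiles):
    """Align each range profile to the first one using the integer shift
    that maximizes the circular cross-correlation (toy range tracking)."""
    ref = profiles[0]
    aligned = [ref]
    for p in profiles[1:]:
        # circular cross-correlation computed via FFT
        corr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(p))).real
        shift = int(np.argmax(corr))
        aligned.append(np.roll(p, shift))
    return np.array(aligned)

# toy data: one dominant scatterer drifting one range bin per pulse
profiles = np.array([np.roll([0, 0, 1.0, 0, 0, 0, 0, 0], k) for k in range(4)])
aligned = range_align(profiles)
print(np.argmax(aligned, axis=1))  # all peaks realigned to bin 2
```

After alignment, every profile's dominant peak sits in the same range bin, which is the precondition for the subsequent Doppler-tracking step.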
22

Modèle de dégradation d’images de documents anciens pour la génération de données semi-synthétiques / Semi-synthetic ancient document image generation by using document degradation models

Kieu, Van Cuong 25 November 2014 (has links)
Le nombre important de campagnes de numérisation mises en place ces deux dernières décennies a entraîné une effervescence scientifique ayant mené à la création de nombreuses méthodes pour traiter et/ou analyser ces images de documents (reconnaissance d’écriture, analyse de la structure de documents, détection/indexation et recherche d’éléments graphiques, etc.). Un bon nombre de ces approches est basé sur un apprentissage (supervisé, semi supervisé ou non supervisé). Afin de pouvoir entraîner les algorithmes correspondants et en comparer les performances, la communauté scientifique a un fort besoin de bases publiques d’images de documents avec la vérité-terrain correspondante, et suffisamment exhaustive pour contenir des exemples représentatifs du contenu des documents à traiter ou analyser. La constitution de bases d’images de documents réels nécessite d’annoter les données (constituer la vérité terrain). Les performances des approches récentes d’annotation automatique étant très liées à la qualité et à l’exhaustivité des données d’apprentissage, ce processus d’annotation reste très largement manuel. Ce processus peut s’avérer complexe, subjectif et fastidieux. Afin de tenter de pallier à ces difficultés, plusieurs initiatives de crowdsourcing ont vu le jour ces dernières années, certaines sous la forme de jeux pour les rendre plus attractives. Si ce type d’initiatives permet effectivement de réduire le coût et la subjectivité des annotations, reste un certain nombre de difficultés techniques difficiles à résoudre de manière complètement automatique, par exemple l’alignement de la transcription et des lignes de texte automatiquement extraites des images. Une alternative à la création systématique de bases d’images de documents étiquetées manuellement a été imaginée dès le début des années 90. Cette alternative consiste à générer des images semi-synthétiques imitant les images réelles. 
La génération d’images de documents semi-synthétiques permet de constituer rapidement un volume de données important et varié, répondant ainsi aux besoins de la communauté pour l’apprentissage et l’évaluation de performances de leurs algorithmes. Dans le cadre du projet DIGIDOC (Document Image diGitisation with Interactive DescriptiOn Capability) financé par l’ANR (Agence Nationale de la Recherche), nous avons mené des travaux de recherche relatifs à la génération d’images de documents anciens semi-synthétiques. Le premier apport majeur de nos travaux réside dans la création de plusieurs modèles de dégradation permettant de reproduire de manière synthétique des déformations couramment rencontrées dans les images de documents anciens (dégradation de l’encre, déformation du papier, apparition de la transparence, etc.). Le second apport majeur de ces travaux de recherche est la mise en place de plusieurs bases d’images semi-synthétiques utilisées dans des campagnes de test (compétition ICDAR2013, GREC2013) ou pour améliorer par ré-apprentissage les résultats de méthodes de reconnaissance de caractères, de segmentation ou de binarisation. Ces travaux ont abouti à plusieurs collaborations nationales et internationales, qui se sont soldées en particulier par plusieurs publications communes. Notre but est de valider de manière la plus objective possible, et en collaboration avec la communauté scientifique concernée, l’intérêt des images de documents anciens semi-synthétiques générées pour l’évaluation de performances et le ré-apprentissage. / In the last two decades, the increase in document image digitization projects has resulted in a scientific effervescence around document image processing and analysis algorithms (handwriting recognition, document structure analysis, spotting, indexing, and retrieval of graphical elements, etc.). A number of successful algorithms are based on learning (supervised, semi-supervised or unsupervised).
In order to train such algorithms and to compare their performance, the document image analysis community needs many publicly available annotated document image databases. Their contents must be exhaustive enough to be representative of the possible variations in the documents to process or analyze. To create real document image databases, one needs an automatic or a manual annotation process. Since the performance of automatic annotation is closely tied to the quality and completeness of the training data, annotation remains largely manual, and the manual process is complicated, subjective, and tedious. To overcome such difficulties, several crowd-sourcing initiatives have been proposed, some of them modelled as games to make them more attractive. Such processes significantly reduce the cost and subjectivity of annotation, but technical difficulties remain; for example, transcriptions and automatically extracted text lines still have to be aligned manually. Since the early 1990s, an alternative has been proposed: generating semi-synthetic document images that mimic real ones. Semi-synthetic document image generation allows benchmarking databases to be created rapidly and cheaply for evaluating and training document processing and analysis algorithms. In the context of the DIGIDOC project (Document Image diGitisation with Interactive DescriptiOn Capability), funded by the ANR (Agence Nationale de la Recherche), we focus on semi-synthetic document image generation adapted to ancient documents. First, we propose new degradation models, or adapt existing ones, for ancient documents: a bleed-through model, a distortion model, a character degradation model, etc.
Second, we apply these degradation models to generate semi-synthetic document image databases for performance evaluation (e.g., the ICDAR2013 and GREC2013 competitions) and for performance improvement (by re-training handwritten recognition, segmentation, and binarisation systems). This work has led to many collaborations with other researchers and to experimental results shared with the scientific community; these collaborations also helped us validate our degradation models and demonstrate the usefulness of semi-synthetic document images for performance evaluation and re-training.
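One of the degradation families named above, bleed-through, can be illustrated with a deliberately simplified model. This is our own toy sketch, not the model developed in the thesis: ink from the mirrored verso side is blended into the recto page.

```python
import numpy as np

def add_bleed_through(recto, verso, alpha=0.25):
    """Toy bleed-through degradation (illustrative simplification only):
    blend the horizontally mirrored verso into the recto, mimicking ink
    showing through the paper. Images are grayscale in [0, 1],
    with 1.0 = white paper and 0.0 = ink."""
    mirrored = verso[:, ::-1]              # verso appears mirrored on the recto
    out = (1 - alpha) * recto + alpha * mirrored
    return np.clip(out, 0.0, 1.0)

recto = np.ones((4, 4))                    # blank front page
verso = np.ones((4, 4))
verso[1, 0] = 0.0                          # one ink stroke on the back side
degraded = add_bleed_through(recto, verso, alpha=0.25)
print(degraded[1, 3])  # 0.75 — the stroke bleeds through at the mirrored column
```

A real degradation model would also vary `alpha` spatially and diffuse the bled ink; this sketch only shows the mirroring-and-blending core of the idea.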
23

Latent Space Manipulation of GANs for Seamless Image Compositing

Fruehstueck, Anna 04 1900 (has links)
Generative Adversarial Networks (GANs) are a very successful method for high-quality image synthesis and are a powerful tool to generate realistic images by learning their visual properties from a dataset of exemplars. However, the controllability of the generator output still poses many challenges. We propose several methods for achieving larger outputs and/or higher visual quality in GANs by combining latent space manipulations with image compositing operations: (1) GANs are inherently suitable for small-scale texture synthesis due to the generator’s capability to learn image properties of a limited domain, such as the properties of a specific texture type at a desired level of detail. A rich variety of suitable texture tiles can be synthesized from the trained generator. Due to the convolutional nature of GANs, we can achieve large-scale texture synthesis by tiling intermediate latent blocks, allowing the generation of (almost) arbitrarily large texture images that are seamlessly merged. (2) We notice that generators trained on heterogeneous data perform worse than specialized GANs, and we demonstrate that we can optimize multiple independently trained generators in such a way that a specialized network can fill in high-quality details for specific image regions, or insets, of a lower-quality canvas generator. Multiple generators can collaborate to improve the visual output quality, and through careful optimization, seamless transitions between different generators can be achieved. (3) GANs can also be used to semantically edit facial images and videos, with novel 3D GANs even allowing for camera changes, enabling unseen views of the target. However, the GAN output must be merged with the surrounding image or video in a spatially and temporally consistent way, which we demonstrate in our method.
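The latent-tiling idea in (1) rests on the translation equivariance of convolutions: tiling latent blocks and running them through a convolutional generator yields a seamlessly repeating texture. The sketch below is a toy stand-in under strong assumptions — a single circular 2-D convolution plays the role of the (in reality deep, learned) generator.

```python
import numpy as np

def conv_generator(z, kernel):
    """Hypothetical stand-in for a convolutional generator block:
    one circular 2-D convolution (a real GAN stacks many learned layers)."""
    H, W = z.shape
    kh, kw = kernel.shape
    out = np.zeros_like(z)
    for i in range(H):
        for j in range(W):
            for a in range(kh):
                for b in range(kw):
                    out[i, j] += kernel[a, b] * z[(i + a) % H, (j + b) % W]
    return out

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 8))          # one latent tile
kernel = rng.normal(size=(3, 3))
big_z = np.tile(z, (2, 2))           # tile the latents, not the pixels
big_out = conv_generator(big_z, kernel)
small_out = conv_generator(z, kernel)
# the generated texture repeats seamlessly: each output tile equals
# the output of the single tile
print(np.allclose(big_out[:8, :8], small_out))  # True
```

Because the input is periodic and the convolution is translation-equivariant, the 16×16 output is an exact 2×2 tiling of the 8×8 output, with no seams at the tile borders — the property the thesis exploits at the latent level.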
24

muGen : Generative AI as Machinic Exploration of Cultural Archives / muGen : Generativ AI som maskinell utforskning av kulturarkiv

Yu, Yan January 2023 (has links)
In recent years, generative AI has quickly become a new creative and artistic tool that could challenge our understanding of the creative process and the role of the machine. Despite having exhibited visually promising results, images generated by AI tools present various challenges, most notably their tendency to display cultural, gender and racial biases. The objective of the project is to speculate on the concept and prototype of an alternative text-to-image generation system, designed to mitigate biases from linguistic and cultural differences, and facilitate diversity in machine creativity. muGen, the final design, is a fictional system that allows the user to generate images using data in different languages, while adding user controls, such as time period, to better associate the user's idea with the system. / Under de senaste åren har generativ AI snabbt blivit ett nytt kreativt och konstnärligt verktyg som kan utmana vår förståelse av den kreativa processen och maskinens roll. Trots att bilder som genererats av AI-verktyg har uppvisat visuellt lovande resultat finns det flera utmaningar, framför allt deras tendens att visa kulturell, köns- och rasmässig partiskhet. Syftet med projektet är att spekulera kring konceptet och prototypen för ett alternativt text-till-bild-genereringssystem, utformat för att mildra partiskhet från språkliga och kulturella skillnader, och underlätta mångfald i maskinkreativitet. muGen, den slutliga designen, är ett fiktivt system som låter användaren generera bilder med hjälp av data på olika språk, samtidigt som det lägger till användarkontroller som tidsperiod för att bättre associera användarens idé med systemet.
25

Predictive MR Image Generation for Alzheimer’s Disease and Normal Aging Using Diffeomorphic Registration / Förutsägande generering av MR-bilder för Alzheimers sjukdom och normal åldrande med användning av diffeomorfisk registrering

Zheng, Yuqi January 2023 (has links)
Alzheimer's Disease (AD) is the most prevalent cause of dementia, a progressive and degenerative brain disorder that causes deterioration of cognitive function, including memory loss, communication difficulties, impaired judgment, and changes in behavior and personality. Compared to normal aging, AD introduces more profound cognitive impairments and changes in brain morphology. Understanding the morphological changes associated with both normal aging and AD is of pivotal significance for the study of brain health. In recent years, the flourishing development of Artificial Intelligence (AI) has facilitated the analysis of medical images and the study of longitudinal brain morphology evolution. Numerous advanced AI-based frameworks have emerged to generate unbiased and realistic medical templates that represent the common characteristics within a cohort, providing valuable insights for cohort studies. Among these, Atlas-GAN is a state-of-the-art framework that can generate high-quality conditional deformable templates using diffeomorphic registration. However, cohort studies are not sufficient for individualized healthcare and treatment, as each patient has a unique condition. Fortunately, a mathematical mechanism, parallel transport, enables the inference of individual brain morphological evolution from cohort-level longitudinal templates. This project proposes an image generator that integrates the pole ladder, a tool for implementing parallel transport, into Atlas-GAN to translate cohort-level brain morphological evolution onto individual subjects, enabling the synthesis of anatomically plausible and personalized longitudinal Magnetic Resonance (MR) images from a single Magnetic Resonance Imaging (MRI) scan. In the clinic, the synthesized images enable physicians to retrospectively understand a patient's premorbid brain state and prospectively predict their brain morphology changes over time.
Such capabilities are of paramount importance for the prognosis, diagnosis, and early-stage intervention of AD, especially given the current absence of a cure. The primary contributions of this project are: (1) an image generator that combines parallel transport with Atlas-GAN to synthesize individual longitudinal MR images for both a normal aging cohort and an AD cohort, with anatomical plausibility and preservation of individualized characteristics; (2) an exploration of predicting individual longitudinal MR images when an individual undergoes a state transition, using the proposed generator; (3) qualitative and quantitative evaluations and analyses of the synthesized images. / AD är den mest framträdande orsaken till demens och innebär en progressiv och degenerativ hjärnsjukdom som resulterar i kognitiv försämring, inklusive minnesförlust, kommunikationssvårigheter, nedsatt omdöme samt förändringar i beteende och personlighet. I jämförelse med normalt åldrande introducerar AD mer djupgående kognitiva störningar och förändringar i hjärnans morfologi. Att förstå dessa morfologiska förändringar i samband med både normalt åldrande och AD har avgörande betydelse för studiet av hjärnhälsa. De senaste årens blomstrande utveckling inom AI har underlättat analysen av medicinska bilder och studiet av långsiktig hjärnmorfologi. Flera avancerade AI-baserade ramverk har utvecklats för att generera opartiska och realistiska medicinska mallar som representerar gemensamma egenskaper inom en kohort och ger värdefulla insikter för kohortstudier. Bland dessa är Atlas-GAN ett framstående ramverk som kan generera högkvalitativa, konditionellt deformabla mallar med hjälp av diffeomorfisk registrering. Dock är kohortstudier inte tillräckliga för individualiserad sjukvård och behandling, eftersom varje patient har en unik situation.
Som tur är möjliggör introduktionen av en matematisk mekanism, parallell transport, att man kan dra slutsatser om individuell hjärnmorfologisk utveckling från kohortbaserade longitudinella mallar. I detta projekt föreslogs en bildgenerator som integrerar "pole ladder", ett verktyg för implementering av parallell transport, i Atlas-GAN. Detta möjliggör att kohortbaserad hjärnmorfologisk utveckling kan översättas till individnivå, vilket gör det möjligt att syntetisera anatomiskt trovärdiga och personifierade longitudinella MR-bilder baserade på en individs MRI-skanning. Inom kliniken gör de syntetiserade bilderna det möjligt för läkare att retrospektivt förstå patientens premorbida hjärnstatus och prospektivt förutsäga deras hjärnmorfologiska förändringar över tiden. Sådana möjligheter är av avgörande betydelse för prognos, diagnos och tidig intervention vid AD, särskilt med tanke på den nuvarande bristen på ett botemedel för AD. De huvudsakliga bidragen från detta projekt inkluderar: (1) Introduktion av en bildgenerator som kombinerar parallell transport med Atlas-GAN för att syntetisera individuella longitudinella MR-bilder för både kohorten med normalt åldrande och kohorten som lider av AD, med både anatomisk trovärdighet och bevarande av individualiserade egenskaper. Dessutom har de genererade bilderna genomgått både kvalitativa och kvantitativa utvärderingar och analyser; (2) Utforskning av förutsägelse av individuella longitudinella MR-bilder i fallet när en individ genomgår en tillståndsövergång med hjälp av den föreslagna generatorn.
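The pole ladder mentioned above transports a tangent vector along a geodesic using two point reflections per rung. In flat Euclidean space, where geodesics are straight lines and exact parallel transport is the identity, one rung can be written out and checked directly. This is our own illustrative sketch: on a curved manifold the straight-line arithmetic below is replaced by exponential and logarithm maps.

```python
import numpy as np

def pole_ladder_step(p, q, v):
    """One rung of the pole ladder in Euclidean space: transport the
    tangent vector v from point p to point q along the geodesic p -> q."""
    m = 0.5 * (p + q)            # midpoint ("pole") of the geodesic segment
    p_prime = p + v              # exp_p(v)
    p_dprime = 2 * m - p_prime   # geodesic reflection of p' through m
    return q - p_dprime          # -log_q(p''): the transported vector

p = np.array([0.0, 0.0])
q = np.array([3.0, 1.0])
v = np.array([0.5, -0.2])
v_transported = pole_ladder_step(p, q, v)
print(v_transported)  # [ 0.5 -0.2] — the identity, as expected in flat space
```

Recovering the identity map here is the sanity check; the value of the construction is that the same reflect-twice recipe, expressed with exp/log maps, approximates parallel transport on the deformation manifolds used with diffeomorphic registration.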
26

Live Cell Imaging Analysis Using Machine Learning and Synthetic Food Image Generation

Yue Han (18390447) 17 April 2024 (has links)
Live cell imaging is a method to optically investigate living cells using microscopy images. It plays an increasingly important role in biomedical research as well as drug development. In this thesis, we focus on label-free mammalian cell tracking and label-free segmentation of abnormally shaped nuclei in microscopy images. We propose a method that uses a precomputed velocity field to enhance cell tracking performance. Additionally, we propose an ensemble method, Weighted Mask Fusion (WMF), which combines the results of multiple segmentation models with shape analysis to improve the final nuclei segmentation mask. We also propose an edge-aware Mask R-CNN and introduce a hybrid architecture, an ensemble of CNNs and Swin-Transformer Edge Mask R-CNNs (HER-CNN), to accurately segment irregularly shaped nuclei in microscopy images. Our experiments indicate that our proposed methods outperform other existing methods for cell tracking and abnormally shaped nuclei segmentation.

While image-based dietary assessment methods reduce the time and labor required for nutrient analysis, the major challenge with deep learning-based approaches is that performance is heavily dependent on the quality of the datasets; food data in particular suffers from high intra-class variance and class imbalance. In this thesis, we present an effective clustering-based training framework named ClusDiff for generating high-quality and representative food images. Our experiments showcase the method's effectiveness in enhancing food image generation. Additionally, we conduct a study on the utilization of synthetic food images to address the class imbalance issue in long-tailed food classification.
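The core of a weighted mask fusion can be shown in a few lines. This is our own minimal reading of the idea, not the thesis's WMF (which additionally incorporates shape analysis): each model's binary mask casts a weighted vote per pixel, and the fused mask keeps pixels whose vote exceeds a threshold.

```python
import numpy as np

def weighted_mask_fusion(masks, weights, threshold=0.5):
    """Illustrative weighted fusion of binary segmentation masks:
    a pixel is foreground if the normalized weighted vote of the
    ensemble exceeds the threshold."""
    masks = np.asarray(masks, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize model weights
    vote = np.tensordot(weights, masks, axes=1)    # weighted average over models
    return (vote > threshold).astype(np.uint8)

# three hypothetical model outputs for a 2x2 image
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
m3 = np.array([[1, 0], [1, 0]])
fused = weighted_mask_fusion([m1, m2, m3], weights=[0.5, 0.25, 0.25])
print(fused)  # [[1 0]
              #  [0 0]]
```

Only the pixel all three models agree on clears the 0.5 threshold; pixels supported by a single model are voted out, which is the smoothing effect an ensemble fusion is after.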
27

[pt] FCGAN: CONVOLUÇÕES ESPECTRAIS VIA TRANSFORMADA RÁPIDA DE FOURIER PARA CAMPO RECEPTIVOS DE ABRANGÊNCIA GLOBAL EM REDES ADVERSÁRIAS GENERATIVAS / [en] FCGAN: SPECTRAL CONVOLUTIONS VIA FFT FOR CHANNEL-WIDE RECEPTIVE FIELD IN GENERATIVE ADVERSARIAL NETWORKS

PEDRO HENRIQUE BARROSO GOMES 23 May 2024 (has links)
[pt] Esta dissertação propõe a Rede Generativa Adversarial por Convolução Rápida de Fourier (FCGAN). Essa abordagem inovadora utiliza convoluções no domínio da frequência para permitir que a rede opere com um campo receptivo de abrangência global. Devido aos seus campos receptivos pequenos, GANs baseadas em convoluções tradicionais enfrentam dificuldades para capturar padrões estruturais e geométricos. Nosso método utiliza Convoluções Rápidas de Fourier (FFCs), que usam Transformadas de Fourier para operar no domínio espectral, afetando globalmente os canais da imagem. Assim, a FCGAN é capaz de gerar imagens considerando informações de todas as localizações dos mapas de entrada. Essa nova característica da rede pode levar a um desempenho errático e instável. Mostramos que a utilização de normalização espectral e injeções de ruído estabilizam o treinamento adversarial. O uso de convoluções espectrais em redes convolucionais tem sido explorado para tarefas como inpainting e super-resolução de imagens. Este trabalho foca no seu potencial para geração de imagens. Nossos experimentos também sustentam a afirmação que features de Fourier são substitutos de baixo custo operacional para camadas de self-attention, permitindo que a rede aprenda informações globais desde camadas iniciais. Apresentamos resultados qualitativos e quantitativos para demonstrar que a FCGAN proposta obtém resultados comparáveis a abordagens estado-da-arte com profundidade e número de parâmetros semelhantes, alcançando um FID de 18,98 no CIFAR-10 e 38,71 no STL-10 - uma redução de 4,98 e 1,40, respectivamente. Além disso, em maiores dimensões de imagens, o uso de FFCs em vez de self-attention permite batch-sizes com até o dobro do tamanho, e iterações até 26 por cento mais rápidas. / [en] This thesis proposes the Fast Fourier Convolution Generative Adversarial Network (FCGAN). 
This novel approach employs convolutions in the frequency domain to enable the network to operate with a channel-wide receptive field. Due to their small receptive fields, traditional convolution-based GANs struggle to capture structural and geometric patterns. Our method uses Fast Fourier Convolutions (FFCs), which use Fourier Transforms to operate in the spectral domain, affecting the feature input globally. Thus, FCGAN can generate images considering information from all feature locations. This new property of the network can lead to erratic and unstable performance; we show that employing spectral normalization and noise injection stabilizes adversarial training. The use of spectral convolutions in convolutional networks has been explored for tasks such as image inpainting and super-resolution; this work focuses on their potential for image generation. Our experiments further support the claim that Fourier features are lightweight replacements for self-attention, allowing the network to learn global information from early layers. We present qualitative and quantitative results to demonstrate that the proposed FCGAN achieves results comparable to state-of-the-art approaches of similar depth and parameter count, reaching an FID of 18.98 on CIFAR-10 and 38.71 on STL-10, a reduction of 4.98 and 1.40, respectively. Moreover, at larger image dimensions, using FFCs instead of self-attention allows for batch sizes up to twice as large and iterations up to 26 percent faster.
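Why a spectral convolution has a global receptive field follows from the convolution theorem: a pointwise multiplication in the frequency domain is a circular convolution in the spatial domain, so every output location mixes information from every input location. A minimal numpy sketch of that core operation (illustrative only; a real FFC interleaves local and spectral branches with learned weights):

```python
import numpy as np

def spectral_conv2d(x, w_hat):
    """Core of a spectral convolution: transform to the frequency domain,
    apply a per-frequency weight w_hat (standing in for learned spectral
    weights), and transform back."""
    return np.fft.ifft2(np.fft.fft2(x) * w_hat).real

rng = np.random.default_rng(1)
x = np.zeros((16, 16))
x[0, 0] = 1.0                                    # a single distant impulse
w_hat = np.fft.fft2(rng.normal(size=(16, 16)))   # dense frequency response
y = spectral_conv2d(x, w_hat)
# even the farthest output pixel responds to the impulse at (0, 0),
# which a 3x3 spatial convolution could never do in one layer:
print(abs(y[15, 15]) > 0)  # True
```

A single such layer therefore sees the whole feature map at once, which is the basis for the claim above that Fourier features can stand in for self-attention from early layers.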
28

Segmentation and Deconvolution of Fluorescence Microscopy Volumes

Soonam Lee (6738881) 14 August 2019 (has links)
Recent advances in optical microscopy have enabled biologists to collect fluorescence microscopy volumes of cellular and subcellular structures of living tissue. This results in large collections of microscopy volumes and calls for automated, image-processing-based quantification methods. To quantify biological structures, a first and fundamental step is segmentation. Yet the quantitative analysis of microscopy volumes is hampered by light diffraction, by distortion created by lens aberrations in different directions, and by the complex variation of biological structures. This thesis describes several proposed segmentation methods to identify various biological structures, such as nuclei or tubules, observed in fluorescence microscopy volumes. For nuclei segmentation, a multiscale edge detection method and a 3D active contours with inhomogeneity correction method are used. Our proposed 3D active contours with inhomogeneity correction method utilizes 3D microscopy volume information while addressing intensity inhomogeneity across vertical and horizontal directions. For tubule segmentation, an ellipse-model fitting to tubule boundary method and a convolutional neural network with inhomogeneity correction method are developed. More specifically, the ellipse fitting method uses a combination of adaptive and global thresholding, potentials, z-direction refinement, branch pruning, end point matching, and boundary fitting steps to delineate tubular objects. The deep learning based method combines intensity inhomogeneity correction and data augmentation with a convolutional neural network architecture. Moreover, this thesis demonstrates a new deconvolution method to improve microscopy image quality without knowing the 3D point spread function, using a spatially constrained cycle-consistent adversarial network. The results of the proposed methods are visually and numerically compared with other methods. Experimental results demonstrate that our proposed methods achieve better performance than other methods for nuclei/tubule segmentation as well as deconvolution.
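The quantification that follows segmentation often reduces to counting and measuring the segmented objects. The sketch below is a toy stand-in for that step, not the thesis's pipeline: a synthetic image is segmented with a simple global threshold, and 4-connected components are counted as "nuclei".

```python
import numpy as np

def count_components(binary):
    """Count 4-connected foreground components via iterative flood fill."""
    binary = binary.astype(bool).copy()
    H, W = binary.shape
    count = 0
    for si in range(H):
        for sj in range(W):
            if binary[si, sj]:
                count += 1
                stack = [(si, sj)]
                binary[si, sj] = False
                while stack:
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and binary[ni, nj]:
                            binary[ni, nj] = False
                            stack.append((ni, nj))
    return count

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0      # synthetic "nucleus" 1
img[5:7, 4:7] = 1.0      # synthetic "nucleus" 2
mask = img > 0.5         # a global threshold as the segmentation step
print(count_components(mask))  # 2
```

Real fluorescence data defeats such a global threshold precisely because of the intensity inhomogeneity discussed above, which is why the thesis pairs segmentation with inhomogeneity correction.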
29

TAIGA: uma abordagem para geração de dados de teste por meio de algoritmo genético para programas de processamento de imagens / TAIGA: an Approach to Test Image Generation for Image Processing Programs Using Genetic Algorithm

Rodrigues, Davi Silva 24 November 2017 (has links)
As atividades de teste de software são de crescente importância devido à maciça presença de sistemas de informação em nosso cotidiano. Programas de Processamento de Imagens (PI) têm um domínio de entrada bastante complexo e, por essa razão, o teste tradicional realizado com esse tipo de programa, conduzido majoritariamente de forma manual, é uma tarefa de alto custo e sujeita a imperfeições. No teste tradicional, em geral, as imagens de entrada são construídas manualmente pelo testador ou selecionadas aleatoriamente de bases de imagens, muitas vezes dificultando a revelação de defeitos no software. A partir de um mapeamento sistemático da literatura realizado, foi identificada uma lacuna no que se refere à geração automatizada de dados de teste no domínio de imagens. Assim, o objetivo desta pesquisa é propor uma abordagem - denominada TAIGA (Test imAge generatIon by Genetic Algorithm) - para a geração de dados de teste para programas de PI por meio de algoritmo genético. Na abordagem proposta, operadores genéticos tradicionais (mutação e crossover) são adaptados para o domínio de imagens e a função fitness é substituída por uma avaliação de resultados provenientes de teste de mutação. A abordagem TAIGA foi validada por meio de experimentos com oito programas de PI distintos, nos quais observaram-se ganhos de até 38,61% em termos de mutation score em comparação ao teste tradicional. Ao automatizar a geração de dados de teste, espera-se conferir maior qualidade ao desenvolvimento de sistemas de PI e contribuir com a diminuição de custos com as atividades de teste de software neste domínio / The massive presence of information systems in our lives has been increasing the importance of software test activities. Image Processing (IP) programs have very complex input domains and, therefore, the traditional testing of this kind of program, conducted mostly manually, is a costly and error-prone task.
In traditional testing, testers usually create input images by hand or select them at random from image databases, which can make it harder to reveal faults in the software under test. In this context, a systematic mapping study was conducted and a gap was identified concerning automated test data generation in the image domain. Thus, an approach for generating test data for IP programs by means of a genetic algorithm was proposed: TAIGA (Test imAge generatIon by Genetic Algorithm). This approach adapts traditional genetic operators (mutation and crossover) to the image domain and replaces the fitness function with an evaluation of mutation testing results. The approach was validated through experiments involving eight distinct IP programs, in which TAIGA provided up to a 38.61% increase in mutation score compared to traditional testing. It is expected that automating test data generation will improve the quality of image processing systems development and reduce the cost of software testing activities in this domain.
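The genetic-algorithm loop described above can be sketched compactly. This is our own toy version under stated assumptions: the image-adapted mutation and crossover operators are simplistic placeholders, and the fitness function is a stand-in (it rewards image contrast), whereas TAIGA's real fitness evaluates mutation-testing results of the IP program under test.

```python
import numpy as np

rng = np.random.default_rng(42)

def crossover(a, b):
    """Toy image crossover: the child takes the right half from parent b."""
    child = a.copy()
    child[:, a.shape[1] // 2:] = b[:, b.shape[1] // 2:]
    return child

def mutate(img, rate=0.1):
    """Toy image mutation: re-randomize a fraction of pixels."""
    out = img.copy()
    m = rng.random(img.shape) < rate
    out[m] = rng.random(int(m.sum()))
    return out

def fitness(img):
    # Placeholder for TAIGA's real fitness (mutation score of the program
    # under test when run on this image): here, reward high contrast.
    return img.std()

def evolve(pop, generations=20):
    """Elitist GA: keep the best half, refill with mutated crossovers."""
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        parents = pop[:len(pop) // 2]
        children = [mutate(crossover(parents[i], parents[(i + 1) % len(parents)]))
                    for i in range(len(pop) - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

population = [rng.random((8, 8)) for _ in range(10)]
start_best = max(fitness(p) for p in population)
best = evolve(population)
# elitism guarantees the best fitness never decreases across generations
print(fitness(best) >= start_best)  # True
```

Swapping the placeholder `fitness` for a call that runs mutation testing against the IP program would turn this skeleton into the shape of approach the abstract describes.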
