1 |
Transdução e realidade híbrida em Avatar: uma experiência media assemblage / Rezende, Djaine Damiati. January 2010
Advisor: Adenil Alfeu Domingos / Committee: Heloisa Helou Doca / Committee: Ana Silvia Lopes Davi Médola / Abstract: The film Avatar (2009d), directed by James Cameron, introduced technological innovations capable of generating visual and sensory effects unprecedented in the history of cinema. It also promoted, through world-building strategies and the dispersal of transmedia content, immersive effects analogous to those produced by the embodied images that emerge from the screen during the feature's exhibition, effects characterized by the use of augmented reality. This combination establishes a new paradigm for audiovisual narrative, entering the hybrid space of perception with regard to the borders between the virtual and the actual as well as between the real and the fictional, a phenomenon we here call media assemblage. We analyse the meaning-making strategies used inside and outside the cinematic medium in order to relate the dialogic implicit in the technologies applied to sensory potentialization to the density effects achieved through the construction of a multi-platform narrative universe, drawing on the ideas of transduction and synechism developed by Charles Sanders Peirce / Master's
|
2 |
Generating Synthetic X-rays Using Generative Adversarial Networks / Haiderbhai, Mustafa. 24 September 2020
We propose a novel method for generating synthetic X-rays from atypical inputs. This method creates approximate X-rays for use in non-diagnostic visualization problems where only generic cameras and sensors are available. Traditional methods are restricted to 3-D inputs such as meshes or Computed Tomography (CT) scans. We create custom synthetic X-ray datasets using a custom generator capable of creating RGB images, point cloud images, and 2-D pose images. We create a dataset using natural hand poses and train general-purpose Conditional Generative Adversarial Networks (CGANs) as well as our own novel network pix2xray. Our results show that plausible X-rays can be generated from point cloud and RGB images. We also demonstrate the superiority of our pix2xray approach, especially in the troublesome cases of occlusion due to overlapping or rotated anatomy. Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from inputs such as RGB images and point clouds.
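The abstract does not spell out the network details; as a rough illustration of the conditional-GAN idea that both the general-purpose CGANs and pix2xray build on, a pix2pix-style training step in PyTorch might look as follows. The toy generator/discriminator layers, the L1 weight and the train_step helper are placeholders invented for this sketch, not the thesis's architecture.

```python
# Minimal conditional-GAN (pix2pix-style) training step, NOT the thesis's
# pix2xray network: the generator maps a conditioning image (e.g. an RGB or
# point-cloud render) to a synthetic X-ray; the discriminator judges
# (condition, X-ray) pairs. Layer sizes and loss weights are placeholders.
import torch
import torch.nn as nn

class G(nn.Module):  # toy encoder-decoder generator: 3-channel condition -> 1-channel X-ray
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh())
    def forward(self, cond):
        return self.net(cond)

class D(nn.Module):  # toy patch discriminator on concatenated (condition, image) pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, 1, 1))
    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

g, d = G(), D()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(cond, real_xray):
    # 1) discriminator: real pairs vs. generated pairs (generator frozen via detach)
    fake = g(cond).detach()
    pred_real, pred_fake = d(cond, real_xray), d(cond, fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) generator: fool the discriminator and stay close to the ground-truth X-ray
    fake = g(cond)
    pred = d(cond, fake)
    g_loss = bce(pred, torch.ones_like(pred)) + 100.0 * l1(fake, real_xray)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# tiny smoke test with random tensors standing in for (condition, X-ray) batches
print(train_step(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)))
```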
|
3 |
Synthesis of Thoracic Computer Tomography Images using Generative Adversarial Networks / Hagvall Hörnstedt, Julia. January 2019
The use of machine learning algorithms to enhance and facilitate medical diagnosis and analysis is a promising and important area, which could substantially reduce the workload of clinicians. In order for machine learning algorithms to learn a certain task, large amounts of data need to be available. Data sets for medical image analysis are rarely public due to restrictions concerning the sharing of patient data. The production of synthetic images could act as an anonymization tool to enable the distribution of medical images and facilitate the training of machine learning algorithms, which could be used in practice. This thesis investigates the use of Generative Adversarial Networks (GAN) for the synthesis of new thoracic computed tomography (CT) images, with no connection to real patients. It also examines the usefulness of the images by comparing the quantitative performance of a segmentation network trained with the synthetic images against the quantitative performance of the same segmentation network trained with real thoracic CT images. The synthetic thoracic CT images were generated using CycleGAN for image-to-image translation between label map ground truth images and thoracic CT images. The synthetic images were evaluated using different set-ups of synthetic and real images for training the segmentation network. All set-ups were evaluated according to sensitivity, accuracy, Dice and F2-score and compared to the same parameters evaluated from a segmentation network trained with 344 real images. The thesis shows that it was possible to generate synthetic thoracic CT images using GAN. However, within the scope of this thesis it was not possible for a segmentation network trained with synthetic data to match the quantitative performance of a segmentation network trained with the same amount of real images. It was, however, possible to match the quantitative performance of a segmentation network trained on real images by training with a combination of real and synthetic images, where the majority of the images were synthetic and the minority real. By using a combination of 59 real images and 590 synthetic images, performance equal to that of a segmentation network trained with 344 real images was achieved regarding sensitivity, Dice and F2-score. Equal quantitative performance of a segmentation network could thus be achieved by using fewer real images together with an abundance of synthetic images, created at close to no cost, indicating a usefulness of synthetically generated images.
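The evaluation code is not given in the abstract; a minimal sketch of the Dice and F2 scores it reports, computed from binary segmentation masks with NumPy, might look like this. The smoothing constant eps and the toy masks are assumptions made for the example.

```python
# Minimal Dice and F2 computation for binary segmentation masks, as used to
# compare networks trained on real vs. synthetic CT images. The smoothing
# term eps is an implementation choice, not taken from the thesis.
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    return (2.0 * tp + eps) / (pred.sum() + truth.sum() + eps)

def f_beta(pred, truth, beta=2.0, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)        # recall equals the reported sensitivity
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall + eps)

# Toy example: F2 weights recall (sensitivity) higher than precision, which
# suits lung segmentation, where missed tissue is worse than a loose boundary.
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True
truth = np.zeros((4, 4), bool); truth[1:3, 1:3] = True
print(round(dice_score(pred, truth), 3), round(f_beta(pred, truth), 3))
```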
|
4 |
Image Analysis in Support of Computer-Assisted Cervical Cancer Screening / Malm, Patrik. January 2013
Cervical cancer is a disease that annually claims the lives of over a quarter of a million women. A substantial number of these deaths could be prevented if population-wide cancer screening, based on the Papanicolaou test, were globally available. The Papanicolaou test involves a visual review of cellular material obtained from the uterine cervix. While relatively inexpensive from a material standpoint, the test requires highly trained cytology specialists to conduct the analysis. There is a great shortage of such specialists in developing countries, causing these countries to be grossly overrepresented in the mortality statistics. For the last 60 years, numerous attempts have been made at constructing an automated system able to perform the screening. Unfortunately, a cost-effective, automated system has yet to be produced. In this thesis, a set of methods, intended to be used in the development of an automated screening system, is presented. These have been produced as part of an international cooperative effort to create a low-cost cervical cancer screening system. The contributions are linked to a number of key problems associated with the screening: deciding which areas of a specimen warrant analysis, delineating cervical cell nuclei, and rejecting artefacts so that only cells of diagnostic value are included when drawing conclusions regarding the final diagnosis of the specimen. Also, to facilitate efficient method development, two methods for creating synthetic images that mimic images acquired from specimens are described.
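The two synthetic-image methods are not described in the abstract; purely to make the idea concrete, a toy NumPy/SciPy sketch that renders dark elliptical "nuclei" on a bright, noisy background is shown below. All shapes, intensities and noise levels are invented and are not taken from the thesis.

```python
# Toy illustration only: draw dark, slightly blurred elliptical "nuclei" on a
# noisy bright background, roughly mimicking a Pap-smear field of view.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def synthetic_field(size=256, n_nuclei=12):
    # bright, slightly noisy background standing in for cytoplasm and debris
    img = 0.85 + 0.05 * rng.standard_normal((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_nuclei):
        cy, cx = rng.uniform(20, size - 20, 2)
        a, b = rng.uniform(6, 14, 2)                # ellipse half-axes in pixels
        theta = rng.uniform(0, np.pi)               # random orientation
        yr = (yy - cy) * np.cos(theta) + (xx - cx) * np.sin(theta)
        xr = -(yy - cy) * np.sin(theta) + (xx - cx) * np.cos(theta)
        mask = (yr / a) ** 2 + (xr / b) ** 2 <= 1.0
        img[mask] = rng.uniform(0.15, 0.35)         # dark chromatin-like interior
    img = gaussian_filter(img, sigma=1.2)           # soften edges as if slightly defocused
    return np.clip(img, 0.0, 1.0)

field = synthetic_field()
print(field.shape, field.min(), field.max())
```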
|
5 |
Transdução e realidade híbrida em Avatar: uma experiência media assemblage / Rezende, Djaine Damiati [UNESP]. 13 October 2010 (PDF)
Universidade Estadual Paulista (UNESP). Previous issue date: 2010-10-13.
|
6 |
Anticurtaining - obrazový filtr pro elektronovou mikroskopii / Anticurtaining - Image Filter for Electron Microscopy / Dvořák, Martin. January 2021
Tomographic analysis with a focused ion beam (FIB) produces 3D images of the examined material at the nanoscale. This thesis presents a new machine-learning approach to eliminating the curtaining effect. A convolutional neural network trained with supervised learning is proposed to remove the damage from affected images; the network works with features of the damaged image obtained by wavelet transformation, and its output is a visually clean image. The thesis also designs the creation of a synthetic data set for training the network, produced by simulating the physical process that forms the real images: the milling of the examined material by the FIB and the imaging of the surface by a scanning electron microscope (SEM). The new approach performs well on real images. The results were evaluated qualitatively by laypeople and by experts in the field, who anonymously compared this solution with another method for eliminating the curtaining effect. The solution presents a new and promising approach to the elimination of the curtaining effect and contributes to a better workflow for handling images produced during material analysis.
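The thesis builds its training set by simulating the FIB milling and SEM imaging physics; a much cruder stand-in, shown only to illustrate how paired (damaged, clean) training data can be produced, is to corrupt a clean image with vertical stripe ("curtain") noise. The stripe statistics and image sizes below are invented.

```python
# Crude stand-in for the thesis's physics-based simulation: corrupt a clean
# SEM-like image with vertical stripe ("curtain") noise to obtain
# (damaged, clean) training pairs for a denoising CNN.
import numpy as np

rng = np.random.default_rng(1)

def add_curtaining(clean, n_stripes=40, max_amp=0.15):
    h, w = clean.shape
    damaged = clean.astype(float).copy()
    for _ in range(n_stripes):
        x0 = rng.integers(0, w)
        width = rng.integers(1, 6)                 # stripe width in pixels
        amp = rng.uniform(-max_amp, max_amp)       # brighter or darker stripe
        profile = amp * np.ones(h)
        profile += 0.02 * rng.standard_normal(h)   # slight variation along the stripe
        damaged[:, x0:x0 + width] += profile[:, None]
    return np.clip(damaged, 0.0, 1.0)

clean = rng.uniform(0.3, 0.7, size=(128, 128))     # placeholder for a simulated SEM slice
pair = (add_curtaining(clean), clean)              # one (input, target) training example
print(pair[0].shape, pair[1].shape)
```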
|
7 |
3D digitalizacija površi bez karakterističnih obeležja primenom blisko-predmetne fotogrametrije / 3D digitization of texture-less surfaces using close-range photogrammetry / Santoši, Željko. 02 October 2020
The creation of 3D models and their visualization has become an integral part of the process of developing new products or redesigning existing ones. This research addresses the problem of 3D digitization with close-range, structure-from-motion photogrammetry on surfaces without characteristic features, by projecting synthetically generated images as light textures. The emphasis is on generating new synthetic images with a pronounced visual texture, evaluating them, and applying them to objects with monotonous visual surfaces with the aim of improving the overall accuracy of the reconstructed 3D models. The use of synthetic images and their light textures was verified in terms of geometric and dimensional accuracy by means of computer-aided inspection (CAI).
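The abstract does not specify how the light textures are generated; as one simple, hypothetical way to produce a high-contrast synthetic pattern for projection onto a texture-less surface, band-limited noise can be thresholded as in the NumPy/SciPy sketch below. The resolution and filter width are arbitrary choices.

```python
# Illustrative only: build a high-contrast pseudo-random pattern that could be
# projected onto a texture-less surface so that structure-from-motion feature
# matching has something to lock onto. Parameters are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

def projection_pattern(height=768, width=1024, blob_sigma=3.0):
    noise = rng.standard_normal((height, width))
    smooth = gaussian_filter(noise, sigma=blob_sigma)   # band-limit: blobs instead of pixel noise
    pattern = (smooth > 0).astype(np.uint8) * 255       # binarise for maximum contrast
    return pattern

pattern = projection_pattern()
print(pattern.shape, pattern.dtype)
# the array could then be written to an image file and sent to the projector
```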
|
8 |
Leveraging Synthetic Images with Domain-Adversarial Neural Networks for Fine-Grained Car Model Classification / Smith, Dayyan. January 2021
Supervised learning methods require vast amounts of annotated images to successfully train an image classifier, and acquiring the necessary annotated images is costly. The increasing availability of photorealistic, automatically annotated computer-generated images raises the question of the conditions under which this synthetic data can be leveraged during training. We investigate the conditions that make it possible to leverage computer-generated renders of car models for fine-grained car model classification.
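The abstract names domain-adversarial neural networks; a minimal PyTorch sketch of the standard DANN ingredient, a gradient reversal layer feeding a synthetic-vs-real domain classifier, is given below. The layer sizes, the 196-class label head and the lambda value are placeholders, not the thesis's setup.

```python
# Minimal domain-adversarial setup (DANN-style): a gradient reversal layer
# makes the feature extractor fool a synthetic-vs-real domain classifier
# while the label head learns the car-model classes.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # reverse gradients w.r.t. the features

features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
label_head = nn.Linear(256, 196)              # e.g. fine-grained car-model classes
domain_head = nn.Linear(256, 2)               # synthetic render vs. real photo
ce = nn.CrossEntropyLoss()

def dann_loss(x, y_label, y_domain, lam=0.3):
    f = features(x)
    loss_label = ce(label_head(f), y_label)   # in practice only the labelled (synthetic) images
    loss_domain = ce(domain_head(GradReverse.apply(f, lam)), y_domain)
    return loss_label + loss_domain

x = torch.randn(8, 3, 64, 64)
y_label = torch.randint(0, 196, (8,))
y_domain = torch.randint(0, 2, (8,))
print(dann_loss(x, y_label, y_domain).item())
```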
|
9 |
Quantification du mouvement et de la déformation cardiaques à partir d'IRM marquée tridimensionnelle sur des données acquises par des imageurs Philips / Quantification of cardiac motion and deformation from 3D tagged MRI acquired by Philips imaging devices / Zhou, Yitian. 03 July 2017
Cardiovascular disease is one of the major causes of death worldwide. A number of heart diseases can be diagnosed through the analysis of cardiac images after quantifying shape and function. However, the application of deformation quantification algorithms in clinical routine is held back by the lack of a solid validation. In this thesis, we introduce a fast 3D tagged MR quantification algorithm, as well as a novel pipeline for generating synthetic cardiac US and MR image sequences for validation purposes.
The main contributions are described below. First, we proposed a novel 3D extension of the well-known harmonic phase tracking method. The point-wise phase-based optical flow tracking was combined with an anatomical regularization model in order to estimate anatomically coherent myocardial motions. In particular, special efforts were made to ensure a reasonable radial strain estimation by enforcing myocardial incompressibility through the divergence theorem. The proposed HarpAR algorithm was evaluated on both healthy volunteers and patients with different levels of ischemia. On volunteer data, the tracking was found to be as accurate as the best candidates of a recent benchmark. On patient data, strain dispersion was shown to correlate with the extent of transmural fibrosis, and ischemic segments could be distinguished from healthy ones from the strain curves. Second, we proposed a simulation pipeline for generating realistic synthetic cardiac US, cine and tagged MR sequences of the same virtual subject. Template sequences, a state-of-the-art electro-mechanical (E/M) model and physical simulators were combined in a unified framework for generating image data. In total, we simulated 18 virtual patients (3 healthy, 3 dyssynchrony and 12 ischemia), each with synthetic sequences of 3D cine MR, US and tagged MR. The synthetic images were assessed both qualitatively and quantitatively: they show realistic image textures similar to real acquisitions, and both the ejection fraction and the regional strain values are in agreement with reference values published in the literature. Finally, we present a preliminary benchmarking study using the synthetic database, comparing gHarpAR with another tracking algorithm, SparseDemons, on the virtual patients. The results show that SparseDemons outperformed gHarpAR on cine MR and US images. On tagged MR, both methods obtained similar accuracies for motion and for two strain components (circumferential and longitudinal); however, gHarpAR quantified radial strain more accurately, thanks to the myocardial incompressibility constraint. We conclude that motion quantification solutions can be improved by designing them according to the image characteristics of the modality, and that a solid evaluation framework can be a key asset in comparing different algorithmic options.
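The 3D HarpAR algorithm itself is not reproduced here; a simplified 2D illustration of the harmonic-phase idea it extends (isolate the first spectral harmonic of a tagged image and take its phase) is sketched below in NumPy. The tag period, filter bandwidth and toy test image are assumptions for the example.

```python
# Simplified 2D illustration of harmonic phase (HARP) extraction, NOT the
# thesis's 3D HarpAR method: isolate one spectral harmonic of a tagged image
# with a Gaussian band-pass around the tag frequency and take the phase of
# the inverse transform. The wrapped phase follows material points.
import numpy as np

def harmonic_phase(tagged, tag_period_px=8.0, bandwidth=0.02):
    h, w = tagged.shape
    F = np.fft.fftshift(np.fft.fft2(tagged))
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]    # cycles per pixel, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]    # cycles per pixel, horizontal
    k0 = 1.0 / tag_period_px                            # first harmonic of vertical tag lines
    bp = np.exp(-((fx - k0) ** 2 + fy ** 2) / (2 * bandwidth ** 2))
    harmonic = np.fft.ifft2(np.fft.ifftshift(F * bp))
    return np.angle(harmonic)

# toy tagged image: vertical sinusoidal tag pattern warped by a smooth "deformation"
y, x = np.mgrid[0:128, 0:128].astype(float)
displacement = 2.0 * np.sin(2 * np.pi * y / 128.0)
tagged = 0.5 + 0.5 * np.cos(2 * np.pi * (x + displacement) / 8.0)
phase = harmonic_phase(tagged)
print(phase.shape, float(phase.min()), float(phase.max()))
```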
|
10 |
Contributions to Engineering Big Data Transformation, Visualisation and Analytics. Adapted Knowledge Discovery Techniques for Multiple Inconsistent Heterogeneous Data in the Domain of Engine Testing / Jenkins, Natasha N. January 2022
In the automotive sector, engine testing generates vast data volumes that are mainly beneficial to requesting engineers. However, these tests are often not revisited for further analysis due to inconsistent data quality and a lack of structured assessment methods. Moreover, the absence of a tailored knowledge discovery process hinders effective preprocessing, transformation, analytics, and visualization of data, restricting the potential for historical data insights. Another challenge arises from the heterogeneous nature of test structures, resulting in varying measurements, data types, and contextual requirements across different engine test datasets.
This thesis aims to overcome these obstacles by introducing a specialized knowledge discovery approach for the distinctive Multiple Inconsistent Heterogeneous Data (MIHData) format characteristic of engine testing. The proposed methods include adapting data quality assessment and reporting, classifying engine types through compositional features, employing modified dendrogram similarity measures for classification, performing customized feature extraction, transformation, and structuring, generating and manipulating synthetic images to enhance data visualization, and applying adapted list-based indexing for multivariate engine test summary data searches.
The thesis demonstrates how these techniques enable exploratory analysis, visualization, and classification, presenting a practical framework to extract meaningful insights from historical data within the engineering domain. The ultimate objective is to facilitate the reuse of past data resources, contributing to informed decision-making processes and enhancing comprehension within the automotive industry. Through its focus on data quality, heterogeneity, and knowledge discovery, this research establishes a foundation for optimized utilization of historical Engine Test Data (ETD) for improved insights. / Soroptimist International Bradford
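The modified dendrogram similarity measure is not detailed in the abstract; a generic starting point for grouping engine tests by compositional feature vectors with agglomerative hierarchical clustering (SciPy) could look like the sketch below. The feature table is invented for illustration.

```python
# Generic starting point only (not the thesis's modified dendrogram measure):
# group engine-test summaries by compositional feature vectors using
# agglomerative hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = engine tests, columns = e.g. fraction of channels per measurement family
features = np.array([
    [0.50, 0.30, 0.20],   # test A
    [0.48, 0.32, 0.20],   # test B (similar to A)
    [0.10, 0.20, 0.70],   # test C
    [0.12, 0.18, 0.70],   # test D (similar to C)
])

dist = pdist(features, metric="euclidean")   # condensed pairwise distance vector
Z = linkage(dist, method="average")          # dendrogram encoded as a linkage matrix
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)                                # e.g. [1 1 2 2]: two engine-type groups
```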
|