  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Approche théorique et expérimentale du comportement électro-optique des systèmes polymères/cristaux liquides / Theoretical and experimental approach of the electro-optical behaviour of polymer/liquid crystal systems

Benaissa, Djamila 24 November 2009 (has links)
A study of polymer-dispersed liquid crystal (PDLC) materials, prepared by ultraviolet (UV) radiation-induced phase separation, was carried out on mixtures of the nematic liquid crystal E7 and the monomer tripropyleneglycoldiacrylate (TPGDA). These materials offer attractive electro-optical functionality, notably for glazing with controlled transparency. Infrared spectroscopy of the cured polymer networks showed that near-complete conversion of the monomer's reactive acrylate groups is reached for a mixture containing 70% liquid crystal and 30% monomer. The materials were then characterized by differential scanning calorimetry, polarized-light optical microscopy, scanning electron microscopy, and UV-visible spectroscopy, which provided information on their thermophysical, morphological, and spectral properties. The electro-optical response of the PDLC films was modelled with a simple approach based on a hierarchy of order parameters. The model, whose calculations were carried out under two theoretical approximations (RGA and ADA), gave a satisfactory description of the electro-optical behaviour of these complex systems. Confronting the model with experiment yielded a number of results useful for understanding and improving the electro-optical response of PDLC films.
12

Zero Shot Learning for Visual Object Recognition with Generative Models

January 2020 (has links)
abstract: Visual object recognition has achieved great success with advancements in deep learning technologies. Notably, the existing recognition models have gained human-level performance on many of the recognition tasks. However, these models are data hungry, and their performance is constrained by the amount of training data. Inspired by the human ability to recognize object categories based on textual descriptions of objects and previous visual knowledge, the research community has extensively pursued the area of zero-shot learning. In this area of research, machine vision models are trained to recognize object categories that are not observed during the training process. Zero-shot learning models leverage textual information to transfer visual knowledge from seen object categories in order to recognize unseen object categories. Generative models have recently gained popularity as they synthesize unseen visual features and convert zero-shot learning into a classical supervised learning problem. These generative models are trained using seen classes and are expected to implicitly transfer the knowledge from seen to unseen classes. However, their performance is stymied by overfitting towards seen classes, which leads to substandard performance in generalized zero-shot learning. To address this concern, this dissertation proposes a novel generative model that leverages the semantic relationship between seen and unseen categories and explicitly performs knowledge transfer from seen categories to unseen categories. Experiments were conducted on several benchmark datasets to demonstrate the efficacy of the proposed model for both zero-shot learning and generalized zero-shot learning. The dissertation also provides a unique Student-Teacher based generative model for zero-shot learning and concludes with future research directions in this area. / Dissertation/Thesis / Masters Thesis Computer Science 2020
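The feature-generating strategy summarized above can be illustrated with a deliberately simplified sketch. Everything here is an illustrative assumption rather than the dissertation's actual model: a least-squares map from class attributes to visual features stands in for the trained conditional generator, and one-hot attribute vectors stand in for real semantic descriptions. Prototypes synthesized for an unseen class let a nearest-prototype classifier recognize it, turning zero-shot recognition into a supervised-style problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: 5 seen classes with one-hot attribute vectors, one unseen
# class ("zebra") whose attributes blend two seen classes.
# true_W is the hidden attribute-to-visual-feature mapping.
true_W = rng.normal(size=(5, 8))
seen_attrs = np.eye(5)
zebra_attr = np.array([0.5, 0.5, 0.0, 0.0, 0.0])

# Seen-class training features: class mean plus Gaussian noise.
X = np.vstack([seen_attrs[c] @ true_W + 0.1 * rng.normal(size=8)
               for c in range(5) for _ in range(50)])
A = np.repeat(seen_attrs, 50, axis=0)

# Stand-in "generator": least-squares map from attributes to features,
# fit only on seen-class data (a conditional GAN would take this role).
W_hat, *_ = np.linalg.lstsq(A, X, rcond=None)

# Synthesize a feature prototype for every class, unseen included,
# then classify by nearest prototype -- ZSL reduced to supervised form.
protos = np.vstack([seen_attrs @ W_hat, zebra_attr @ W_hat])
labels = ["c0", "c1", "c2", "c3", "c4", "zebra"]

def classify(x):
    return labels[int(np.argmin(np.linalg.norm(protos - x, axis=1)))]

test = zebra_attr @ true_W + 0.1 * rng.normal(size=8)
print(classify(test))  # nearest-prototype label for an unseen-class sample
```

Because the generator is fit only on seen classes, any gap between synthesized and real unseen features mirrors the seen-class overfitting problem the dissertation targets.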
13

A Deep Learning Approach to Brain Tracking of Sound

Hermansson, Oscar January 2022 (has links)
Objectives: Development of accurate auditory attention decoding (AAD) algorithms, capable of identifying the attended sound source from speech-evoked electroencephalography (EEG) responses, could lead to new solutions for hearing-impaired listeners: neuro-steered hearing aids. Many existing AAD algorithms are either inaccurate or very slow, so there is a need to develop new EEG-based AAD methods. The first objective of this project was to investigate deep neural network (DNN) models for AAD and compare them to state-of-the-art linear models. The second objective was to investigate whether generative adversarial networks (GANs) could be used for speech-evoked EEG data augmentation to improve AAD performance. Design: The proposed methods were tested on a dataset of 34 participants who performed an auditory attention task. They were instructed to attend to one of two talkers in front of them and to ignore the talker on the other side and the background noise behind them, while high-density EEG was recorded. Main Results: The linear models had an average attended-vs-ignored speech classification accuracy of 95.87% and 50% for ∼30-second and 8-second time windows, respectively. A DNN model designed for AAD achieved an average classification accuracy of 82.32% and 58.03% for ∼30-second and 8-second time windows, respectively, when trained only on the real EEG data. The results show that GANs generated relatively realistic speech-evoked EEG signals. A DNN trained with GAN-generated data achieved an average accuracy of 90.25% for 8-second time windows. On shorter trials, GAN-generated EEG data significantly improved classification performance compared with models trained only on real EEG data. Conclusion: The results suggest that DNN models can outperform linear models in AAD tasks, and that GAN-based EEG data augmentation can be used to further improve DNN performance.
These results extend prior work and bring us closer to the use of EEG for decoding auditory attention in next-generation neuro-steered hearing aids.
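The linear baseline that the thesis compares against can be illustrated with a minimal stimulus-reconstruction sketch. Everything below is an illustrative assumption (synthetic envelopes, zero-lag linear mixing, arbitrary noise levels), not the study's actual pipeline: a least-squares decoder reconstructs the attended speech envelope from simulated EEG, and the talker whose envelope correlates best with the reconstruction is declared attended:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trial: two talker speech envelopes, and 16-channel "EEG"
# that linearly follows the attended envelope plus heavy noise.
# (Zero-lag mixing is a simplification of real lagged AAD models.)
n = 2000
env_attended = np.abs(rng.normal(size=n))
env_ignored = np.abs(rng.normal(size=n))
mixing = rng.normal(size=(1, 16))
eeg = env_attended[:, None] @ mixing + 2.0 * rng.normal(size=(n, 16))

# Train a least-squares decoder on the first half of the trial...
train, test = slice(0, 1000), slice(1000, 2000)
w, *_ = np.linalg.lstsq(eeg[train], env_attended[train], rcond=None)

# ...then decode the held-out half and compare correlations with the
# two candidate envelopes to make the attention decision.
recon = eeg[test] @ w
r_att = np.corrcoef(recon, env_attended[test])[0, 1]
r_ign = np.corrcoef(recon, env_ignored[test])[0, 1]
print("attended" if r_att > r_ign else "ignored")
```

The thesis's DNN models replace the linear map with a learned nonlinear decoder, and the GAN augmentation enlarges the training half with synthetic EEG trials.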
14

The impact of AI on branding elements: Opportunities and challenges as seen by branding and IT specialists

Sabbar, Alfedaa, Nygren Gustafsson, Lina January 2021 (has links)
Background: The usage of AI is becoming increasingly necessary in almost every industry, including marketing and branding. AI can help managers, marketers and designers in the marketing and branding sectors overcome realistic and practical challenges by providing data-driven results that can inform decision-making. Nevertheless, the implementation and acceptance of AI systems vary widely across industries, with brand building still lagging behind.  Purpose: This research aims to develop a deeper understanding of why AI systems are not yet commonly used in the branding industry, with emphasis on how they could be useful. To that end, the main opportunities and threats of using AI in branding, as seen by branding and IT specialists, are explored and expressed.  Method: To achieve the purpose of this study, a qualitative study was conducted. Semi-structured interviews were used to collect primary data; in total, 15 interviews with branding and IT specialists were carried out. The data was transcribed and analyzed using thematic analysis, from which four main themes emerged.  Conclusion: The results show that AI is capable of creating brand elements, though largely limited to non-visual elements because AI solutions lack creativity and emotion. The findings indicate that the perceived possibilities of implementing AI in branding are mostly cost- and time-related, since AI tends to be capable of solving tasks that are cost- and time-consuming. The perceived threats mainly involve i) job losses and ii) intrusion on the roles of branding professionals.
15

UNCERTAINTY, EDGE, AND REVERSE-ATTENTION GUIDED GENERATIVE ADVERSARIAL NETWORK FOR AUTOMATIC BUILDING DETECTION IN REMOTELY SENSED IMAGES

Somrita Chattopadhyay (12210671) 18 April 2022 (has links)
Despite recent advances in deep-learning based semantic segmentation, automatic building detection from remotely sensed imagery is still a challenging problem owing to large variability in the appearance of buildings across the globe. The errors occur mostly around the boundaries of the building footprints, in shadow areas, and when detecting buildings whose exterior surfaces have reflectivity properties very similar to those of the surrounding regions. To overcome these problems, we propose a generative adversarial network based segmentation framework with an uncertainty attention unit and a refinement module embedded in the generator. The refinement module, composed of edge and reverse attention units, is designed to refine the predicted building map. The edge attention enhances the boundary features to estimate building boundaries with greater precision, and the reverse attention allows the network to explore the features missing in the previously estimated regions. The uncertainty attention unit assists the network in resolving uncertainties in classification. As a measure of the power of our approach, as of January 5, 2022, it ranks in second place on DeepGlobe's public leaderboard, despite the fact that the main focus of our approach (refinement of the building edges) does not align exactly with the metrics used for leaderboard rankings. Our overall F1-score on DeepGlobe's challenging dataset is 0.745. We also report improvements on the previous-best results for the challenging INRIA Validation Dataset, for which our network achieves an overall IoU of 81.28% and an overall accuracy of 97.03%. Along the same lines, for the official INRIA Test Dataset, our network scores 77.86% and 96.41% in overall IoU and accuracy. We have also improved upon the previous best results on two other datasets: for the WHU Building Dataset, our network achieves 92.27% IoU, 96.73% precision, 95.24% recall and 95.98% F1-score.
And, finally, for the Massachusetts Buildings Dataset, our network achieves 96.19% relaxed IoU score and 98.03% relaxed F1-score over the previous best scores of 91.55% and 96.78% respectively, and in terms of non-relaxed F1 and IoU scores, our network outperforms the previous best scores by 2.77% and 3.89% respectively.
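The two refinement units named in the abstract admit a compact sketch. The arrays and shapes below are illustrative stand-ins, not the actual network: reverse attention reweights features by one minus the predicted building probability, steering refinement toward regions the coarse prediction missed, while edge attention reweights by the gradient magnitude of the probability map to sharpen boundaries:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Coarse building logits (H x W) and a feature map (C x H x W),
# both random here purely for shape illustration.
rng = np.random.default_rng(2)
logits = rng.normal(size=(8, 8))
feats = rng.normal(size=(4, 8, 8))

# Reverse attention: emphasize regions the coarse prediction currently
# misses, so the refinement step can recover them.
rev_attn = 1.0 - sigmoid(logits)   # high where predicted probability is low
rev_feats = feats * rev_attn       # broadcasts over the channel axis

# Edge attention: emphasize boundary pixels via the gradient magnitude
# of the predicted probability map.
p = sigmoid(logits)
gy, gx = np.gradient(p)
edge_attn = np.sqrt(gx**2 + gy**2)
edge_feats = feats * edge_attn

print(rev_feats.shape, edge_feats.shape)
```

In the paper's framework these reweighted features feed back into the generator; here only the attention arithmetic is shown.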
16

Analysis of Artifact Formation and Removal in GAN Training

Hackney, Daniel 05 June 2023 (has links)
No description available.
17

GANs in the Process of Art Creation: Exploring the Potential of ML in Preserving the Traditional Style of Saudi Arabia Art and Craft Through Participatory Museum Experience.

Patrzalek, Roksana January 2023 (has links)
This project explores the role of GANs (Generative Adversarial Networks) in the process of art creation, with a focus on the traditional art and craft of Saudi Arabia. It introduces a concept for a participatory museum experience where visitors can interact with an Artificial Intelligence (AI) generative tool to create their own piece of traditional Saudi Arabian art. This study investigates different types of GAN models that can make traditional art creation more accessible and attractive to the younger generation by introducing the possibilities of emerging technology. At the same time, it analyzes the potential limitations and concerns that such a fast-developing technology carries. Within the broad scope of this project, which includes technology research, cultural studies of Saudi Arabian art and craft, training AI models, and iterative prototyping, the research focuses on AI-powered services through the lens of User Experience (UX). UX studies and corresponding methodologies from the field are used to explore the quality of the interactions between the user (visitor) and the AI system. Based on the design process performed, the outcome proposes a screen-based image generation tool that takes a visual-programming approach to the interface: it visualizes the generation path along with the data flow and allows the user to connect generated images to create new content. The presented solution introduces an alternative approach to the design of image generators, in which users can follow the creation path from the first prompt to the final image.
18

Using Generative Adversarial Networks to Classify Structural Damage Caused by Earthquakes

Delacruz, Gian P 01 June 2020 (has links) (PDF)
The amount of structural damage image data produced in the aftermath of an earthquake can be staggering, and it is challenging for a few human volunteers to efficiently filter and tag these images with meaningful damage information. There are several solutions for automating post-earthquake reconnaissance image tagging using Machine Learning (ML), classifying each occurrence of damage by building material and structural member type. ML algorithms are data-driven, improving with increased training data. Thanks to the vast amount of data available and advances in computer architectures, ML, and in particular Deep Learning (DL), has become one of the most popular image classification approaches, producing results comparable to, and in some cases superior to, those of human experts. These kinds of algorithms need labeled input images for training, yet even when many images are available, most are unlabeled, and labeling them takes structural engineers a large amount of time. Current earthquake image databases either lack label information or are incomplete, which significantly slows progress toward a solution, and they are incredibly difficult to search. To train an ML algorithm to classify one of the structural damage types, it took the architecture school an entire year to gather 200 images of the specific damage. That number is clearly not enough to avoid overfitting, so for this thesis we decided to generate synthetic images of the specific structural damage. In particular, we use Generative Adversarial Networks (GANs) to generate the synthetic images and enable the fast classification of rail and road damage caused by earthquakes. Fast classification of rail and road damage can help keep people safe and better prepare the reconnaissance teams that manage recovery tasks. GANs combine classification neural networks with generative neural networks.
For this thesis we combine a convolutional neural network (CNN) with a generative neural network. By taking a classifier trained in a GAN and modifying it to classify other images, the classifier can take advantage of the GAN training without having to find more training data. The classifier trained in this way achieved an 88% accuracy score when classifying images of structural damage caused by earthquakes.
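The transfer step described in this abstract (reusing a GAN-trained classifier for a new labeling task) can be sketched in a toy form. This is a hedged illustration, not the thesis's network: a frozen random projection stands in for the convolutional body a GAN discriminator would learn during training, and only a new linear head is fit on the damage labels:

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "discriminator body": a random projection standing in for the
# convolutional features a trained GAN discriminator would provide.
W_body = rng.normal(size=(64, 16)) / 8.0

def features(x):                 # x: flattened 64-pixel "image"
    return np.tanh(x @ W_body)

# Tiny labeled set: two synthetic damage classes with distinct
# pixel statistics (purely illustrative data).
def sample(c, n):
    base = np.zeros(64)
    base[:32] = 2.0 if c == 1 else -2.0
    return base + rng.normal(size=(n, 64))

X = np.vstack([sample(0, 40), sample(1, 40)])
y = np.array([0] * 40 + [1] * 40)

# Swap the real/fake head for a new linear head fit on damage labels;
# only the head is (re)trained, the body stays frozen.
F = features(X)
w, *_ = np.linalg.lstsq(F, 2.0 * y - 1.0, rcond=None)

def classify(x):
    return int(features(x) @ w > 0)

print(classify(sample(1, 1)[0]))
```

The point of the sketch is the division of labor: the expensive representation comes from GAN training, and only a cheap head needs the scarce labeled earthquake images.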
19

GAN-Based Approaches for Generating Structured Data in the Medical Domain

Abedi, Masoud, Hempel, Lars, Sadeghi, Sina, Kirsten, Toralf 03 November 2023 (has links)
Modern machine and deep learning methods require large datasets to achieve reliable and robust results. This requirement is often difficult to meet in the medical field, due to data sharing limitations imposed by privacy regulations or the presence of a small number of patients (e.g., rare diseases). To address this data scarcity and to improve the situation, novel generative models such as Generative Adversarial Networks (GANs) have been widely used to generate synthetic data that mimic real data by representing features that reflect health-related information without reference to real patients. In this paper, we consider several GAN models to generate synthetic data used for training binary (malignant/benign) classifiers, and compare their performances in terms of classification accuracy with cases where only real data are considered. We aim to investigate how synthetic data can improve classification accuracy, especially when a small amount of data is available. To this end, we have developed and implemented an evaluation framework where binary classifiers are trained on extended datasets containing both real and synthetic data. The results show improved accuracy for classifiers trained with generated data from more advanced GAN models, even when limited amounts of original data are available.
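The paper's evaluation framework, classifiers trained on real data alone versus on real plus synthetic data, can be outlined with a toy stand-in. Everything here is assumed for illustration: a class-conditional Gaussian replaces the trained GAN, and a nearest-centroid classifier replaces the actual binary (malignant/benign) classifiers; only the comparison protocol mirrors the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "malignant vs benign" tabular data: 8 features per patient,
# class 1 shifted along a few feature dimensions.
def draw_real(n):
    y = rng.integers(0, 2, size=n)
    shift = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=float)
    return rng.normal(size=(n, 8)) + 1.5 * y[:, None] * shift, y

# Stand-in generator: class-conditional Gaussian fitted to the small
# real set (a trained GAN would take this role in the framework).
def fit_generator(X, y):
    stats = {c: (X[y == c].mean(0), X[y == c].std(0) + 1e-6) for c in (0, 1)}
    def gen(c, n):
        m, s = stats[c]
        return m + s * rng.normal(size=(n, 8))
    return gen

def nearest_centroid(X, y):
    cents = {c: X[y == c].mean(0) for c in (0, 1)}
    return lambda x: min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

def accuracy(clf, n=500):
    X, y = draw_real(n)
    return float(np.mean([clf(x) == c for x, c in zip(X, y)]))

# The comparison protocol: real only vs. real extended with synthetic.
X_real, y_real = draw_real(20)          # small-data regime
gen = fit_generator(X_real, y_real)
X_syn = np.vstack([gen(0, 200), gen(1, 200)])
y_syn = np.array([0] * 200 + [1] * 200)

acc_real = accuracy(nearest_centroid(X_real, y_real))
acc_ext = accuracy(nearest_centroid(np.vstack([X_real, X_syn]),
                                    np.concatenate([y_real, y_syn])))
print(f"real only: {acc_real:.2f}  extended: {acc_ext:.2f}")
```

With this crude stand-in generator no improvement is guaranteed; the paper's point is that samples from a well-trained GAN can lift the extended-dataset accuracy precisely when the real set is small.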
20

Going Deeper with Images and Natural Language

Ma, Yufeng 29 March 2019 (has links)
One aim in the area of artificial intelligence (AI) is to develop a smart agent with high intelligence that is able to perceive and understand the complex visual environment around us. More ambitiously, it should be able to interact with us about its surroundings in natural language. Thanks to the progress made in deep learning, we have seen huge breakthroughs towards this goal over the last few years. The developments have been extremely rapid in visual recognition, where machines can now categorize images into multiple classes and detect various objects within an image, with an ability that is competitive with, or even surpasses, that of humans. Meanwhile, we have witnessed similar strides in natural language processing (NLP): computers can now perform text classification, machine translation, and related tasks almost perfectly. However, despite much inspiring progress, most of these achievements are still confined to a single domain and do not handle inter-domain situations. The interaction between the visual and textual areas is still quite limited, although there has been progress in image captioning, visual question answering, and related tasks. In this dissertation, we design models and algorithms that enable us to build in-depth connections between images and natural languages, which help us to better understand their inner structures. In particular, we first study how to make machines generate image descriptions that are indistinguishable from ones expressed by humans, which as a result also achieved better quantitative evaluation performance. Second, we devise a novel algorithm for measuring review congruence, which takes an image and review text as input and quantifies the relevance of each sentence to the image. The whole model is trained without any supervised ground-truth labels. Finally, we propose a brand new AI task called Image Aspect Mining, to detect visual aspects in images and identify aspect-level ratings within the review context.
On the theoretical side, this research contributes to multiple research areas in Computer Vision (CV), Natural Language Processing (NLP), interactions between CV and NLP, and Deep Learning. Regarding impact, these techniques will benefit related users such as the visually impaired, customers reading reviews, merchants, and AI researchers in general. / Doctor of Philosophy
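The review-congruence idea above (scoring each review sentence's relevance to an image) reduces, at its core, to similarity in a shared embedding space. The vectors and sentences below are made-up illustrations, not outputs of the dissertation's unsupervised model; the sketch shows only the scoring step, cosine similarity between an image embedding and per-sentence embeddings:

```python
import numpy as np

# Hypothetical embeddings: one image vector and one vector per review
# sentence, all in a shared 6-dim space (names and values are invented).
image = np.array([0.9, 0.1, 0.0, 0.4, 0.0, 0.2])
sentences = {
    "The burger was juicy":        np.array([0.8, 0.2, 0.1, 0.5, 0.0, 0.1]),
    "Parking nearby was terrible": np.array([0.0, 0.1, 0.9, 0.0, 0.8, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Congruence score: relevance of each sentence to the image; the
# food-related sentence should score far above the parking one.
scores = {s: cosine(image, v) for s, v in sentences.items()}
for s, sc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{sc:.2f}  {s}")
```

The dissertation's contribution is learning such embeddings without ground-truth relevance labels; once learned, the per-sentence scoring is exactly this cheap.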
