1

BlobGAN-3D: A Spatially-Disentangled 3D-Aware Generative Model for Indoor Scenes

Wang, Qian 03 1900 (has links)
3D-aware image synthesis has attracted increasing interest because it models the 3D nature of the real world. However, realistic object-level editing of generated images in multi-object scenes remains a challenge. Recently, a 2D GAN termed BlobGAN demonstrated strong multi-object editing capabilities on real-world indoor scene datasets. In this work, we propose BlobGAN-3D, a 3D-aware extension of the original 2D BlobGAN. By extending the 2D blobs into 3D blobs, we enable explicit camera pose control while maintaining the disentanglement of individual objects in the scene. We retain the object-level editing capabilities of BlobGAN and additionally allow flexible control over the 3D location of objects in the scene. We test our method on real-world indoor datasets and show that it achieves image quality comparable to the 2D BlobGAN and other 3D-aware GAN baselines, while being the first to enable camera pose control and object-level editing in challenging multi-object real-world scenarios.
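As a rough illustration of the blob-based scene representation described above (a sketch under assumptions, not BlobGAN-3D's actual architecture), the snippet below projects a set of isotropic 3D blobs to per-blob 2D opacity maps under an explicit pinhole camera; all function and parameter names are invented for the example.

```python
# Toy sketch (not BlobGAN-3D's exact formulation): project isotropic 3D blobs
# to 2D opacity maps under a pinhole camera, so each blob can be moved in 3D
# and the camera pose can be changed independently of the blob parameters.
import numpy as np

def project_blobs(centers, scales, cam_R, cam_t, focal, H, W):
    """centers: (N,3) blob centers in world space; scales: (N,) blob radii.
    cam_R (3,3), cam_t (3,): world-to-camera rotation/translation.
    Returns an (N, H, W) stack of per-blob opacity maps."""
    cam = centers @ cam_R.T + cam_t            # world -> camera coordinates
    z = np.clip(cam[:, 2:3], 1e-3, None)       # depth (avoid divide-by-zero)
    uv = focal * cam[:, :2] / z                # pinhole projection
    uv = uv + np.array([W / 2.0, H / 2.0])     # shift to pixel coordinates

    ys, xs = np.mgrid[0:H, 0:W]
    maps = []
    for (u, v), s, depth in zip(uv, scales, z[:, 0]):
        sigma = focal * s / depth               # apparent radius shrinks with depth
        d2 = (xs - u) ** 2 + (ys - v) ** 2
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps)                       # later blended into feature maps

# Example: two blobs viewed from an identity camera.
opacity = project_blobs(
    centers=np.array([[0.0, 0.0, 3.0], [0.5, 0.2, 4.0]]),
    scales=np.array([0.3, 0.5]),
    cam_R=np.eye(3), cam_t=np.zeros(3), focal=128.0, H=64, W=64)
print(opacity.shape)  # (2, 64, 64)
```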
2

Inference-based Geometric Modeling for the Generation of Complex Cluttered Virtual Environments

Biggers, Keith Edward 2011 May 1900 (has links)
As the use of simulation increases across many different application domains, the need for high-fidelity three-dimensional virtual representations of real-world environments has never been greater. This need has driven the research and development of both faster and easier methodologies for creating such representations. In this research, we present two different inference-based geometric modeling techniques that support the automatic construction of complex cluttered environments. The first method we present is a surface reconstruction-based approach that is capable of reconstructing solid models from a point cloud capture of a cluttered environment. Our algorithm is capable of identifying objects of interest amongst a cluttered scene, and reconstructing complete representations of these objects even in the presence of occluded surfaces. This approach incorporates a predictive modeling framework that uses a set of user provided models for prior knowledge, and applies this knowledge to the iterative identification and construction process. Our approach uses a local to global construction process guided by rules for fitting high quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several synthetic and real-world datasets containing heavy clutter and occlusion. The second method we present is a generative modeling-based approach that can construct a wide variety of diverse models based on user provided templates. This technique leverages an inference-based construction algorithm for developing solid models from these template objects. This algorithm samples and extracts surface patches from the input models, and develops a Petri net structure that is used by our algorithm for properly fitting these patches in a consistent fashion. Our approach uses this generated structure, along with a defined parameterization (either user-defined through a simple sketch-based interface or algorithmically defined through various methods), to automatically construct objects of varying sizes and configurations. These variations can include arbitrary articulation, and repetition and interchanging of parts sampled from the input models. Finally, we affirm our motivation by showing an application of these two approaches. We demonstrate how the constructed environments can be easily used within a physically-based simulation, capable of supporting many different application domains.
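The following toy sketch illustrates only the general flavor of fitting prior-model patches to a captured point cloud (scoring candidate patches by a nearest-neighbor residual); the scoring rule, names, and synthetic data are assumptions for illustration and not the thesis's rule-based, local-to-global algorithm.

```python
# Illustrative sketch only: score candidate prior-model patches against a local
# neighborhood of a captured point cloud using a nearest-neighbor residual.
import numpy as np
from scipy.spatial import cKDTree

def patch_fit_error(scene_points, patch_points):
    """Mean distance from each patch point to its nearest scene point."""
    tree = cKDTree(scene_points)
    dists, _ = tree.query(patch_points)
    return float(dists.mean())

def best_patch(scene_points, candidate_patches):
    """Pick the prior-model patch that best explains the local scene region."""
    errors = [patch_fit_error(scene_points, p) for p in candidate_patches]
    return int(np.argmin(errors)), min(errors)

# Example with synthetic data: a noisy planar region vs. planar/curved candidates.
rng = np.random.default_rng(0)
scene = np.c_[rng.uniform(0, 1, (500, 2)), 0.01 * rng.standard_normal(500)]
flat = np.c_[rng.uniform(0, 1, (200, 2)), np.zeros(200)]
curved = np.c_[rng.uniform(0, 1, (200, 2)), 0.3 * rng.uniform(0, 1, 200) ** 2]
idx, err = best_patch(scene, [flat, curved])
print(idx, round(err, 4))  # the flat patch should win
```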
3

Image Embedding into Generative Adversarial Networks

Abdal, Rameen 14 April 2020 (has links)
We propose an efficient algorithm to embed a given image into the latent space of StyleGAN. This embedding enables semantic image editing operations that can be applied to existing photographs. Taking the StyleGAN trained on the FFHQ dataset as an example, we show results for image morphing, style transfer, and expression transfer. Studying the results of the embedding algorithm provides valuable insights into the structure of the StyleGAN latent space. We propose a set of experiments to test what class of images can be embedded, how they are embedded, what latent space is suitable for embedding, and if the embedding is semantically meaningful.
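For context, optimization-based embedding of this kind is typically formulated as gradient descent on a latent code; the sketch below assumes a pretrained `generator` and a differentiable `perceptual_loss` as stand-ins and is not the thesis's exact procedure.

```python
# Minimal sketch of optimization-based GAN embedding (assumes a pretrained
# `generator` mapping a latent tensor to an image and a differentiable
# `perceptual_loss`; both are stand-ins, not the thesis's exact components).
import torch

def embed_image(target, generator, perceptual_loss, steps=1000, lr=0.01):
    """Find a latent code whose generated image matches `target`."""
    latent = torch.zeros(1, 18, 512, requires_grad=True)  # extended W+ latent
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(latent)
        loss = perceptual_loss(img, target) + torch.nn.functional.mse_loss(img, target)
        loss.backward()
        opt.step()
    return latent.detach()

# Smoke test with stand-in components (a real run would use a pretrained StyleGAN).
dummy_gen = lambda w: torch.tanh(w.mean(dim=1)).reshape(1, 1, 16, 32)
dummy_loss = lambda a, b: torch.nn.functional.l1_loss(a, b)
latent = embed_image(torch.zeros(1, 1, 16, 32), dummy_gen, dummy_loss, steps=10)

# Once embedded, edits such as morphing are simple latent-space operations:
# morphed = generator((1 - t) * latent_a + t * latent_b)  for t in [0, 1].
```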
4

Generative models meet similarity search: efficient, heuristic-free and robust retrieval

Doan, Khoa Dang 23 September 2021 (has links)
The rapid growth of digital data, especially visual and textual content, brings many challenges to the problem of finding similar data. Exact similarity search, which aims to exhaustively find all relevant items through a linear scan of a dataset, is impractical due to its high computational complexity. Approximate-nearest-neighbor (ANN) search methods, especially learning-to-hash (hashing) methods, provide principled approaches that balance the trade-off between the quality of the retrieved results and the computational cost for web-scale databases. In this era of data explosion, it is crucial for hashing methods to be both computationally efficient and robust to various scenarios, such as when the application has noisy data or data that slightly changes over time (i.e., is out-of-distribution). This thesis focuses on the development of practical generative learning-to-hash methods and explainable retrieval models. We first identify and discuss the various ways in which the framework of generative modeling can be used to improve the model design and generalization of hashing methods. We then show that these generative hashing methods enjoy several appealing empirical and theoretical properties of generative modeling. Specifically, the proposed generative hashing models generalize better, with important properties such as low sample requirements and robustness to out-of-distribution and corrupted data. Finally, in domains with structured data such as graphs, we show that the computational methods of generative modeling have utility beyond estimating the data distribution, and we describe a retrieval framework that can explain its decisions by borrowing algorithmic ideas developed in these methods. Two subsets of generative hashing methods and a subset of explainable retrieval methods are proposed. For the first hashing subset, we propose a novel adversarial framework that can be easily adapted to a new problem domain, along with three training algorithms that learn the hash functions without several of the hyperparameters commonly found in previous hashing methods. The contributions of our work include: (1) Propose novel algorithms, based on adversarial learning, to learn the hash functions; (2) Design Wasserstein-related adversarial approaches with low computational and sample complexity; (3) Conduct extensive experiments on several benchmark datasets in various domains, including computational advertising and text and image retrieval, for performance evaluation. For the second hashing subset, we propose energy-based hashing solutions that improve the generalization and robustness of existing hashing approaches. The contributions of our work for this task include: (1) Propose data-synthesis solutions to improve the generalization of existing hashing methods; (2) Propose energy-based hashing solutions that exhibit better robustness against out-of-distribution and corrupted data; (3) Conduct extensive experiments for performance evaluation on several benchmark datasets in the image retrieval domain. Finally, for the last subset of explainable retrieval methods, we propose an optimal alignment algorithm that achieves a better similarity approximation for a pair of structured objects, such as graphs, while capturing the alignment between the nodes of the graphs to explain the similarity calculation.
The contributions of our work for this task include: (1) Propose a novel optimal alignment algorithm for comparing two sets of bag-of-vectors embeddings; (2) Propose a differentiable computation to learn the parameters of the proposed optimal alignment model; (3) Conduct extensive experiments, for performance evaluation of both the similarity approximation task and the retrieval task, on several benchmark graph datasets. / Doctor of Philosophy / Searching for similar items, or similarity search, is one of the fundamental tasks in this information age, especially given the rapid growth of visual and textual content. For example, in a search engine such as Google, a user searches for images with content similar to a reference image; in online advertising, an advertiser finds new users whose profiles are similar to those of reference users who previously responded positively to the same or similar advertisements, and eventually targets these new users with advertisements; in the chemical domain, scientists search for proteins with a structure similar to a reference protein. Practical search applications in these domains face several challenges, especially when the datasets or databases contain a large number (e.g., millions or even billions) of complex, structured items (e.g., texts, images, and graphs). These challenges can be organized into three central themes: search efficiency (the economical use of resources such as computation and time), model-design effort (the ease of building the search model), and explainability (the ability of a search model to explain its results), which is increasingly required in scientific domains where the items are structured objects such as graphs. This dissertation tackles these challenges in practical search applications by using computational techniques that learn to generate data. First, we overcome the need to scan an entire large dataset for similar items by considering an approximate similarity search technique called hashing. We then propose an unsupervised hashing framework that learns hash functions with simpler objective functions directly from raw data; the proposed retrieval framework can be easily adapted to new domains with significantly lower model-design effort. When labeled data is available but limited (a common scenario in practical search applications), we propose a hashing network that can synthesize additional data to improve the hash-function learning process. The learned model also exhibits significant robustness against data corruption and slight changes in the underlying data. Finally, in domains with structured data such as graphs, we propose a computational approach that can simultaneously estimate the similarity of structured objects, such as graphs, and capture the alignment between their substructures, e.g., nodes. The alignment mechanism can help explain why two objects are similar or dissimilar, which is useful for domain experts who not only want to search for similar items but also want to understand how the search model makes its predictions.
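To make the retrieval setting concrete, the sketch below shows the basic learning-to-hash primitive that such work builds on: items compressed to binary codes and ranked by Hamming distance. The random projection stands in for a learned (e.g., adversarial or energy-based) hash function, and all names are illustrative assumptions.

```python
# Sketch of the retrieval primitive behind learning-to-hash (not the thesis's
# adversarial or energy-based training): map items to short binary codes and
# rank database items by Hamming distance to the query code.
import numpy as np

def hash_codes(features, projection):
    """Binarize projected features; `projection` would be learned in practice."""
    return (features @ projection > 0).astype(np.uint8)   # (n, n_bits)

def hamming_rank(query_code, db_codes, top_k=5):
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)[:top_k], np.sort(dists)[:top_k]

rng = np.random.default_rng(0)
proj = rng.standard_normal((128, 32))             # stand-in for a learned hash function
db = rng.standard_normal((10_000, 128))
query = db[42] + 0.05 * rng.standard_normal(128)  # near-duplicate of item 42
db_codes = hash_codes(db, proj)
q_code = hash_codes(query[None, :], proj)[0]
idx, dists = hamming_rank(q_code, db_codes)
print(idx[0])  # expected to retrieve item 42
```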
5

Modelagem gerativa para sumarização automática multidocumento / Generative modeling for multi-document summarization

Jorge, María Lucía Del Rosario Castro 09 March 2015 (has links)
Multi-document summarization consists of automatically producing a single summary from a set of source texts that share a common topic. This task is becoming increasingly important, since it supports the processing of large volumes of information, highlighting the information most relevant to the user. In this work, generative modeling approaches are proposed and investigated, in which the multi-document summarization task is modeled through the Noisy-Channel framework and its components: the language model, the transformation model, and decoding, which are properly instantiated for the task at hand. These models are formulated with shallow and deep features. In particular, three transformation models were defined, establishing generative stories that capture content selection patterns from sets of source texts and their corresponding human-written multi-document summaries. The first model is the least complex, using traditional shallow features; the second is more complex, adding single-document discursive features (given by RST) to the features of the first model; finally, the third model is the most complex, incorporating multi-document semantic-discursive features (given by CST) in addition to the features of models 1 and 2. Besides these models, a coherence model (the Noisy-Channel's language model) for multi-document summaries was also developed. Unlike the transformation models, it aims to capture coherence patterns in multi-document summaries; it was built on the entity-based model and incorporates discursive information to handle some of the main multi-document phenomena that affect coherence. Each of these models was trained on the CSTNews corpus of journalistic texts and their corresponding summaries in Portuguese. Finally, a decoder was developed to construct the summary from the inferred models: it selects the subset of sentences that maximizes the summary probability according to the estimated content-selection and coherence models, and includes a strategy, based on CST information, to avoid including redundant sentences in the final summary. The summaries produced by this generative modeling are compared with those produced by state-of-the-art statistical methods, which were implemented, trained, and tested on the same corpus. Using traditional informativeness evaluations from the area, the results show that the models developed in this work are competitive with the state-of-the-art statistical methods and, in some cases, outperform them.
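As a concrete illustration of the decoding step described above, the toy sketch below greedily selects high-scoring sentences while filtering redundant ones; the content scores and the word-overlap similarity are stand-ins for the thesis's inferred content-selection and coherence models.

```python
# Toy greedy decoder in the spirit described above (the real decoder combines
# inferred content-selection and coherence models; here both are stand-ins).
def greedy_decode(sentences, content_scores, similarity, max_sentences=3,
                  redundancy_threshold=0.5):
    """Pick high-scoring sentences, skipping ones too similar to the summary so far."""
    summary = []
    ranked = sorted(range(len(sentences)), key=lambda i: content_scores[i], reverse=True)
    for i in ranked:
        if len(summary) == max_sentences:
            break
        if all(similarity(sentences[i], sentences[j]) < redundancy_threshold
               for j in summary):
            summary.append(i)
    return [sentences[i] for i in sorted(summary)]  # keep original order

# Example with a crude word-overlap similarity (stand-in for the real models).
def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, min(len(wa), len(wb)))

docs = ["The river flooded the town on Monday.",
        "Floodwaters hit the town early Monday.",      # near-duplicate, gets skipped
        "Rescue teams evacuated two hundred residents.",
        "Officials expect the water to recede by Friday."]
print(greedy_decode(docs, content_scores=[0.9, 0.85, 0.8, 0.6], similarity=overlap))
```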
6

Methods for Generative Adversarial Output Enhancement

Brodie, Michael B. 09 December 2020 (has links)
Generative Adversarial Networks (GANs) learn to synthesize novel samples for a given data distribution. While GANs can train on diverse data of various modalities, the most successful use cases to date apply GANs to computer vision tasks. Despite significant advances in training algorithms and network architectures, GANs still struggle to consistently generate high-quality outputs after training. We present a series of papers that improve GAN output inference qualitatively and quantitatively. The first chapter, Alpha Model Domination, addresses a related subfield of Multiple Choice Learning, which -- like GANs -- aims to generate diverse sets of outputs. The next chapter, CoachGAN, introduces a real-time refinement method for the latent input space that improves inference quality for pretrained GANs. The following two chapters introduce finetuning methods for arbitrary, end-to-end differentiable GANs. The first, PuzzleGAN, proposes a self-supervised puzzle-solving task to improve global coherence in generated images. The second, Trained Truncation Trick, improves upon a common inference heuristic by better maintaining output diversity while increasing image realism. Our final work, Two Second StyleGAN Projection, reduces the time for high-quality, image-to-latent GAN projections by two orders of magnitude. We present a wide array of results and applications of our methods. We conclude with implications and directions for future work.
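For reference, the standard truncation trick that the Trained Truncation Trick chapter builds on is a one-line interpolation toward the average latent; the sketch below is the common formulation, not the thesis's trained variant, and the tensors are stand-ins.

```python
# The standard truncation trick the chapter above builds on: pull sampled
# latents toward the average latent, trading diversity for realism via psi.
import torch

def truncate(w, w_avg, psi=0.7):
    """w: sampled latent(s); w_avg: running mean latent; psi in [0, 1]."""
    return w_avg + psi * (w - w_avg)   # psi=1 keeps w, psi=0 collapses to the mean

w_avg = torch.zeros(512)               # stand-in for a generator's tracked mean
w = torch.randn(4, 512)                # four sampled latents
print(truncate(w, w_avg, psi=0.5).shape)  # torch.Size([4, 512])
```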
7

Generative Modeling as a tool in Urban Riverfront Design; an exploration of Parametric Design in Landscape Architecture

Meier, Daniel Steven 27 June 2012 (has links)
No description available.
8

A Generalized Framework for Representing Complex Networks

Viplove Arora 06 December 2019 (has links)
Complex systems are often characterized by a large collection of components interacting in nontrivial ways. Self-organization among these individual components often leads to emergence of a macroscopic structure that is neither completely regular nor completely random. In order to understand what we observe at a macroscopic scale, conceptual, mathematical, and computational tools are required for modeling and analyzing these interactions. A principled approach to understand these complex systems (and the processes that give rise to them) is to formulate generative models and infer their parameters from given data that is typically stored in the form of networks (or graphs). The increasing availability of network data from a wide variety of sources, such as the Internet, online social networks, collaboration networks, biological networks, etc., has fueled the rapid development of network science.
A variety of generative models have been designed to synthesize networks having specific properties (such as power law degree distributions, small-worldness, etc.), but the structural richness of real-world network data calls for researchers to posit new models that are capable of keeping pace with the empirical observations about the topological properties of real networks. The mechanistic approach to modeling networks aims to identify putative mechanisms that can explain the dependence, diversity, and heterogeneity in the interactions responsible for creating the topology of an observed network. A successful mechanistic model can highlight the principles by which a network is organized and potentially uncover the mechanisms by which it grows and develops. While it is difficult to intuit appropriate mechanisms for network formation, machine learning and evolutionary algorithms can be used to automatically infer appropriate network generation mechanisms from the observed network structure.
Building on these philosophical foundations and a series of (not new) observations based on first principles, we extrapolate an action-based framework that creates a compact probabilistic model for synthesizing real-world networks. Our action-based perspective assumes that the generative process is composed of two main components: (1) a set of actions that expresses link formation potential using different strategies capturing the collective behavior of nodes, and (2) an algorithmic environment that provides opportunities for nodes to create links. Optimization and machine learning methods are used to learn an appropriate low-dimensional action-based representation for an observed network in the form of a row stochastic matrix, which can subsequently be used for simulating the system at various scales. We also show that in addition to being practically relevant, the proposed model is relatively exchangeable up to relabeling of the node-types.
Such a model can facilitate handling many of the challenges of understanding real data, including accounting for noise and missing values, and connecting theory with data by providing interpretable results. To demonstrate the practicality of the action-based model, we decided to utilize the model within domain-specific contexts. We used the model as a centralized approach for designing resilient supply chain networks while incorporating appropriate constraints, a rare feature of most network models. Similarly, a new variant of the action-based model was used for understanding the relationship between the structural organization of human brains and the cognitive ability of subjects. Finally, our analysis of the ability of state-of-the-art network models to replicate the expected topological variations in network populations highlighted the need for rethinking the way we evaluate the goodness-of-fit of new and existing network models, thus exposing significant gaps in the literature.
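As a loose illustration of the action-based idea (one possible reading, not the thesis's model), the toy sketch below grows a network by letting each new node draw a link-formation action from its node type's row-stochastic distribution; the action set, probabilities, and growth process are assumptions for the example.

```python
# Toy reading of the action-based idea (illustrative only): each node type has a
# row-stochastic distribution over link-formation actions, and the network grows
# by letting new nodes act according to their type's distribution.
import random
import networkx as nx

ACTIONS = ["uniform", "preferential", "triadic"]   # assumed action set
ACTION_PROBS = {0: [0.7, 0.2, 0.1],                # row-stochastic, per node type
                1: [0.1, 0.8, 0.1]}

def grow(n_nodes, m_edges_per_node=2, seed=0):
    rng = random.Random(seed)
    G = nx.complete_graph(3)                       # small seed graph
    for v in range(3, n_nodes):
        node_type = rng.choice([0, 1])
        G.add_node(v)
        for _ in range(m_edges_per_node):
            action = rng.choices(ACTIONS, weights=ACTION_PROBS[node_type])[0]
            if action == "preferential":           # attach proportionally to degree
                target = rng.choices(list(G.nodes),
                                     weights=[G.degree(u) + 1 for u in G.nodes])[0]
            elif action == "triadic" and G.degree(v) > 0:   # close a triangle
                nbr = rng.choice(list(G.neighbors(v)))
                cands = [u for u in G.neighbors(nbr) if u != v]
                target = rng.choice(cands) if cands else rng.choice(list(G.nodes))
            else:                                  # uniform attachment
                target = rng.choice(list(G.nodes))
            if target != v:
                G.add_edge(v, target)
    return G

print(grow(200).number_of_edges())
```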
