About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
61

Image-based Exploration of Large-Scale Pathline Fields

Nagoor, Omniah H. 27 May 2014 (has links)
While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight into large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI to visualizing large flow-field pathline data. The goal of our work is to provide an optimized image-based method that scales well with the dataset size. Our approach is based on constructing a per-pixel linked-list data structure in which each pixel contains a list of pathline segments. With this view-dependent method it is possible to filter, color-code, and explore large-scale flow data in real time. In addition, optimization techniques such as early ray termination and deferred shading are applied, which further improve the performance and scalability of our approach.
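The core data structure in this abstract can be illustrated with a small sketch. The thesis builds per-pixel linked lists on the GPU; the toy Python class below is only a CPU-side illustration of the idea, with the names (`PerPixelLists`, `insert`, `segments_at`) being assumptions, not the thesis's API.

```python
# Hypothetical CPU-side sketch of a per-pixel linked list: each pixel stores
# the head index of a singly linked list of pathline segments, so segments
# can later be filtered or color-coded per pixel without re-reading the data.

class PerPixelLists:
    def __init__(self, width, height):
        self.head = [[-1] * width for _ in range(height)]  # head pointer per pixel
        self.nodes = []  # each node: (segment_data, depth, next_index)

    def insert(self, x, y, segment, depth):
        # Prepend a pathline segment to the list of pixel (x, y).
        self.nodes.append((segment, depth, self.head[y][x]))
        self.head[y][x] = len(self.nodes) - 1

    def segments_at(self, x, y):
        # Walk the linked list, e.g. to filter or color-code at display time.
        i = self.head[y][x]
        while i != -1:
            segment, depth, i = self.nodes[i]
            yield segment, depth

ppl = PerPixelLists(4, 4)
ppl.insert(1, 2, "seg-A", 0.3)
ppl.insert(1, 2, "seg-B", 0.7)
print([s for s, d in ppl.segments_at(1, 2)])  # most recent segment first
```

On the GPU this prepend is done with an atomic counter and a head-pointer image, which is what makes the construction scale with dataset size.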
62

Image-based Exploration of Iso-surfaces for Large Multi-Variable Datasets using Parameter Space.

Binyahib, Roba S. 13 May 2013 (has links)
With an increase in processing power, more complex simulations have resulted in larger data size, with higher resolution and more variables. Many techniques have been developed to help the user to visualize and analyze data from such simulations. However, dealing with a large amount of multivariate data is challenging, time-consuming and often requires high-end clusters. Consequently, novel visualization techniques are needed to explore such data. Many users would like to visually explore their data and change certain visual aspects without the need to use special clusters or having to load a large amount of data. This is the idea behind explorable images (EI). Explorable images are a novel approach that provides limited interactive visualization without the need to re-render from the original data [40]. In this work, the concept of EI has been used to create a workflow that deals with explorable iso-surfaces for scalar fields in a multivariate, time-varying dataset. As a pre-processing step, a set of iso-values for each scalar field is inferred and extracted from a user-assisted sampling technique in time-parameter space. These iso-values are then used to generate iso-surfaces that are then pre-rendered (from a fixed viewpoint) along with additional buffers (i.e. normals, depth, values of other fields, etc.) to provide a compressed representation of iso-surfaces in the dataset. We present a tool that at run-time allows the user to interactively browse and calculate a combination of iso-surfaces superimposed on each other. The result is the same as calculating multiple iso-surfaces from the original data but without the memory and processing overhead. Our tool also allows the user to change the (scalar) values superimposed on each of the surfaces, modify their color map, and interactively re-light the surfaces.
We also illustrate the efficiency and accuracy of our technique by comparing our results with those from a more traditional visualization pipeline.
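The "interactively re-light the surfaces" step described above relies on the pre-rendered normal buffer. The sketch below is a minimal, assumed illustration (not the thesis code) of deferred re-lighting: given stored per-pixel normals, a new light direction is applied with simple Lambertian shading, without touching the original multi-terabyte dataset.

```python
# Hypothetical sketch of deferred re-lighting from a pre-rendered normal
# buffer: intensity = albedo * max(0, N . L), evaluated per pixel.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def relight(normals, light_dir, albedo=0.8):
    # Lambertian shading per pixel from stored normals and a new light.
    L = normalize(light_dir)
    shaded = []
    for row in normals:
        shaded.append([albedo * max(0.0, sum(n * l for n, l in zip(N, L)))
                       for N in row])
    return shaded

normals = [[(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]]  # toy 1x2 normal buffer
print(relight(normals, (0.0, 0.0, 1.0)))
```

Because only the G-buffers are needed at run time, the cost of a lighting change is proportional to the image resolution, not the dataset size.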
63

Design, Development, Characterization, and Validation of A Paper-based Microchip Electrophoresis System

Hasan, Muhammad Noman 01 June 2020 (has links)
No description available.
64

Disocclusion Inpainting using Generative Adversarial Networks

Aftab, Nadeem January 2020 (has links)
Older methods used for image inpainting in the Depth Image Based Rendering (DIBR) process are inefficient at producing high-quality virtual views from captured data. From the viewpoint of the original image, the generated data's structure appears only mildly distorted in a virtual view obtained by translation, but when the virtual view involves rotation, gaps and missing spaces become visible in the DIBR-generated data. The typical approaches for filling these disocclusions tend to be slow, inefficient, and inaccurate. In this project, a modern technique, the Generative Adversarial Network (GAN), is used to fill the disocclusions. A GAN consists of two or more neural networks that are trained by competing against each other. The results of this study show that a GAN can inpaint disocclusions while preserving structural consistency. Additionally, another method (Filling) is used to enhance the quality of the GAN and DIBR images. The statistical evaluation of the results shows that the GAN and the Filling method enhance the quality of DIBR images.
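The thesis does not specify how its "Filling" step works, so the snippet below is only an assumed, deliberately crude stand-in: disoccluded pixels (holes left after warping) are filled by propagating the nearest valid neighbor along each row. Real disocclusion fillers, including the GAN approach, are far more sophisticated.

```python
# Assumed toy illustration of hole filling after DIBR warping: pixels marked
# as holes (None) inherit the last valid value scanned from the left.

HOLE = None

def fill_row(row):
    filled = list(row)
    last_valid = 0  # assumed background value when a row starts with holes
    for i, v in enumerate(filled):
        if v is HOLE:
            filled[i] = last_valid
        else:
            last_valid = v
    return filled

image = [[10, HOLE, HOLE, 40],
         [HOLE, 20, HOLE, 30]]
print([fill_row(r) for r in image])
```

Propagation-based fills like this tend to smear texture across the hole, which is exactly the artifact a learned inpainter such as a GAN is meant to avoid.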
65

Deep Learning Approach for Vision Navigation in Flight

McNally, Branden Timothy January 2018 (has links)
No description available.
66

Vision-Based Rendering: Using Computational Stereo to Actualize IBR View Synthesis

Steele, Kevin L. 14 August 2006 (has links) (PDF)
Computer graphics imagery (CGI) has enabled many useful applications in training, defense, and entertainment. One such application, CGI simulation, is a real-time system that allows users to navigate through and interact with a virtual rendition of an existing environment. Creating such systems is difficult, but particularly burdensome is the task of designing and constructing the internal representation of the simulation content. Authoring this content on a computer usually requires great expertise and many man-hours of labor. Computational stereo and image-based rendering offer possibilities to automatically create simulation content without user assistance. However, these technologies have largely been limited to creating content from only a few photographs, severely limiting the simulation experience. The purpose of this dissertation is to enable the process of automated content creation for large numbers of photographs. The workflow goal consists of a user photographing any real-world environment intended for simulation, and then loading the photographs into the computer. The theoretical and algorithmic contributions of the dissertation are then used to transform the photographs into the data required for real-time exploration of the photographed locale. This permits a rich simulation experience without the laborious effort required to author the content manually. To approach this goal we make four contributions to the fields of computer vision and image-based rendering: an improved point correspondence methodology, an adjacency graph construction algorithm for unordered photographs, a pose estimation ordering for unordered image sets, and an image-based rendering algorithm that interpolates omnidirectional images to synthesize novel views. We encapsulate our contributions into a working system that we call Vision-Based Rendering (VBR). 
With our VBR system we are able to automatically create simulation content from a large unordered collection of input photographs. However, there are severe restrictions in the type of image content our present system can accurately simulate. Photographs containing large regions of high frequency detail are incorporated very accurately, but images with smooth color gradations, including most indoor photographs, create distracting artifacts in the final simulation. Thus our system is a significant and functional step toward the ultimate goal of simulating any real-world environment.
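One of the four contributions named in this abstract, the adjacency-graph construction for unordered photographs, can be sketched at a high level. The version below is an assumption, not the dissertation's algorithm: it simply connects two images when their pairwise point-correspondence count exceeds a threshold, with the match counts taken as given inputs (a real system would obtain them from feature matching).

```python
# Hypothetical sketch: build an adjacency graph over an unordered photo set
# by linking image pairs that share enough point correspondences.

def build_adjacency(match_counts, min_matches=20):
    graph = {}
    for (a, b), n in match_counts.items():
        if n >= min_matches:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

counts = {("img0", "img1"): 120, ("img1", "img2"): 45, ("img0", "img2"): 5}
print(build_adjacency(counts))
```

Such a graph then gives a natural ordering for pose estimation: cameras can be registered by walking the graph outward from a well-connected seed pair.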
67

Deep and Machine Learning on Imaging Flow Cytometry

Dai, Xinyi January 2022 (has links)
Cell painting uses fluorescent agents to label the compositions or organelles of cells to evoke morphological profiling. Imaging flow cytometry (IFC) is a multi-channel imaging technique for acquiring individual cell images, including the brightfield and multiple single fluorescence channels. Thus, it is necessary to assess whether cell painting combined with IFC can provide sufficient phenotypic information to distinguish morphological changes in cells. This thesis investigated changes in morphological characteristics and the classification of images under different drug perturbations by employing this novel combination. The focus of this thesis was the analysis procedure for IFC images of U-2 OS cells and leukemia blood cells, which can be broadly divided into two stages. The first stage was the preprocessing of images, exploring a preprocessing framework that uses montage processing of cell images to reshape the specification of individual cell images and involves cell segmentation. The following stage was image analysis, which can be further branched into two approaches. The first approach consisted of quantifying brightfield features using CellProfiler (CP) and performing feature classification using CellProfiler Analyst (CPA). Three machine learning classifiers in CPA were utilized: Random Forest, AdaBoost, and Gradient Boosting. The investigation found that brightfield intensity, cell size, and texture complexity were the most distinguishing features. The second approach employed a convolutional neural network model to conduct image classification from two image sources: the brightfield images and the merged brightfield and fluorescence channel images. This study found that brightfield images alone were not sufficient for phenotypic classification, but classification accuracy can be further improved by superimposing fluorescence information onto the brightfield images.
Nevertheless, IFC remains viable for differentiating cell phenotypic changes under different drug effects. Furthermore, this thesis also discusses measures to improve the image analysis procedure in both the image preprocessing and image analysis stages.
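The feature-classification approach described above (quantified brightfield features fed to Random Forest, AdaBoost, or Gradient Boosting classifiers) can be illustrated with a toy sketch. The snippet below is not CellProfiler/CPA code: a nearest-centroid rule stands in for the ensemble classifiers, and the feature vectors (brightfield intensity, cell size, texture complexity, the features the study found most discriminative) are invented values.

```python
# Toy sketch (assumed, not CPA): classify a cell from a 3-feature vector
# [brightfield intensity, size, texture complexity] by nearest class centroid.

import math

def centroid(rows):
    # Mean feature vector of a class's training samples.
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(sample, centroids):
    # Assign the label whose centroid is closest in Euclidean distance.
    return min(centroids, key=lambda c: math.dist(sample, centroids[c]))

training = {
    "control": [[0.20, 110.0, 0.30], [0.25, 105.0, 0.35]],
    "treated": [[0.60, 140.0, 0.70], [0.55, 150.0, 0.65]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}
print(classify([0.58, 145.0, 0.68], centroids))
```

Tree ensembles like Gradient Boosting learn far more flexible decision boundaries than this, but the input/output shape (feature vector in, phenotype label out) is the same.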
68

Image-based Material Editing

Khan, Erum 01 January 2006 (has links)
Photo editing software allows digital images to be blurred, warped, or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry. We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials. To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust against noise in individual exposures. This ensures that any noise that might have adversely affected the shape recovery of the objects is removed. We also present an algorithm to automatically generate alpha mattes. This algorithm requires two input images (one where the object is in focus and one where the background is in focus) and then automatically produces an approximate matte, indicating which pixels belong to the object.
The result is then improved by a second algorithm to generate an accurate alpha matte, which can be given as input to our material editing techniques.
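One step of the pipeline above, going from the recovered depth map to surface normals, can be sketched concretely. The snippet is a generic finite-difference construction, assumed for illustration rather than taken from the dissertation, which derives its depth map from bilateral filtering of image intensities before this step.

```python
# Sketch: estimate per-pixel surface normals from a depth map via forward
# differences; the unnormalized normal of z(x, y) is (-dz/dx, -dz/dy, 1).

import math

def normals_from_depth(depth):
    h, w = len(depth), len(depth[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Forward differences, clamped at the image borders.
            dzdx = depth[y][min(x + 1, w - 1)] - depth[y][x]
            dzdy = depth[min(y + 1, h - 1)][x] - depth[y][x]
            n = (-dzdx, -dzdy, 1.0)
            mag = math.sqrt(sum(c * c for c in n))
            out[y][x] = tuple(c / mag for c in n)
    return out

flat = [[0.5, 0.5], [0.5, 0.5]]
print(normals_from_depth(flat)[0][0])  # flat surface: normal points along +z
```

These normals are what the abstract's later steps consume, whether mapping background pixels for translucency, texture coordinates for textured materials, or evaluating an arbitrary BRDF.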
69

Site, Sight, Swipe, Prada Marfa: A Case Study in Public Art, Cultural Tourism, and Image-Based Social Media Engagement

Hogan, Ha'ani Joy 01 January 2023 (has links) (PDF)
Through a case study of Prada Marfa, a site-specific public art sculpture located in West Texas, this dissertation examines the connection of public art, cultural tourism, and image-based social media engagement. Little scholarship that combines all three areas of study exists. To fill this gap, this study incorporates five methods of research to determine how one public art sculpture's existence can contribute to its surrounding community by prompting economic activity and influencing the way that community is seen through a public lens. The five methods encompass a historical analysis of news articles about The City of Marfa and Prada Marfa, a content analysis of Instagram posts, observations of interactions at the Prada Marfa site, a survey sent to Instagram users, and interviews with key stakeholders related to the sculpture and the Marfa community. This dissertation finds that the acts of photography and performance work together to show how Prada Marfa's existence generates intrigue for The City of Marfa and can influence tourism. It emphasizes that, regardless of a tourist's motive to visit the sculpture, they were still influenced to travel to the region specifically to see Prada Marfa. This dissertation also finds that the public narrative of Prada Marfa does not fully represent its local community and that the tourism dollars earned through arts engagement do not touch all individuals living in The City of Marfa. This research further reveals how image-based social media engagement, seen through the lens of cultural tourism to visit an Instagram-able site, can contribute to a community's economy and public-facing identity. This research can be used as an advocacy tool for the nonprofit arts community when trying to discuss the benefits of public visual art.
70

Disocclusion Mitigation for Point Cloud Image-Based Level-of-Detail Imposters

Mourning, Chad L. January 2015 (has links)
No description available.
