1

Optomechanical System Development of the AWARE Gigapixel Scale Camera

Son, Hui January 2013
<p>Electronic focal plane arrays (FPAs) such as CMOS and CCD sensors have improved so dramatically that digital cameras have essentially phased out film (except in niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors, and their ability to capture digital signals that can be processed and manipulated post-acquisition, offers much more freedom of design at the system level and opens up many interesting possibilities for the next generation of computational imaging systems.</p><p>The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post-acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off-the-shelf (OTS) flat sensor technology.</p><p>This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we developed over the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.</p> / Dissertation
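The scaling argument behind the multiscale design can be sketched with back-of-the-envelope arithmetic. The microcamera count, sensor resolution, and overlap fraction below are illustrative assumptions, not AWARE's actual specifications:

```python
# Illustrative sketch (not AWARE's actual specs): effective resolution of a
# multiscale camera built from an array of small "microcameras" whose
# images are stitched into one composite after acquisition.
def multiscale_pixel_count(num_microcameras, sensor_megapixels, overlap_fraction):
    """Composite pixel count after stitching, discounting the field
    overlap needed between neighboring microcameras for registration."""
    raw = num_microcameras * sensor_megapixels * 1e6
    return raw * (1.0 - overlap_fraction)

# e.g. ~100 off-the-shelf 14 MP sensors with 25% field overlap
pixels = multiscale_pixel_count(100, 14, 0.25)
print(f"{pixels / 1e9:.2f} gigapixels")  # → 1.05 gigapixels
```

The overlap term reflects the design trade-off the abstract alludes to: neighboring microcameras must share field of view so the stitched mosaic is seamless, so the composite resolution is somewhat less than the raw sum of the sensors.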
2

Large-Scale Multi-Resolution Representations for Accurate Interactive Image and Volume Operations

Sicat, Ronell Barrera 25 November 2015
The resolutions of acquired image and volume data are ever increasing. However, the resolutions of commodity display devices remain limited. This leads to an increasing gap between data and display resolutions. To bridge this gap, the standard approach is to employ output-sensitive operations on multi-resolution data representations. Output-sensitive operations facilitate interactive applications since their required computations are proportional only to the size of the data that is visible, i.e., the output, and not the full size of the input. Multi-resolution representations, such as image mipmaps and volume octrees, are crucial in providing these operations direct access to any subset of the data at any resolution corresponding to the output. Despite its widespread use, this standard approach has shortcomings in three important application areas, namely non-linear image operations, multi-resolution volume rendering, and large-scale image exploration. This dissertation presents new multi-resolution representations for large-scale images and volumes that address these shortcomings.

Standard multi-resolution representations require low-pass pre-filtering for anti-aliasing. However, linear pre-filters do not commute with non-linear operations. This becomes problematic when applying non-linear operations directly to coarse resolution levels in standard representations. In particular, this leads to inaccurate output when applying non-linear image operations, e.g., color mapping and detail-aware filters, to multi-resolution images. Similarly, in multi-resolution volume rendering, this leads to inconsistency artifacts which manifest as erroneous differences in rendering outputs across resolution levels. To address these issues, we introduce the sparse pdf maps and sparse pdf volumes representations for large-scale images and volumes, respectively. These representations sparsely encode continuous probability density functions (pdfs) of multi-resolution pixel and voxel footprints in input images and volumes. We show that the continuous pdfs encoded in the sparse pdf map representation enable accurate multi-resolution non-linear image operations on gigapixel images. Similarly, we show that sparse pdf volumes enable more consistent multi-resolution volume rendering compared to standard approaches, on both artificial and real-world large-scale volumes. The supplementary videos demonstrate our results.

In the standard approach, users heavily rely on panning and zooming interactions to navigate the data within the limits of their display devices. However, panning across the whole spatial domain and zooming across all resolution levels of large-scale images to search for interesting regions is not practical. Assisted exploration techniques allow users to quickly narrow down millions to billions of possible regions to a more manageable number for further inspection. However, existing approaches are not fully user-driven because they typically prescribe in advance what counts as interesting. To address this, we introduce the patch sets representation for large-scale images. Patches inside a patch set are grouped and encoded according to similarity via a permutohedral lattice (p-lattice) in a user-defined feature space. Fast set operations on p-lattices facilitate patch set queries that enable users to describe what is interesting. In addition, we introduce an exploration framework, GigaPatchExplorer, for patch set-based image exploration. We show that patch sets in our framework are useful for a variety of user-driven exploration tasks in gigapixel images and whole collections thereof.
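The non-commuting behavior this abstract describes can be demonstrated in a few lines. The quadratic map below is a stand-in for an arbitrary non-linear image operation; the actual sparse pdf map machinery is far more involved, so this is only a sketch of the underlying idea:

```python
import numpy as np

# Why linear pre-filtering does not commute with a non-linear operation:
# averaging a pixel footprint and then applying a non-linear map f gives
# a different value than applying f first and then averaging.
f = lambda x: x ** 2  # stand-in for any non-linear per-pixel operation

block = np.array([0.0, 1.0])          # a 2-pixel footprint at the fine level
filter_then_op = f(block.mean())      # standard mipmap pipeline → 0.25
op_then_filter = f(block).mean()      # correct coarse-level answer → 0.5
print(filter_then_op, op_then_filter)  # the two results disagree

# A pdf-based representation instead stores the distribution of fine-level
# values inside each coarse pixel, so E[f(X)] can be evaluated correctly:
pdf_values = np.array([0.0, 1.0])
pdf_weights = np.array([0.5, 0.5])
expected = (f(pdf_values) * pdf_weights).sum()  # matches op_then_filter
```

The design choice is a space-accuracy trade: storing a (sparsely encoded) distribution per coarse pixel costs more than a single averaged value, but it makes any per-pixel non-linear operation exact at every resolution level.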
3

Computational Optical Imaging Systems for Spectroscopy and Wide Field-of-View Gigapixel Photography

Kittle, David S. January 2013
<p>This dissertation explores computational optical imaging methods to circumvent the physical limitations of classical sensing. An ideal imaging system would maximize resolution in time, spectral bandwidth, three-dimensional object space, and polarization. Practically, increasing any one parameter will correspondingly decrease the others.</p><p>Spectrometers strive to measure the power spectral density of the object scene. Traditional pushbroom spectral imagers acquire high spectral and spatial resolution at the expense of acquisition time. Multiplexed spectral imagers acquire spectral and spatial information at each instant of time. Using a coded aperture and dispersive element, the coded aperture snapshot spectral imagers (CASSI) described here leverage correlations between voxels in the spatial-spectral data cube to compressively sample the power spectral density with minimal loss in spatial-spectral resolution while maintaining high temporal resolution.</p><p>Photography is limited by similar physical constraints. Low f/# systems are required for high spatial resolution to circumvent diffraction limits and allow for more photon transfer to the film plane, but require larger optical volumes and more optical elements. Wide field systems similarly suffer from increasing complexity and optical volume. Incorporating a multi-scale optical system, the f/#, resolving power, optical volume, and wide field of view become much less coupled. This system uses a single objective lens that images onto a curved spherical focal plane, which is relayed by small micro-optics to discrete focal planes. This design methodology allows for gigapixel designs at low f/# that weigh only a few pounds and are smaller than a one-foot hemisphere.</p><p>Computational imaging systems add the necessary steps of forward modeling and calibration. Since the mapping from object space to image space is no longer directly readable, post-processing is required to recover the desired data. The CASSI system uses an undersampled measurement matrix that requires inversion, while the multi-scale camera requires image stitching and compositing methods for the billions of pixels in the image. Calibration methods and a testbed developed specifically for these computational imaging systems are demonstrated.</p> / Dissertation
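A toy version of the CASSI forward model helps make the undersampling concrete. All dimensions and the random binary code below are invented for illustration; a real system has calibrated, non-ideal codes and dispersion:

```python
import numpy as np

# Toy CASSI-style forward model: a binary coded aperture modulates each
# spectral slice, a dispersive element shifts slice k by k pixels, and the
# detector sums everything into a single 2-D snapshot. The spatial-spectral
# cube is undersampled (far more unknowns than measurements), so
# reconstruction must exploit sparsity/correlations in the cube.
rng = np.random.default_rng(0)
H, W, L = 8, 8, 4                      # spatial size and spectral channels
cube = rng.random((H, W, L))           # ground-truth spatial-spectral cube
code = rng.integers(0, 2, (H, W))      # binary coded aperture pattern

detector = np.zeros((H, W + L - 1))    # dispersion widens the snapshot
for k in range(L):
    detector[:, k:k + W] += code * cube[:, :, k]

print(cube.size, detector.size)        # 256 unknowns vs. 88 measurements
```

Counting entries shows the compression: 8 x 8 x 4 = 256 cube voxels are collapsed into an 8 x 11 = 88-pixel snapshot, which is why inverting the measurement matrix requires the correlation-exploiting reconstruction the abstract mentions.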
