  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Accelerating SRD Simulation on GPU

Chen, Zhilu 17 April 2013 (has links)
Stochastic Rotation Dynamics (SRD) is a particle-based simulation method that can model complex fluids in either two or three dimensions, which is very useful in biology and physics research. Although SRD is computationally efficient compared to other simulation methods, it still takes a long time to run when the model is large, e.g., when a large array of particles is used to simulate dense polymers. In some cases the simulation can take months to produce results. This research therefore focuses on accelerating the SRD simulation using a GPU. GPU acceleration can reduce the simulation time by orders of magnitude, and it is cost-effective because a GPU costs significantly less than a computer cluster. Compute Unified Device Architecture (CUDA) programming makes it possible to parallelize the program across hundreds or thousands of thread processors on the GPU. The program is divided into many concurrent threads, and several kernel functions are used for data synchronization. The speedup from GPU acceleration varies with the parameters of the simulation, such as the size of the model, the density of the particles, the formation of polymers, and above all the complexity of the algorithm itself. Compared to the CPU version, the GPU implementation achieves about a 10x speedup for the particle simulation and up to a 50x speedup for polymers. Further performance improvement can be achieved by using multiple GPUs and by code optimization.
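For readers unfamiliar with the method being accelerated, the core of an SRD step is simple: particles stream ballistically, are binned into collision cells, and each cell's velocities are rotated about the cell-mean velocity by a fixed angle. A minimal 2D CPU sketch (the thesis's CUDA kernels are not shown; all parameter names here are illustrative, and the random grid shift used for Galilean invariance is omitted for brevity):

```python
import numpy as np

def srd_step(pos, vel, box, cell, alpha, dt, rng):
    """One 2D SRD step: stream, then rotate velocities cell by cell."""
    # Streaming: ballistic motion with periodic boundaries
    pos = (pos + vel * dt) % box
    # Collision: bin particles into square cells of side `cell`
    idx = (pos // cell).astype(int)
    keys = idx[:, 0] + idx[:, 1] * int(box / cell)
    for k in np.unique(keys):
        m = keys == k
        vmean = vel[m].mean(axis=0)
        # Rotate velocities relative to the cell mean by +/- alpha,
        # with a random sign per cell; this conserves the cell's
        # momentum and kinetic energy.
        a = alpha if rng.random() < 0.5 else -alpha
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s], [s, c]])
        vel[m] = vmean + (vel[m] - vmean) @ R.T
    return pos, vel
```

The per-cell independence of the collision step is what makes the method map well onto thousands of GPU threads, as the abstract describes.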
2

Machine Vision and Autonomous Integration Into an Unmanned Aircraft System

Alexander, Josh, Blake, Sam, Clasby, Brendan, Shah, Anshul Jatin, Van Horne, Chris, Van Horne, Justin 10 1900 (has links)
The University of Arizona's Aerial Robotics Club (ARC) sponsored two senior design teams to compete in the 2011 AUVSI Student Unmanned Aerial Systems (SUAS) competition. These teams successfully designed and built a UAV platform in-house that was capable of autonomous flight, aerial imagery capture, and filtering for target recognition, but it required excessive computational hardware and suffered from software bugs that limited the system's capability. A new multi-discipline team of undergraduates was recruited to completely redesign and optimize the system in an attempt to achieve true autonomous real-time target recognition with reasonable COTS hardware.
3

GPU Based Scattered Data Modeling

Vinjarapu, Saranya S. 16 May 2012 (has links)
No description available.
4

Hyperspectral Image Analysis Algorithm for Characterizing Human Tissue

Wondim, Yonas Kassaw January 2011 (has links)
Abstract: In the field of biomedical optics, measurement of tissue optical properties, such as the absorption, scattering, and reduced scattering coefficients, has gained importance for therapeutic and diagnostic applications. Accuracy in determining the optical properties is of vital importance for quantitatively determining chromophores in tissue.
There are different techniques used to quantify tissue chromophores. Reflectance spectroscopy is one of the most common methods for rapidly and accurately characterizing the blood amount and oxygen saturation in the microcirculation. With a hyperspectral imaging (HSI) device it is possible to capture images with spectral information that depends on both tissue absorption and scattering. To analyze this data, software that accounts for both absorption and scattering events needs to be developed.
In this thesis work an HSI algorithm capable of assessing tissue oxygenation while accounting for both tissue absorption and scattering is developed. The complete imaging system comprises a light source, a liquid crystal tunable filter (LCTF), a camera lens, a CCD camera, control units and power supplies for the light source and filter, and a computer.
This work also presents a Graphics Processing Unit (GPU) implementation of the developed HSI algorithm, which is computationally demanding. The GPU implementation is found to outperform the Matlab "lsqnonneg" function by a factor of 5-7x.
Finally, the HSI system and the developed algorithm are evaluated in two experiments. In the first experiment the concentration of chromophores is assessed while occluding the fingertip. In the second experiment the skin is provoked by UV light and checked for erythema development by analyzing the oxyhemoglobin image at different points in time. In this experiment the change in melanin concentration is also checked at different points in time after exposure. The results match theory for the time-dependent change of oxyhemoglobin and deoxyhemoglobin; the melanin results, however, do not correspond to the theoretically expected outcome.
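The Matlab `lsqnonneg` call mentioned above solves a nonnegative least-squares problem: given a matrix of chromophore extinction spectra, find nonnegative concentrations that best explain a measured absorbance spectrum. A toy Python analogue using SciPy's equivalent routine (the extinction coefficients below are made up for illustration, not real chromophore spectra):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(extinction, absorbance):
    """Solve min ||E c - a||_2 subject to c >= 0, per pixel
    (the Python analogue of Matlab's lsqnonneg)."""
    conc, residual = nnls(extinction, absorbance)
    return conc

# Two hypothetical chromophores measured at four wavelengths
E = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 0.9],
              [0.1, 1.2]])
true_c = np.array([0.7, 0.3])
a = E @ true_c                 # synthetic noiseless absorbance
print(unmix_pixel(E, a))       # recovers true_c up to floating-point error
```

Running this independent fit over every pixel of a hyperspectral cube is embarrassingly parallel, which is why a GPU implementation can beat `lsqnonneg` by the reported 5-7x.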
5

Modeling and Analysis of Large-Scale On-Chip Interconnects

Feng, Zhuo December 2009 (has links)
As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects become increasingly critical and difficult. VLSI systems affected by increasingly high-dimensional process-voltage-temperature (PVT) variations demand much more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, imposes great challenges for computer-aided design. This dissertation presents new methodologies for addressing these two important challenges in large-scale on-chip interconnect modeling and analysis.
In the past, standard statistical circuit modeling techniques usually employed principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved by considering only the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) under modeling. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can achieve more than an order of magnitude of parameter reduction for a variety of VLSI circuit modeling and analysis problems.
The sheer size of present-day power/ground distribution networks makes their analysis and verification extremely runtime- and memory-intensive and, at the same time, limits the extent to which these networks can be optimized. Given that today's commodity graphics processing units (GPUs) can deliver more than 500 GFLOPS (billions of floating-point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10x greater than that offered by modern general-purpose quad-core microprocessors, it is very desirable to convert this impressive GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) graphics processing unit (GPU) platforms to tackle power grid analysis with very promising performance. Our GPU-based network analyzer is capable of solving power grids with tens of millions of nodes in just a few seconds. Additionally, with the above GPU-based simulation framework, more challenging three-dimensional full-chip thermal analysis can be solved much more efficiently than ever before.
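At its core, DC power grid analysis solves a large sparse linear system G·v = i, where G is the grid's conductance matrix and i the vector of load currents. A toy CPU sketch of the kind of system such an analyzer solves, built on a small resistive mesh (the dissertation's GPU solver and its grid models are not shown; grid size and conductances here are invented for illustration):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def grid_conductance(n, g=1.0, g_pad=0.1):
    """Conductance matrix of an n x n resistive mesh, with a small
    conductance g_pad from every node to the supply so the system
    is nonsingular (symmetric positive definite)."""
    N = n * n
    G = sp.lil_matrix((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            G[k, k] += g_pad
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbors
                ni, nj = i + di, j + dj
                if ni < n and nj < n:
                    m = ni * n + nj
                    G[k, k] += g; G[m, m] += g
                    G[k, m] -= g; G[m, k] -= g
    return G.tocsr()

n = 16
G = grid_conductance(n)
i_load = np.zeros(n * n)
i_load[n * n // 2] = 1.0          # a single current sink in the center
v, info = cg(G, i_load)           # iterative solve: v is the IR-drop map
```

Because the matrix is symmetric positive definite, iterative methods like conjugate gradients apply, and their dominant cost (sparse matrix-vector products) is exactly the kind of workload that maps well onto a GPU at tens of millions of nodes.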
