About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Resource-constraint And Scalable Data Distribution Management For High Level Architecture

Gupta, Pankaj 01 January 2007
In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA; it is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications, where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: the region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than these three algorithms because it avoids the quadratic computation step they involve. The simulation results show that the P-Pruning DDM algorithm uses run-time memory more efficiently and requires fewer multicast groups than the three other algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement and present a performance-evaluation study of this enhancement in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes run-time memory more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time; the dynamic P-Pruning algorithm tracks the changes among federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of a communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines. We also describe the implementation details involved in integrating the P-Pruning algorithm with FDK and report on our experiences.
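
As background for the quadratic step this abstract refers to, here is a minimal Python sketch (names and data layout are illustrative, not taken from the dissertation) of brute-force region matching: every update region is tested against every subscription region for overlap, giving O(U x S) comparisons, and each update region with overlapping subscribers yields a multicast group. P-Pruning's contribution, per the abstract, is avoiding this all-pairs test.

```python
# A region is an axis-aligned extent in the routing space:
# {dimension: (lower, upper)} for each routing-space dimension.
def regions_overlap(r1, r2):
    """Two regions overlap iff their extents intersect in every dimension."""
    return all(r1[d][0] <= r2[d][1] and r2[d][0] <= r1[d][1] for d in r1)

def region_matching(update_regions, subscription_regions):
    """Brute-force DDM matching: every update region is tested against
    every subscription region, O(U * S) overlap tests in total."""
    groups = []
    for u_fed, u_region in update_regions:
        members = [s_fed for s_fed, s_region in subscription_regions
                   if regions_overlap(u_region, s_region)]
        if members:
            groups.append({"publisher": u_fed, "subscribers": members})
    return groups

# Example: one publishing federate, two subscribers in a 2D routing space.
updates = [("F1", {"x": (0, 10), "y": (0, 10)})]
subs = [("F2", {"x": (5, 15), "y": (5, 15)}),
        ("F3", {"x": (20, 30), "y": (20, 30)})]
print(region_matching(updates, subs))
# [{'publisher': 'F1', 'subscribers': ['F2']}]
```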
2

DESIGN OF AN FPGA-BASED COMPUTING PLATFORM FOR REAL-TIME 3D MEDICAL IMAGING

Li, Jianchun 19 January 2005
No description available.
3

Parallelization of the FDK algorithm for 3D reconstruction of tomographic images using graphics processing units and CUDA-C

Joel Sánchez Domínguez 12 January 2012
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Imaging using computed tomography has revolutionized the diagnosis of diseases in medicine and is widely used in different areas of scientific research. As part of the process of obtaining three-dimensional tomographic images, a set of radiographs is processed by a computational algorithm; the most widely used today is the Feldkamp, Davis and Kress (FDK) algorithm. The use of parallel processing to accelerate the computations of such algorithms, with the different technologies available on the market, has proven useful for reducing processing times. This work presents the parallelization of the FDK three-dimensional image-reconstruction algorithm using graphics processing units (GPUs) and the CUDA-C language. GPUs are presented as a viable option for parallel computing, and the introductory concepts associated with computed tomography, GPUs, CUDA-C, and parallel processing are covered. The parallel version of the FDK algorithm executed on the GPU is compared with a serial version of the same algorithm, showing a higher processing speed. Performance tests were carried out on two GPUs of different capacities: the NVIDIA GeForce 9400GT (16 cores) and the NVIDIA Quadro 2000 (192 cores).
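
To illustrate why FDK maps well onto GPUs, here is a simplified NumPy sketch of the voxel-driven backprojection that dominates the algorithm's run-time (illustrative only: the thesis implementation is in CUDA-C, and the flat-detector geometry conventions below are generic textbook choices, not the author's code). Every voxel accumulates independently, so in CUDA-C this loop nest collapses naturally into one thread per voxel.

```python
import numpy as np

def fdk_backproject(filtered_projs, betas, D, X, Y, Z):
    """Voxel-driven backprojection of cosine-weighted, ramp-filtered
    cone-beam projections (flat detector, circular orbit).

    filtered_projs : (n_views, n_rows, n_cols) filtered projection stack
    betas          : source angles in radians
    D              : source-to-rotation-axis distance
    X, Y, Z        : voxel-centre coordinate grids (e.g. from np.meshgrid)
    """
    vol = np.zeros_like(X, dtype=float)
    n_views, n_rows, n_cols = filtered_projs.shape
    for proj, beta in zip(filtered_projs, betas):
        # Per-voxel distance factor and detector coordinates for this view.
        L = D + X * np.sin(beta) - Y * np.cos(beta)
        u = D * (X * np.cos(beta) + Y * np.sin(beta)) / L   # detector column
        v = D * Z / L                                       # detector row
        # Nearest-neighbour lookup; real code would interpolate bilinearly.
        iu = np.clip(np.round(u).astype(int) + n_cols // 2, 0, n_cols - 1)
        iv = np.clip(np.round(v).astype(int) + n_rows // 2, 0, n_rows - 1)
        # Inverse-square distance weight of the classical FDK formula.
        vol += (D / L) ** 2 * proj[iv, iu]
    # Scale by the angular step; constant factors depend on filter convention.
    return vol * (2 * np.pi / n_views)
```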
4

Characterization of Impulse Noise and Hazard Analysis of Impulse Noise Induced Hearing Loss using AHAAH Modeling

Wu, Qing 01 August 2014
Millions of people across the world suffer from noise-induced hearing loss (NIHL), especially under working conditions with either continuous Gaussian or non-Gaussian noise that can impair hearing. Impulse noise is a typical non-Gaussian noise exposure in the military and in industry, and it causes severe hearing loss. This study focuses on the characterization of impulse noise using digital signal analysis methods and on predicting the auditory hazard of impulse-noise-induced hearing loss with the Auditory Hazard Assessment Algorithm for Humans (AHAAH) model. A digital noise exposure system was developed to produce impulse noises with a peak sound pressure level (SPL) of up to 160 dB. The impulse noise generated by the system was characterized and analyzed in both the time and frequency domains. Furthermore, the effects of the key parameters of impulse noise on the auditory risk unit (ARU) were investigated using both simulated and experimentally measured impulse noise signals in the AHAAH model. The results showed that the ARUs increased monotonically as the peak pressure (both P+ and P-) increased. As the time duration increased, the ARUs first increased and then decreased, with the peak ARU appearing at about t = 0.2 ms (for both t+ and t-). In addition, the auditory hazard of the experimentally measured impulse noise signals demonstrated a monotonically increasing relationship between ARUs and system voltages.
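
As a sketch of the kind of time-domain characterization described, here is a small Python example (assumptions: a recorded pressure waveform in pascals and the standard 20 µPa reference; the A-duration measure and the synthetic Friedlander-style pulse are common impulse-noise conventions, not details taken from the study).

```python
import numpy as np

P_REF = 20e-6  # reference pressure in Pa (standard for SPL in air)

def peak_spl(pressure):
    """Peak sound pressure level in dB re 20 uPa."""
    return 20 * np.log10(np.max(np.abs(pressure)) / P_REF)

def a_duration(pressure, fs):
    """A-duration: time from the positive peak until the waveform first
    returns to zero (a common time-domain impulse descriptor)."""
    i_peak = int(np.argmax(pressure))
    after = pressure[i_peak:]
    i_zero = int(np.argmax(after <= 0))  # index of first non-positive sample
    return i_zero / fs

# Example: a synthetic Friedlander-like impulse sampled at 200 kHz.
fs = 200_000
t = np.arange(0, 0.005, 1 / fs)
p = 2000.0 * (1 - t / 0.0004) * np.exp(-t / 0.0004)  # 2 kPa peak ~ 160 dB
print(f"peak SPL = {peak_spl(p):.1f} dB, A-duration = {a_duration(p, fs)*1e3:.2f} ms")
```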
5

Image Reconstruction Based On Hilbert And Hybrid Filtered Algorithms With Inverse Distance Weight And No Backprojection Weight

Narasimhadhan, A V 08 1900 (PDF)
Filtered backprojection (FBP) reconstruction algorithms are very popular in the field of X-ray computed tomography (CT) because of their advantages in numerical accuracy and computational complexity. Ramp-filter-based fan-beam FBP reconstruction algorithms carry a position-dependent weight in the backprojection, which is responsible for a spatially non-uniform distribution of noise and resolution, and for artifacts. Many algorithms based on shift-variant filtering or spatially-invariant interpolation in the backprojection step have been developed to deal with this issue; however, these algorithms are computationally demanding. Recently, fan-beam algorithms based on Hilbert filtering with an inverse-distance weight, or with no weight, in the backprojection have been derived using Hamaker's relation, and these fan-beam reconstruction algorithms have been shown to improve uniformity of noise and resolution. In this thesis, fan-beam FBP reconstruction algorithms with an inverse-distance backprojection weight and with no backprojection weight for 2D image reconstruction are presented and discussed for the two fan-beam scan geometries: equi-angular and equi-spaced detector arrays. Based on these fan-beam reconstruction algorithms, new 3D cone-beam FDK reconstruction algorithms with circular and helical scan trajectories for curved and planar detector geometries are proposed. To start with, three rebinning formulae from the literature are presented, and it is shown that all fan-beam FBP reconstruction algorithms can be derived from them. Specifically, two fan-beam algorithms with no backprojection weight based on Hilbert filtering for an equi-spaced linear detector array, and one new fan-beam algorithm with an inverse-distance backprojection weight based on hybrid filtering for both equi-angular and equi-spaced linear detector arrays, are derived. Simulation results for these algorithms in terms of uniformity of noise and resolution, in comparison to the standard ramp-filter-based fan-beam FBP reconstruction algorithm, are presented. It is shown through simulation that the fan-beam reconstruction algorithm with inverse distance in the backprojection gives better noise performance while retaining the resolution properties. A comparison between the above-mentioned reconstruction algorithms is also given in terms of computational complexity. The state-of-the-art 3D X-ray imaging systems in medicine with cone-beam (CB) circular and helical computed tomography scanners use non-exact (approximate) FBP-based reconstruction algorithms. They are attractive because of their simplicity and low computational cost, but they produce sub-optimal reconstructed images with respect to cone-beam artifacts, noise, and, in the case of circular-trajectory scan imaging, axial intensity drop. The axial intensity drop in the reconstructed image is due to the insufficient data acquired by circular-scan-trajectory CB CT. This thesis investigates improving image quality by means of Hilbert- and hybrid-filtering-based algorithms using redundant data for Feldkamp, Davis and Kress (FDK) type reconstruction algorithms.
In this thesis, new FDK-type reconstruction algorithms for cylindrical and planar detectors in circular CB CT are developed; they are obtained by extending to three dimensions (3D) an exact Hilbert-filtering-based FBP algorithm for 2D fan-beam reconstruction with no position-dependent backprojection weight, and a fan-beam algorithm with an inverse-distance backprojection weight. The proposed FDK reconstruction algorithm with an inverse-distance backprojection weight requires full-scan projection data, while the FDK reconstruction algorithm with no backprojection weight can handle partial-scan data, including very-short-scan data. The FDK reconstruction algorithms with no backprojection weight for circular CB CT are compared with Hu's, FDK, and T-FDK reconstruction algorithms in terms of axial intensity drop and computational complexity. Simulation results on noise, CB-artifact performance, and execution timing, as well as partial-scan reconstruction abilities, are presented. We show that FDK reconstruction algorithms with no backprojection weight have better noise performance than the conventional FDK reconstruction algorithm, whose backprojection weight is known to result in spatial non-uniformity of the noise characteristics. We also present an efficient method to reduce the axial intensity drop in circular CB CT. The method consists of two steps: first, reconstructing the object using the FDK reconstruction algorithm with no backprojection weight, and second, estimating the missing term. The method is comparable to Zhu et al.'s method in terms of reduction in axial intensity drop, noise, and computational complexity. The helical scanning trajectory satisfies the Tuy-Smith condition, hence an exact and stable reconstruction is possible; however, the helical FDK reconstruction algorithm, being approximate in its derivation, is responsible for cone-beam artifacts. In this thesis, helical FDK reconstruction algorithms based on Hilbert filtering with no backprojection weight, and an FDK reconstruction algorithm based on hybrid filtering with an inverse-distance backprojection weight, are presented to reduce the CB artifacts. These algorithms are compared with the standard helical FDK algorithm in terms of noise, CB artifacts, and computational complexity.
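
To make the backprojection-weight distinction concrete, here is a schematic Python comparison for the equi-spaced fan-beam case (the geometry conventions and names are generic textbook choices, not the thesis's formulas, and the matching filtering step for each weight choice — ramp, Hilbert, or hybrid — is deliberately omitted): the conventional algorithm divides by the square of a pixel-dependent distance factor during backprojection, which is what produces the spatially non-uniform noise, while the Hilbert- and hybrid-filtering variants drop that factor or replace it with a single inverse distance.

```python
import numpy as np

def fanbeam_backproject(Q, betas, D, X, Y, weight="none"):
    """Schematic 2D fan-beam backprojection, equi-spaced detector.

    Q      : (n_views, n_det) filtered projections; the filtering step
             (ramp, Hilbert, or hybrid) must match the weight choice
    weight : "ramp"     -> conventional position-dependent 1/U^2 weight
             "inv_dist" -> inverse-distance backprojection weight
             "none"     -> no backprojection weight
    """
    img = np.zeros_like(X, dtype=float)
    n_views, n_det = Q.shape
    for q, beta in zip(Q, betas):
        L = D + X * np.sin(beta) - Y * np.cos(beta)   # pixel-dependent distance
        u = D * (X * np.cos(beta) + Y * np.sin(beta)) / L
        iu = np.clip(np.round(u).astype(int) + n_det // 2, 0, n_det - 1)
        if weight == "ramp":
            img += (D / L) ** 2 * q[iu]  # varies across image -> non-uniform noise
        elif weight == "inv_dist":
            img += (D / L) * q[iu]
        else:
            img += q[iu]                 # every pixel weighted identically
    return img * (2 * np.pi / n_views)
```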
6

Truncated Data Problems In Helical Cone-Beam Tomography

Anoop, K P 06 1900
This report delves into two of the major truncated-data problems in helical cone-beam tomography: axial truncation and lateral truncation. The problem of axial truncation, classically known as the long-object problem, was a major challenge in the development of helical-scan tomography. Generalization of the Feldkamp (FDK) method from the circular to the helical scan trajectory was known to give reasonable solutions to the problem; however, the FDK methods are approximate in nature and hence provide only an approximate solution to the long-object problem. Recently, many methods that provide an exact solution to this problem have been developed, the major breakthrough being Katsevich's algorithm, which is exact, efficient, and also requires a smaller detector area than the Feldkamp methods. The first part of the report deals with implementation strategies for methods capable of handling axial truncation; here, we specifically look at Katsevich's exact and efficient solution to the long-object problem and the class of approximate solutions provided by the generalized FDK formulae. The latter half of the report looks at the lateral truncation problem and suggests new methods to handle such truncation in helical-scan CT. Simulations of reconstruction from laterally truncated projection data, treating it as if it were complete, give severe artifacts that even penetrate into the field of view (FOV). A row-by-row data-completion approach using linear prediction is introduced for helical-scan truncated data, along with an extension/improvement of this technique known as the windowed linear prediction approach. The efficacy of both techniques is shown through simulation with standard phantoms, and various image-quality measures for the resulting reconstructed images are used to evaluate the performance of the proposed methods against an existing technique. Motivated by a study of the autocorrelation and partial autocorrelation functions of the projection data, the use of a non-stationary linear model, the ARIMA model, is proposed for data completion. The new model is first validated in the 2D truncated-data situation, and a method of incorporating the parallel-beam data-consistency condition into it is considered. Performance evaluation of the new method with the consistency condition shows that it can outperform the existing techniques. Simulation experiments show the efficacy of the ARIMA model for data completion in both the 2D and 3D truncated-data scenarios, and the model is shown to work well for the laterally truncated helical cone-beam case.
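
A minimal sketch of the row-by-row ARIMA completion idea (assuming the statsmodels library; the model order and the synthetic data are illustrative, whereas the report chooses the model from the autocorrelation study and adds consistency conditions): fit a time-series model to the known detector samples of one projection row and extrapolate past the truncation edge.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def complete_row(row, n_missing, order=(10, 1, 0)):
    """Extrapolate a laterally truncated projection row.

    row       : 1D array of the known detector samples
    n_missing : number of truncated samples to estimate beyond the edge
    order     : ARIMA (p, d, q); d = 1 accommodates the non-stationary
                trend of projection data (the order here is illustrative)
    """
    fit = ARIMA(row, order=order).fit()
    return np.concatenate([row, fit.forecast(steps=n_missing)])

# Example: a smooth synthetic projection row truncated on the right.
det = np.linspace(-1, 0.3, 130)
known = np.exp(-4 * det**2)              # samples up to the truncation edge
completed = complete_row(known, n_missing=40)
```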
7

Performance Evaluation Of Fan-beam And Cone-beam Reconstruction Algorithms With No Backprojection Weight On Truncated Data Problems

Sumith, K 07 1900 (PDF)
This work focuses on using linear-prediction-based projection completion with fan-beam and cone-beam reconstruction algorithms that have no backprojection weight. Truncated-data problems have received much attention in computed tomography research; however, perfect image reconstruction from truncated data has not yet been achieved, only approximately accurate solutions, so research in this area continues to strive for results closer to the ideal. Linear-prediction techniques are adopted for truncation completion in this work because previous research on truncated-data problems has shown that they work well compared to other techniques such as polynomial fitting and iterative methods. Linear prediction is a model-based technique: the autoregressive (AR) and moving-average (MA) models are the two important models, along with the autoregressive moving-average (ARMA) model. The AR model is used in this work because of the simplicity it provides in calculating the prediction coefficients, and the order of the model is chosen based on the partial autocorrelation function of the projection data, as established in previous research in this area. The truncated projection completion using linear prediction and windowed linear prediction shows that reasonably accurate reconstruction is achieved. Windowed linear prediction provides a better estimate of the missing data; the reason for this is given in the literature and is restated here for the reader's convenience. The advantages of fan-beam reconstruction algorithms with no backprojection weight over those with a backprojection weight motivated us to use the former for reconstructing the truncation-completed projection data. The results obtained are compared with previous work that used conventional fan-beam reconstruction algorithms with a backprojection weight; the intensity plots and noise-performance results show improvements from using the fan-beam reconstruction algorithm with no backprojection weight. The work is also extended to the Feldkamp, Davis, and Kress (FDK) reconstruction algorithm with no backprojection weight for the helical scanning geometry, and the results are compared with the FDK reconstruction algorithm with a backprojection weight for the same geometry.
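
A compact sketch of the AR estimation and forward-prediction steps described above (a generic Yule-Walker solve written from standard definitions, not the thesis's code; the order p would in practice be chosen from the partial autocorrelation function, as the abstract states):

```python
import numpy as np

def ar_coefficients(x, p):
    """Estimate AR(p) coefficients via the Yule-Walker equations."""
    x = x - x.mean()
    # Biased sample autocorrelation at lags 0..p.
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    return np.linalg.solve(R, r[1:])  # coefficients a_1..a_p

def predict_forward(x, a, n):
    """Extend x by n samples using x[t] = sum_k a_k * x[t-k]."""
    mean, p = x.mean(), len(a)
    y = list(x - mean)
    for _ in range(n):
        y.append(np.dot(a, y[-1:-p - 1:-1]))  # last p samples, newest first
    return np.array(y[len(x):]) + mean

# Example: complete 30 truncated samples of a noisy synthetic projection row.
rng = np.random.default_rng(0)
row = np.cos(np.linspace(0, 3, 200)) ** 2 + 0.01 * rng.normal(size=200)
a = ar_coefficients(row, p=8)   # order chosen from the PACF in practice
missing = predict_forward(row, a, n=30)
```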
