About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Fast Factorized Back-Projection in an FPGA

Hast, Andreas, Johansson, Lars January 2006 (has links)
The Fast Factorized Back Projection (FFBP) algorithm is a computationally efficient algorithm for image formation in a Very High Frequency Synthetic Aperture Radar (VHF SAR) system. In this report an investigation of the feasibility of using an FPGA with a hard CPU core to calculate the FFBP in real-time has been done. Two System on a Chip designs for this task have been proposed for calculating the FFBP. A simplified version of the FFBP has also been implemented in Matlab and used during this project. The result is that the computationally intensive parts, such as index generating and interpolation calculations, should be implemented in the logic part of the FPGA and the CPU should handle scheduling. This kind of modular system is easy to maintain and upgrade.
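For context, the kernel that FFBP accelerates is direct back-projection, whose per-pixel work is exactly the index generation and interpolation the abstract mentions. The sketch below is illustrative only (it is not taken from the thesis and is in NumPy rather than the FPGA/Matlab implementations described); it assumes range-compressed, demodulated pulse data, and all function and parameter names are hypothetical. FFBP reduces this cost by recursively merging subaperture images instead of looping over every pulse for every pixel.

```python
import numpy as np

def direct_backprojection(range_profiles, ant_pos, pixel_pos, r0, dr, fc, c=3e8):
    """Direct back-projection kernel, a sketch of the operation FFBP factorizes.

    range_profiles : (n_pulses, n_bins) complex range-compressed pulses.
    ant_pos : (n_pulses, 3) antenna positions; pixel_pos : (n_pixels, 3) pixel positions.
    r0, dr : range of the first bin and bin spacing; fc : carrier frequency (Hz).
    """
    image = np.zeros(len(pixel_pos), dtype=complex)
    for p, profile in enumerate(range_profiles):
        dist = np.linalg.norm(pixel_pos - ant_pos[p], axis=1)       # index generation: pixel-to-antenna range
        idx = (dist - r0) / dr                                       # fractional range-bin index
        i0 = np.clip(np.floor(idx).astype(int), 0, len(profile) - 2)
        frac = idx - i0
        sample = (1 - frac) * profile[i0] + frac * profile[i0 + 1]   # linear interpolation between bins
        image += sample * np.exp(1j * 4 * np.pi * fc / c * dist)     # phase compensation and accumulation
    return image
```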
2

Measurement of gas bubbles in a vertical water column using optical tomography

Ibrahim, Sallehuddin January 2000 (has links)
This thesis presents an investigation into the application of optical fibre sensors to a tomographic imaging system for use with gas/water mixtures. Several sensing techniques for measurement of two-component flow using non-intrusive techniques are discussed and their relevance to tomographic applications considered. Optical systems are shown to be worthy of investigation. The interaction between a collimated beam of light and a spherical bubble is described. Modelling of different arrangements of projections of optical sensing arrays is carried out to predict the expected sensor output voltage profiles due to different flow regimes represented by four models. The four flow models investigated are: a single pixel flow, two pixels flow, half flow and full flow models. The response of the sensors is based on three models: optical path length, optical attenuation and a combination of optical attenuation model and signal conditioning. In the optical path length model, opaque solids or small bubbles, which are conveyed, may totally or partially interrupt the optical beams within the sensing volume. In the optical attenuation model, the Lambert-Beer law is applied to model optical attenuation due to the different optical densities of the fluids being conveyed. The combination of optical attenuation model and signal conditioning is designed to improve the visual contrast of the tomograms compared with those based on the optical attenuation model. Layergram back-projection (LYGBP) is used to reconstruct the image. A hybrid reconstruction algorithm combining knowledge of sensors reading zero flow with LYGBP is tested and shown to improve the image reconstruction. The combination of a two orthogonal and two rectilinear projections system based on optical fibres is used to obtain the concentration profiles and velocity of gas bubbles in a vertical column. The optical fibre lens is modelled to determine the relationships between fibre parameters and collimation of light into the receiver circuit. Modelling of the flow pipe is also carried out to investigate which method of mounting the fibres minimises refraction of the collimated light entering the pipe and the measurement cross-section. The preparation of the ends of the optical fibre and design of the electronics, which process the tomographic data, are described. Concentration profiles obtained from experiments on small bubbles and large bubbles flowing in a hydraulic conveyor are presented. Concentration profiles are generated using the hybrid reconstruction algorithm. The optical tomographic system is shown to be sensitive to small bubbles in water of diameter 1-10 mm and volumetric flow rates up to 1 l/min, and large bubbles in water of diameter 15-20 mm and volumetric flow rates up to 3 l/min. Velocity measurements are obtained directly from cross correlation of upstream and downstream sensors' signals as well as from upstream and downstream pixel concentration values. Suggestions for further work on optical tomographic measurements are made.
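As an aside (not taken from the thesis), the transit-time velocity measurement described above, cross-correlating upstream and downstream sensor signals, can be sketched in a few lines of NumPy; the function and parameter names here are hypothetical.

```python
import numpy as np

def bubble_velocity(upstream, downstream, dt, sensor_spacing):
    """Transit-time velocity from cross-correlation of two sensor planes, a sketch.

    upstream, downstream : equal-length signals from two axially separated planes.
    dt : sample interval (s); sensor_spacing : axial distance between planes (m).
    The lag maximizing the cross-correlation gives the bubble transit time.
    """
    up = upstream - upstream.mean()
    down = downstream - downstream.mean()
    corr = np.correlate(down, up, mode='full')      # peak where downstream best matches delayed upstream
    lag = np.argmax(corr) - (len(up) - 1)           # delay in samples
    transit_time = lag * dt
    return sensor_spacing / transit_time if transit_time > 0 else float('nan')
```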
3

A Technique for Magnetron Oscillator Based Inverse Synthetic Aperture Radar Image Formation

Aljohani, Mansour Abdullah M. January 2019 (has links)
No description available.
4

A framework for flexible comparison and optimization of X-ray digital tomosynthesis

Smith, Frank A 01 May 2019 (has links)
Digital tomosynthesis is a novel three-dimensional imaging technology that utilizes a limited number of X-ray projection images to improve the diagnosis and detection of lesions. In recent years, tomosynthesis has been used in a variety of clinical applications such as dental imaging, angiography, chest imaging, bone imaging, and breast imaging. The goal of our research is to develop a framework to enable flexible optimization and comparison of image reconstruction and imaging configurations.
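For illustration only (this is not the framework described above), the simplest tomosynthesis reconstruction is shift-and-add: each projection from the limited arc is shifted so that structures in the plane of interest register, and averaging then reinforces that plane while blurring out-of-plane structure. A minimal NumPy sketch with hypothetical names:

```python
import numpy as np

def shift_and_add(projections, shifts_px):
    """Shift-and-add reconstruction of one tomosynthesis plane, a sketch.

    projections : (n_views, H, W) X-ray projections acquired over a limited arc.
    shifts_px : (n_views, 2) per-view (row, col) shifts aligning the plane of interest.
    """
    plane = np.zeros(projections.shape[1:])
    for proj, (dy, dx) in zip(projections, shifts_px):
        plane += np.roll(proj, (int(round(dy)), int(round(dx))), axis=(0, 1))
    return plane / len(projections)
```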
5

Comparison of filtered back projection and OSEM in reducing bladder artifacts in pelvic SPECT imaging

Katua, Agatha Mary 08 July 2011 (has links)
Bladder artifacts during bone single photon emission computed tomography (SPECT) are a common source of errors. The extent and severity of bladder artifacts have been described for filtered back projection (FBP) reconstruction. Ordered-subsets expectation maximization (OSEM) may help to address this poor record of bladder artifacts, which render up to 20% of the images unreadable. Aims and objectives: To evaluate the relationship of the bladder to acetabulum ratio in guiding the choice of the number of iterations and subsets used for OSEM reconstruction, for reducing bladder artifacts found on FBP reconstruction. Materials and methods: 105 patients with various indications for bone scans were selected and planar and SPECT images were acquired. The SPECT images were reconstructed with both filtered back projection and OSEM using four different combinations of iterations and subsets. The images were given to three experienced nuclear physicians who were blinded to the diagnosis and type of reconstruction used. They then ranked the images from best to worst, after which the data were analysed. The bladder to acetabulum ratio for each image was determined and then correlated with the different iterations and subsets used. Results: The study demonstrated that reconstruction using OSEM led to better lesion detectability compared to filtered back projection in 87.62% of cases. It further demonstrated that the iterations and subsets used for reconstruction of an image correlate with the bladder to acetabulum ratio. Four iterations and 8 subsets yielded the best results in 48.5% of the images whilst two iterations and 8 subsets yielded the best results in 33.8%. The number of reconstructed images which yielded the best results with 2 iterations and 8 subsets was the same as or greater than with 4 iterations and 8 subsets when the bladder/acetabulum ratio was between 0.2 and 0.39. A ratio below 0.2 or above 0.39 supports the use of 4 iterations and 8 subsets over 2 iterations and 8 subsets. Conclusion: The bladder to acetabulum ratio can be used to select the optimum number of iterations and subsets for reconstruction of bone SPECT for accurate characterization of lesions. This study also confirms that reconstruction with OSEM (vs FBP) leads to better lesion detectability and characterisation. / Dissertation (MSc)--University of Pretoria, 2011. / Nuclear Medicine / unrestricted
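For readers unfamiliar with OSEM, the update below is the standard ordered-subsets expectation maximization iteration; it is a generic sketch, not the clinical reconstruction used in the study, and the system matrix A, the subset split, and all names are illustrative assumptions.

```python
import numpy as np

def osem(A, y, n_iter=4, n_subsets=8, eps=1e-12):
    """Ordered-subsets expectation maximization (OSEM), a sketch.

    A : (n_bins, n_voxels) system matrix; y : measured projection data.
    One 'iteration' cycles once through all subsets of projection bins.
    """
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                                   # non-negative initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            fwd = As @ x                                 # forward projection for this subset
            ratio = ys / np.maximum(fwd, eps)            # measured / estimated projections
            sens = np.maximum(As.T @ np.ones(len(idx)), eps)
            x *= (As.T @ ratio) / sens                   # multiplicative EM update
    return x
```

With this convention, the study's "4 iterations and 8 subsets" corresponds to n_iter=4, n_subsets=8.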
6

Near-field microwave imaging with coherent and interferometric reconstruction methods

Zhou, Qiping January 2020 (has links)
No description available.
7

Iterative synthetic aperture radar imaging algorithms

Kelly, Shaun Innes January 2014 (has links)
Synthetic aperture radar is an important tool in a wide range of civilian and military imaging applications. This is primarily due to its ability to image in all weather conditions, during both the day and the night, unlike optical imaging systems. A synthetic aperture radar system contains a step which is not present in an optical imaging system: image formation. This is required because the acquired data from the radar sensor does not directly correspond to the image. Instead, to form an image, the system must solve an inverse problem. In conventional scenarios, this inverse problem is relatively straightforward and a matched filter based algorithm produces an image of suitable image quality. However, there are a number of interesting scenarios where this is not the case. Scenarios where standard image formation algorithms are unsuitable include systems with data undersampling, errors in the system observation model and data that is corrupted by radio frequency interference. Image formation in these scenarios will form the topics of this thesis and a number of iterative algorithms are proposed to achieve image formation. The motivation for these proposed algorithms is primarily from the field of compressed sensing, which considers the recovery of signals with a low-dimensional structure. The first contribution of this thesis is the development of fast algorithms for the system observation model and its adjoint. These algorithms are required by large-scale gradient-based iterative algorithms for image formation. The proposed algorithms are based on existing fast back-projection algorithms; however, a new decimation strategy is proposed which is more suitable for some applications. The second contribution is the development of a framework for iterative near-field image formation, which uses the proposed fast algorithms. It is shown that the framework can be used, in some scenarios, to improve the visual quality of images formed from fully sampled data and undersampled data, when compared to images formed using matched filter based algorithms. The third contribution concerns errors in the system observation model. Algorithms that correct these errors are commonly referred to as autofocus algorithms. It is shown that conventional autofocus algorithms, which work as a post-processor on the formed image, are unsuitable for undersampled data. Instead an autofocus algorithm is proposed which corrects errors within the iterative image formation procedure. The proposed algorithm is provably stable and convergent with a faster convergence rate than previous approaches. The final contribution is an algorithm for ultra-wideband synthetic aperture radar image formation. Due to the large spectrum over which the ultra-wideband signal is transmitted, there are likely to be many other users operating within the same spectrum. These users can produce significant radio frequency interference (RFI) which will corrupt the received data. The proposed algorithm uses knowledge of the RFI spectrum to minimise the effect of the RFI on the formed image.
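As a generic illustration of the kind of gradient-based, sparsity-regularized image formation this abstract refers to (not the thesis's algorithms), the sketch below implements ISTA with placeholder forward and adjoint operators standing in for the observation model and its adjoint, which the thesis realises with fast back-projection. All names are hypothetical and the step size is assumed small enough for convergence.

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding: shrink magnitude, keep phase."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def ista_image_formation(forward, adjoint, data, lam, step, n_iter=50):
    """ISTA for regularized image formation, a minimal sketch.

    forward(img) / adjoint(raw) : linear observation model and its adjoint.
    Minimizes 0.5 * ||forward(x) - data||^2 + lam * ||x||_1.
    """
    x = soft_threshold(adjoint(data), lam * step)       # matched-filter-style starting image
    for _ in range(n_iter):
        grad = adjoint(forward(x) - data)               # gradient of the data-fit term
        x = soft_threshold(x - step * grad, lam * step) # gradient step + shrinkage
    return x
```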
8

Improving beamforming-based methodologies for seismological analysis

Tan, Fengzhou 10 April 2019 (has links)
We improved two beamforming-based methodologies for seismological analysis. The first one is a new Three-Dimensional Phase-Weighted Relative Back Projection (3-D PWBP) method to improve the spatial resolution of Back Projection results. We exploit both phase and amplitude of the seismogram signal to enhance the distinction of correlated signals. Also, we implement a 3-D velocity model to provide more accurate travel times. We vindicate these refinements with several synthetic tests and an analysis of the 1997 Mw 7.2 Zirkuh (Iran) earthquake, which we show ruptured mainly unilaterally southwards at a rupture speed of ∼3.0 km/s along its ∼125 km-long, mostly single-stranded surface rupture. Then, we apply the new method to the more complex case of the 2016 Mw 7.8 Kaikoura (New Zealand) earthquake, which we demonstrate is divided into two major stages separated by a gap of ∼8 s and ∼30–40 km. The overall rupture speed is ∼1.7 km/s and the overall duration is ∼84 s, considerably shorter than some earlier estimates. We see no clear evidence for continuous failure of the subduction interface that underlies the known, surface-rupturing crustal faults, though we cannot rule out its involvement in the second major stage in the northern part of the rupture area. The late (∼80 s) peak in relative energy is likely a high-frequency stopping phase, and the rupture appears to terminate southwest of the offshore Needles fault. The second methodology is a novel workflow for earthquake detection and location, named Seismicity-Scanning based on Navigated Automatic Phase-picking (S-SNAP). By taking a cocktail approach that combines Source-Scanning, Kurtosis-based Phase-picking and the Maximum Intersection location technique into a single integrated workflow, this new method is capable of delineating complex spatiotemporal distributions of seismicity. It is automatic, efficiently providing earthquake locations with high comprehensiveness and accuracy. We apply S-SNAP to a dataset recorded by a dense local seismic array during a hydraulic fracturing operation to test this novel approach and to demonstrate the effectiveness of S-SNAP in comparison to existing methods. Overall, S-SNAP found nearly four times as many high-quality events as a template-matching based catalogue. All events in the previous catalogue are identified with similar epicenter, depth and magnitude, while no false detections are found by visual inspection. / Graduate
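The phase-weighting idea can be illustrated with the standard phase-weighted stack, in which the linear stack of aligned traces is multiplied by a coherence factor built from each trace's instantaneous phase. The SciPy sketch below is generic, not the thesis's 3-D PWBP implementation, and its names and the sharpness exponent nu are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_weighted_stack(traces, nu=2.0):
    """Phase-weighted stack of time-aligned seismograms, a sketch.

    traces : (n_stations, n_samples) array of traces already aligned in time.
    The linear stack is down-weighted where instantaneous phases disagree.
    """
    analytic = hilbert(traces, axis=1)                          # analytic signal per trace
    phase = analytic / np.maximum(np.abs(analytic), 1e-12)      # unit phasors exp(i*phi)
    coherence = np.abs(phase.mean(axis=0)) ** nu                # ~1 where phases agree, ~0 otherwise
    return traces.mean(axis=0) * coherence
```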
9

Earthquake Characteristics as Imaged by the Back-Projection Method

Kiser, Eric January 2012 (has links)
This dissertation explores the capability of dense seismic array data for imaging the rupture properties of earthquake sources using a method known as back-projection. Only within the past 10 or 15 years has implementation of the method become feasible through the development of large aperture seismic arrays such as the High Sensitivity Seismograph Network in Japan and the Transportable Array in the United States. Coincidentally, this buildup in data coverage has also been accompanied by a global cluster of giant earthquakes (Mw>8.0). Much of the material in this thesis is devoted to imaging the source complexity of these large events. In particular, evidence for rupture segmentation, dynamic triggering, and frequency dependent energy release is presented. These observations have substantial implications for evaluating the seismic and tsunami hazards of future large earthquakes. In many cases, the details of the large ruptures can only be imaged by the back-projection method through the addition of different data sets and incorporating additional processing steps that enhance low-amplitude signals. These improvements to resolution can also be utilized to study much smaller events. This approach is taken for studying two very different types of earthquakes. First, a global study of the enigmatic intermediate-depth (100-300 km) earthquakes is performed. The results show that these events commonly have sub-horizontal rupture planes and suggest dynamic triggering of multiple sub-events. From these observations, a hypothesis for the generation of intermediate-depth events is proposed. Second, the early aftershock sequences of the 2004 Mw 9.1 Sumatra-Andaman and 2011 Mw 9.0 Tohoku, Japan earthquakes are studied using the back-projection method. These analyses show that many events can be detected that are not in any local or global earthquake catalogues. In particular, the locations of aftershocks in the back-projection results of the 2011 Tohoku sequence fill in gaps in the aftershock distribution of the Japan Meteorological Agency catalogue. These results may change inferences of the behavior of the 2011 mainshock, as well as the nature of future seismicity in this region. In addition, the rupture areas of the largest aftershocks can be determined, and compared to the rupture area of the mainshock. For the Tohoku event, this comparison reveals that the aftershocks contribute significantly to the cumulative failure area of the subduction interface. This result implies that future megathrust events in this region can have larger magnitudes than the 2011 event. / Earth and Planetary Sciences
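A bare-bones version of the back-projection imaging described above stacks time-shifted station traces over a grid of candidate source points and maps the resulting beam power. This is an illustrative sketch only (the dissertation's processing is far more elaborate); array names, the travel-time input, and the wrap-around shift handling are simplifying assumptions.

```python
import numpy as np

def back_project(traces, dt, travel_times):
    """Teleseismic back-projection, a sketch.

    traces : (n_stations, n_samples) filtered seismograms on a common time base.
    dt : sample interval (s).
    travel_times : (n_grid, n_stations) predicted travel times from each candidate
        grid point to each station (e.g. from a reference Earth model).
    Returns beam power for each grid point as a function of time.
    """
    n_sta, n_samp = traces.shape
    n_grid = travel_times.shape[0]
    power = np.zeros((n_grid, n_samp))
    for g in range(n_grid):
        shifts = np.round(travel_times[g] / dt).astype(int)
        beam = np.zeros(n_samp)
        for s in range(n_sta):
            beam += np.roll(traces[s], -shifts[s])   # remove predicted delay; edge wrap-around ignored here
        power[g] = (beam / n_sta) ** 2               # beam power of the linear stack
    return power
```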
