31

Broadband IR Stokes polarimetry for the electro-optic characterization of cadmium zinc telluride

FitzGerald, William 21 December 2017 (has links)
The infrared portion of the electromagnetic spectrum is a challenging region in which to perform optical techniques, limited by both device efficiency and availability. In this dissertation, a new optical technique is introduced to facilitate polarization state measurement across the mid-IR. In addition, cadmium zinc telluride (CZT) is investigated as a potential new material for electro-optic devices that function in the mid-IR, and it is also characterized by other optical analysis methods. Thin film interference is discussed as it relates to optical techniques and electronic devices. A Stokes polarimeter is used to study oxide development on the surface of CZT electronic devices, and the effect of natural thin films on substrates used in optical techniques is discussed. In particular, the impact of thin film interference on sum-frequency generation spectroscopy measurements of methyl group orientation is assessed. An FTIR source operated in step-scan mode is used to create a broadband IR Stokes polarimeter which measures the polarization state of light from 2.5-11 μm simultaneously. Its design, involving two photo-elastic modulators and an analyzer, is described in detail along with the underlying theory. The instrument is demonstrated by measuring linearly polarized light, and is applied to measuring the refractive index dispersion of quartz from 2.5-4 μm, extending beyond the range of literature values. Electro-optic crystals of CZT with electrodes of gold and indium are characterized at each wavelength in the mid-IR in terms of their electro-optic effects and apparent depolarization using the Stokes polarimeter. The material displays high resistivity, allowing it to be operated with up to 5 kV of applied DC voltage. The linear electro-optic effect is observed, but the overall properties of the samples are found to depend heavily on the choice of electrode metal. With a high-work-function electrode material such as gold, a large depletion region is created when high voltage is applied, which leads to a gradient in electric field throughout the material. This causes a transmitted beam of light to experience a distribution of electro-optic behaviours, leading to overall depolarization of the light. Indium's work function is lower than gold's and closer to that of CZT. With indium electrodes, the electric field is found to be more uniform, and behaviour is much closer to ideal. The electro-optic effect of CZT is also characterized with AC applied voltage in order to assess its suitability for AC applications. The power supply used for this was limited to 60 Hz, which precludes a complete characterization in this regard, but unexpected behaviour was observed. A methodology utilizing an oscilloscope and FTIR was developed to understand the material response more completely, and divergent behaviour with positive and negative voltage was found. / Graduate / 2018-12-18
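As background for the instrument design described above, a Mueller-calculus model makes the measurement principle concrete. The sketch below is my illustration, not code from the dissertation; the PEM frequencies, retardance amplitude, and element orientations are assumed values chosen only to show how harmonics of the detected intensity encode the Stokes parameters.

```python
# Mueller-calculus sketch of a dual-PEM Stokes polarimeter.
# All numbers (PEM frequencies, retardance amplitude, element angles)
# are illustrative assumptions, not values from the dissertation.
import numpy as np

def rotator(theta):
    """Mueller rotation matrix for an element oriented at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def retarder(delta, theta):
    """Linear retarder with retardance delta, fast axis at angle theta."""
    m = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, np.cos(delta), np.sin(delta)],
                  [0, 0, -np.sin(delta), np.cos(delta)]])
    return rotator(-theta) @ m @ rotator(theta)

def polarizer(theta):
    """Ideal linear polarizer (the analyzer) at angle theta."""
    horiz = 0.5 * np.array([[1, 1, 0, 0],
                            [1, 1, 0, 0],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0]])
    return rotator(-theta) @ horiz @ rotator(theta)

S_in = np.array([1.0, 1.0, 0.0, 0.0])   # test input: horizontal linear polarization

f1, f2, d0 = 50e3, 60e3, 2.405          # PEM frequencies (Hz), retardance amplitude (rad)
t = np.linspace(0, 1e-3, 20000)         # 1 ms record, well above Nyquist for both PEMs
I_det = np.empty_like(t)
for i, ti in enumerate(t):
    pem1 = retarder(d0 * np.sin(2 * np.pi * f1 * ti), 0.0)
    pem2 = retarder(d0 * np.sin(2 * np.pi * f2 * ti), np.pi / 4)
    I_det[i] = (polarizer(np.pi / 8) @ pem2 @ pem1 @ S_in)[0]

# Harmonics of f1 and f2 (and their combinations) in the detected intensity
# encode Q, U and V; demodulating them recovers the full Stokes vector.
spectrum = np.abs(np.fft.rfft(I_det))
```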
32

Motion correction in high-field MRI

Sulikowska, Aleksandra January 2016 (has links)
The work described in this thesis was conducted at the University of Nottingham in the Sir Peter Mansfield Imaging Centre, between September 2011 and 2014. Subject motion in high-resolution magnetic resonance imaging (MRI) is a major source of image artefacts. It is a very complex problem, due to the variety of physical motion types, imaging techniques, and k-space trajectories. Many techniques have been proposed over the years to correct images for motion, all seeking the best practical solution for clinical scanning: one that gives cost-effective, robust, and highly accurate correction without decreasing patient comfort or prolonging the scan time. Moreover, if the susceptibility-induced field changes due to head rotation are large enough, they will compromise motion correction methods. In this work a method for prospective correction of head motion in MR brain imaging at 7 T is proposed, employing novel NMR tracking devices not previously presented in the literature. The device presented in this thesis is characterized by a high accuracy of position measurement (0.06 ± 0.04 mm), is very practical, and stands a good chance of being used in routine imaging in the future. This study also investigated the significance of susceptibility-induced field changes in the human brain due to small head rotations (±10 deg). The size and location of these field changes were characterized, and their effects on the image were then simulated. The results show that the field shift may be as large as 18.3 Hz/deg in magnitude. For a standard gradient-echo sequence at 7 T and a typical head movement, the simulated image distortions were on average equal to 0.5%, and no larger than 15% of the brightest voxel. This is not likely to compromise motion correction, but may be significant in some imaging sequences.
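For context, a back-of-the-envelope calculation (mine, not the thesis's) converts the reported worst-case field shift into the phase error accrued by echo time in a gradient-echo scan; the echo time and rotation angle here are illustrative assumptions.

```python
# Phase error produced by a susceptibility-induced field shift in a
# gradient-echo acquisition. The -18.3 Hz/deg slope is the thesis's
# worst-case figure; TE and the rotation angle are assumed.
import numpy as np

field_shift_per_deg = -18.3   # Hz/deg (maximum magnitude reported in the thesis)
rotation_deg = 2.0            # a small, "typical" head rotation (assumed)
TE = 20e-3                    # s, illustrative gradient-echo echo time

df = field_shift_per_deg * rotation_deg   # local off-resonance, Hz
phase_error = 2 * np.pi * df * TE         # radians accrued by the echo

print(f"off-resonance: {df:.1f} Hz")
print(f"phase error at TE: {np.degrees(phase_error):.0f} deg")
```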
33

A GPU parallel approach improving the density of patch-based multi-view stereo reconstruction

Haines, Benjamin A. January 2016 (has links)
Multi-view stereo is the process of recreating three-dimensional data from a set of two or more images of a scene. The ability to acquire 3D data from 2D images is a core concept in computer vision with wide-ranging applications throughout areas such as 3D printing, robotics, recognition, and navigation, among many other fields. While 3D reconstruction has been increasingly well studied over the past decades, it is only with the recent evolution of CPU and GPU technologies that practical implementations, able to accurately, robustly and efficiently capture 3D data of photographed objects, have begun to emerge. Whilst current research has been shown to perform well under specific circumstances and for a subset of objects, many practical and implementation issues remain open problems for these techniques, most notably the ability to robustly reconstruct objects from sparse image sets or objects with low texture. Alongside a review of algorithms within the multi-view field, the work proposed in this thesis outlines a massively parallel patch-based multi-view stereo pipeline for static scene recovery. By utilising advances in GPU technology, a particle swarm algorithm implemented on the GPU forms the basis for improving the density of patch-based methods. The novelty of this approach is that it removes the reliance on feature matching and gradient descent, to better account for the optimisation of patches within textureless regions, where current methods struggle. An enhancement to the photo-consistency matching metric, which is used to evaluate the optimisation of each patch, is then defined; it specifically targets the shortcomings of the photo-consistency metric when used inside a particle swarm optimisation, increasing its effectiveness over textureless areas. Finally, a multi-resolution reconstruction system based on a wavelet framework is presented to further improve the robustness of reconstruction over low-textured regions.
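To make the optimisation strategy concrete, here is a minimal sketch of a particle swarm optimiser over patch parameters (a depth plus two orientation angles). The photo-consistency function is a toy stand-in: in a real pipeline it would project the patch into each view and score, e.g., normalised cross-correlation. None of this is the author's implementation.

```python
# Particle swarm optimisation of a surface patch, sketched with a toy
# objective. Coefficients w, c1, c2 are standard PSO defaults (assumed).
import numpy as np

def photo_consistency(params):
    """Placeholder score; a real version would compare patch projections
    across views (e.g., mean NCC). Peak here is at depth=1.2, angles=0."""
    depth, ax, ay = params
    return -((depth - 1.2) ** 2 + ax ** 2 + ay ** 2)

rng = np.random.default_rng(0)
n, dim, iters = 30, 3, 100
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration terms

x = rng.uniform(-1, 2, (n, dim))              # particle positions
v = np.zeros((n, dim))                        # particle velocities
pbest = x.copy()                              # per-particle best positions
pbest_f = np.array([photo_consistency(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()        # swarm-wide best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([photo_consistency(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("recovered patch parameters:", gbest)
```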
34

New methodology for optical sensing and analysis

Bakker, Jimmy W. P. January 2004 (has links)
This thesis describes the research I have done, and partly will do, during my time as a PhD student in the laboratory of Applied Optics at Linköping University. Due to circumstances beyond the scope of this book, this incorporates three quite different projects. The first two, involving gas sensing and measuring on paper with ellipsometry, have been discontinued, whereas the third one, measuring fluorescence with a computer screen and web camera, is in full progress and will be until I complete my studies. Thus the purpose of this work also has several aspects. Partly, it describes performed research and its results, as well as theoretical background. On the other hand, it provides practical and theoretical background necessary for future work. While the three projects are truly quite different, each of them has certain things in common with each of the others. This is certainly also true for the necessary theory. Two of them involve spectroscopic ellipsometry, for example, while another pair needs knowledge of color theory, etc. This makes it impossible to separate the projects, despite their differences. Hopefully, these links between the different projects, connecting the different chapters, will make this work whole and consistent in its own way. / Report code: LiU-TEK-LIC-2004-19. On the day of the public defence the status of article I was: In press; the status of article III was: Manuscript, with a new title. The old title was Computer screen photo-assisted spectroscopic fluorimetry.
35

Advances in Radiation Heat Transfer and Applied Optics, Including Application of Machine Learning

Yarahmadi, Mehran 14 January 2021 (has links)
Artificial neural networks (ANNs) have been widely used in many engineering applications. This dissertation applies ANNs in the field of radiation heat transfer and applied optics. The topics of interest include both forward and inverse problems. Forward problems involve applications in which numerical simulation is expensive in terms of time consumption and resource utilization; artificial neural networks can be applied to these problems to speed up the process and reduce the required resources. The Monte Carlo ray-trace (MCRT) method is the state-of-the-art approach for modeling radiation heat transfer, but it has the disadvantage of being a complex and computationally expensive process. In this dissertation, after first identifying the uncertainties associated with the MCRT method, artificial neural networks are proposed as an alternative whose computational cost is greatly reduced compared to the traditional MCRT method. Inverse problems are concerned with situations in which the effects of a phenomenon are known but the cause is unknown. In such problems, available data in conjunction with ANNs provide an effective tool to derive an inverse model for recovering the cause of the phenomenon. Two problems are studied in this context. The first concerns an imager for which the readout power distribution is available and the viewed scene is of interest. Absorbed power distributions on the microbolometer array making up the imager are produced from discretized scenes using a high-fidelity Monte Carlo ray-trace model. The resulting readout array/scene pairs are then used to train an inverse ANN. It is demonstrated that a properly trained ANN can convert the readout power distribution into an accurate image of the corresponding discretized scene. The recovered scene is helpful for monitoring the Earth's radiant energy budget. In the second problem, the collection of scattered radiation by a sun-photometer, or aureolemeter, is simulated using the MCRT method. The angular distribution of this radiation is summarized using the probability density function (PDF) of the incident angles on a detector. Atmospheric water cloud droplets are known to play an important role in determining the Earth's radiant energy budget and, by extension, the evolution of its climate. An extensive dataset is produced using an improved atmospheric scattering model. This dataset is then used to train and test an inverse ANN capable of recovering water cloud droplet properties from solar aureole observations. / Doctor of Philosophy / This dissertation is intended to extend research in the field of theoretical and experimental radiation heat transfer and applied optics. It is specifically focused on more precise implementation of radiation heat transfer, prediction of the temperature evolution of the Earth's ocean-atmosphere system, and identification of the atmospheric properties of water clouds using the tools of machine learning and artificial neural networks (ANNs). The results of this dissertation can be applied to the conception of advanced radiation and optical modeling tools capable of significantly reducing the computer resources required to model global-scale atmospheric radiation problems.
The material in this dissertation is organized around the following three problems, all addressed using ANNs:
1. Application of artificial neural networks to radiation heat transfer. The application of artificial neural networks, which are the basis of AI methodologies, to a variety of real-world problems is an ongoing, active research area. Artificial intelligence, or machine learning, is a state-of-the-art technology that is ripe for applications in the field of remote sensing and applied optics. Here a deep-learning algorithm is developed for predicting radiation heat transfer behavior as a function of input parameters such as surface models and the temperature of the enclosures of interest. ANN-based algorithms are very fast, so developing them to replace ray-trace calculations, whose execution currently dominates the run-time of MCRT algorithms, is useful for speeding up the computational process.
2. Numerical focusing of a wide-field-angle Earth radiation budget imager using an artificial neural network. Traditional Earth radiation budget (ERB) instruments consist of downward-looking telescopes in low Earth orbit (LEO) which scan back and forth across the orbital path. While proven effective, such systems incur significant weight and power penalties and may be susceptible to eventual mechanical failure. This dissertation supports a novel approach using ANNs in which a wide-field-angle imager is placed in LEO and the resulting astigmatism is corrected algorithmically. This technology promises to improve the performance of freeform optical systems proposed by NASA for Earth radiation budget monitoring.
3. Recovering water cloud droplet properties from solar aureole photometry using ANNs. Atmospheric aerosols are known to play an important role in determining the Earth's radiant energy budget and, by extension, the evolution of its climate. Data obtained during aerosol field studies have already been used in the vicarious calibration of space-based sensors, and they could also prove useful in refining the angular distribution models (ADMs) used to interpret the contribution of reflected solar radiation to the planetary energy budget. Atmospheric aerosol loading contributes to the variation in radiance with zenith angle in the circumsolar region of the sky. Measurements obtained using a sun-photometer have been interpreted in terms of the aerosol single-scattering phase function, droplet size distribution, and aerosol index of refraction, all of which are of fundamental importance in understanding planetary weather and climate. While aerosol properties may also be recovered using lidar, this dissertation explores a novel approach for recovering them via sun-photometry. The atmospheric scattering model developed here can be used to produce the extensive dataset required to compose, train, and test an artificial neural network capable of recovering water cloud droplet properties from solar aureole observations.
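To illustrate why an ANN surrogate for MCRT is attractive, the sketch below (assumptions mine, not the dissertation's code) estimates a radiative configuration factor between two coaxial unit squares by brute-force ray tracing. Each low-variance estimate needs on the order of a million rays, which is exactly the run-time cost a trained network avoids.

```python
# Bare-bones Monte Carlo ray trace: fraction of diffusely emitted rays
# from a unit square at z=0 that strike a coaxial unit square at z=h.
# Geometry and ray count are illustrative assumptions.
import numpy as np

def mcrt_config_factor(h, n_rays=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    # Emission sites uniformly distributed on the lower plate
    x0 = rng.random(n_rays)
    y0 = rng.random(n_rays)
    # Lambertian (cosine-weighted) direction sampling
    theta = np.arcsin(np.sqrt(rng.random(n_rays)))   # polar angle
    phi = 2 * np.pi * rng.random(n_rays)             # azimuth
    # Propagate each ray to the plane z = h and test for a hit
    t = h / np.cos(theta)
    x1 = x0 + t * np.sin(theta) * np.cos(phi)
    y1 = y0 + t * np.sin(theta) * np.sin(phi)
    hits = (0 <= x1) & (x1 <= 1) & (0 <= y1) & (y1 <= 1)
    return hits.mean()

print(mcrt_config_factor(0.5))   # configuration factor for plate spacing h = 0.5
```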
36

Studies On The Effect Of Closed Loop Controls On The Stability Of High Repetition Rate Copper Vapour Laser Pumped Dye Laser

Saxena, Piyush 10 1900 (has links)
Copper vapour laser (CVL) pumped high repetition rate narrow bandwidth dye lasers are an important source of tunable radiation. They find numerous applications in spectroscopic investigations and selective material processing such as atomic vapour laser isotope separation (AVLIS). Since these applications are wavelength selective, the stability of the output wavelength and bandwidth is extremely important. The stability of these parameters depends upon refractive index fluctuations of the dye medium (due to pump-beam-induced temperature gradients and dye solution flow) and the mechanical stability of the optical components. Precise measurement of the wavelength and bandwidth of a dye laser, and control over the parameters governing their variations, are important for any stable dye laser system. In this thesis, details of investigations carried out on a Rhodamine 6G dye laser for obtaining stable wavelength and output power are presented. Parameters that affect the stability were identified, monitored, and put under closed-loop control to achieve the desired stability. These parameters are mainly the pump beam (CVL) optical power, the dye flow rate, and the dye solution temperature. CVL power is mainly a function of the input electrical power and the pressure of the buffer gas inside the tube. To monitor and regulate these parameters, different sensors and actuators were selected and interfaced with a data acquisition and control system based on a master-slave topology. The DAQ and control system is designed around a microcontroller card based on the advanced CPU P80552 and has on-chip 8-channel 10-bit multiplexed analog input, 16 TTL digital inputs, and 16 digital outputs. It works as the slave, with a PC as the master. The following closed loops were designed and incorporated to maintain a stable output:
a. The average output of the CVL was maintained constant by regulating the electrical input power through closed-loop control.
b. The buffer gas pressure was monitored with a semiconductor pressure sensor and regulated using pulse width modulation.
c. The temperature of the dye solution was monitored with a PT100 and controlled using a proportional controller.
d. The flow rate of the dye solution was controlled using a variable frequency drive (VFD) for the dye circulation pump.
e. The dye laser wavelength was monitored using a high-resolution spectrograph, and the pixel position of the peak in the CCD image from the spectrograph was used for feedback correction via a picomotor.
In the present work, with the application of the above-mentioned input power and pressure loops, a stable CVL output is achieved. Variations in the power and pulse width of the CVL were limited to within 2%, compared to 10% when the CVL system was working unregulated. This control system performs line regulation and corrects the input electrical power if variations in discharge current occur due to pressure variation. Every dye cell has limits on flow rate because of its geometry; with flow and temperature control, the dye cell was characterized to work with a lower linewidth. A VFD (variable frequency drive) is used for flow regulation. Finally, active control of the set wavelength was also achieved, with a resolution of 0.01 nm. Wavelength measurement was performed with a 0.3 m spectrograph of 0.054 nm resolution. A closed-loop picomotor with 30 nm per step linear resolution was used for wavelength control. The thesis is organized in four chapters. The first chapter presents a brief introduction to the high repetition rate CVL pumped dye laser, the operation of a CVL, and the parameters affecting dye laser stability and their control schemes.
The literature survey in this chapter focuses on the different control mechanisms used with such lasers. The second chapter describes the laser system and the interfacing of the data acquisition system used for the experimental setup. Closed-loop controls for the different parameters are described in this chapter, along with the software algorithms developed for this work. The third chapter presents experimental results and analysis, with a discussion of the performance of the control loops. Finally, the conclusion is given and a few suggestions are made for further work.
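As a concrete illustration of loop (c), the following toy simulation — not the thesis's microcontroller code; the setpoint, gain, and plant model are all assumed — shows a proportional temperature controller acting on a first-order thermal plant, and the steady-state offset that is characteristic of pure proportional control.

```python
# Toy proportional control of dye-solution temperature (loop c above).
# Setpoint, gain, and the first-order plant model are assumed values;
# the actual system ran on a P80552 microcontroller-based DAQ.
SETPOINT_C = 25.0   # desired dye temperature, degC (assumed)
KP = 40.0           # proportional gain, % heater power per degC (assumed)

temp = 21.0         # start at ambient
for step in range(600):                        # 10 minutes at a 1 Hz loop rate
    error = SETPOINT_C - temp
    power = min(max(KP * error, 0.0), 100.0)   # clamp heater duty to 0-100 %
    # first-order plant: heating from the actuator, loss to 21 degC ambient
    temp += 0.002 * power - 0.01 * (temp - 21.0)

# A pure proportional loop settles below the setpoint (steady-state droop),
# one reason practical loops add integral action or careful gain selection.
print(f"temperature after 10 min: {temp:.2f} C")
```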
37

Denoising And Inpainting Of Images: A Transform Domain Based Approach

Gupta, Pradeep Kumar 07 1900 (has links)
Many scientific data sets are contaminated by noise, either because of the data acquisition process or because of naturally occurring phenomena. A first step in analyzing such data sets is denoising, i.e., removing additive noise from a noisy image. For images, noise suppression is a delicate and difficult task: a trade-off between noise reduction and the preservation of actual image features has to be made in a way that enhances the relevant image content. The opening chapter of this thesis is introductory in nature and discusses popular denoising techniques in the spatial and frequency domains. The wavelet transform has wide applications in image processing, especially in the denoising of images. Wavelet systems are a set of building blocks that represent a signal in an expansion set involving indices for time and scale, allowing multi-resolution representation of signals. Several well-known denoising algorithms exist in the wavelet domain which penalize noisy coefficients by thresholding them. We discuss wavelet-transform-based denoising of images using bit planes. This approach preserves the edges in an image. The proposed approach relies on the fact that the wavelet transform allows the denoising strategy to adapt itself according to the directional features of coefficients in the respective sub-bands. Further, issues related to low-complexity implementation of this algorithm are discussed. The proposed approach has been tested on different sets of images under different noise intensities. Studies have shown that this approach provides a significant reduction in normalized mean square error (NMSE), and the denoised images are visually pleasing. Many image compression techniques still use the redundancy reduction property of the discrete cosine transform (DCT), so the development of a denoising algorithm in the DCT domain has practical significance. In chapter 3, a DCT-based denoising algorithm is presented. In general, the design of filters largely depends on a priori knowledge about the type of noise corrupting the image and about the image features. This makes standard filters application- and image-specific. The most popular filters, such as average, Gaussian, and Wiener filters, reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance high-frequency details, making the image non-smooth. An integrated approach to designing filters based on the DCT is proposed in chapter 3. This algorithm reorganizes DCT coefficients in a wavelet-transform-like manner to obtain better energy clustering at the desired spatial locations. An adaptive threshold is chosen, because such adaptivity can improve wavelet-threshold performance by incorporating additional local information about the image into the algorithm. Evaluation results show that the proposed filter is robust under various noise distributions and does not require any a priori knowledge about the image. Inpainting is another application that comes under the category of image processing. Inpainting provides a way to reconstruct small damaged portions of an image. Filling in missing data in digital images has a number of applications, such as image coding and wireless image transmission (for recovering lost blocks), special effects (e.g., removal of objects), and image restoration (e.g., removal of solid lines and scratches, and noise removal).
In chapter 4, a wavelet-based inpainting algorithm is presented for reconstruction of small missing and damaged portions of an image while preserving the overall image quality. This approach exploits the directional features that exist in the wavelet coefficients of the respective sub-bands. The concluding chapter presents a brief review of the three new approaches: the wavelet- and DCT-based denoising schemes and the wavelet-based inpainting method.
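For readers unfamiliar with the thresholding step these chapters build on, here is a textbook wavelet soft-thresholding baseline — not the thesis's bit-plane or DCT algorithm — using PyWavelets and the common universal-threshold choice.

```python
# Generic wavelet soft-thresholding denoiser (textbook baseline).
# Wavelet, level, and the universal threshold are standard assumed choices.
import numpy as np
import pywt

def wavelet_denoise(noisy, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Robust noise estimate (MAD) from the finest diagonal sub-band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(1)
clean = np.outer(np.hanning(128), np.hanning(128))    # smooth test image
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
restored = wavelet_denoise(noisy)
nmse = np.mean((restored - clean) ** 2) / np.mean(clean ** 2)
print(f"NMSE after denoising: {nmse:.4f}")
```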
38

An investigation of automatic processing techniques for time-lapse microscope images

Li, Yuexiang January 2016 (has links)
The analysis of time-lapse microscope images is a popular recent research topic. Processing techniques have been employed in such studies to extract important information about cells—e.g., cell number or alterations of cellular features—for various tasks. However, few studies provide acceptable results in practical applications, because they cannot simultaneously solve the core challenges shared by most cell datasets: the image contrast is extremely low, the distribution of grey scale is non-uniform, the images are noisy, the number of cells is large, and so on. These factors also make manual processing an extremely laborious task. To improve the efficiency of related biological analyses and disease diagnoses, this thesis establishes a framework in these directions. A new segmentation method for cell images is designed as the foundation of an automatic approach for the measurement of cellular features; the newly proposed segmentation method achieves substantial improvements in the detection of cell filopodia. An automatic measuring mechanism for cell features is established within the designed framework; this measuring component enables the system to provide quantitative information about various cell features that are useful in biological research. A novel cell-tracking framework is constructed to monitor the alterations of cells, with a cell-tracking accuracy above 90%. To address the issue of processing speed, two fast-processing techniques have been developed for edge detection and visual tracking. For edge detection, the new detector is a hybrid approach based on the Canny operator and fuzzy entropy theory: the method calculates the fuzzy entropy of gradients from an image to decide the threshold for the Canny operator. For visual tracking, a newly defined feature is employed in the fast-tracking mechanism to recognize different cell events, with a tracking accuracy of 97.66% and a processing speed of 0.578 s/frame.
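A sketch of the hybrid edge-detection idea follows. The membership function and entropy definition are common textbook choices and an assumption on my part; the thesis's exact formulation may well differ.

```python
# Hybrid Canny / fuzzy-entropy edge detection sketch. The De Luca-Termini
# style entropy and the membership function are assumed textbook choices.
import numpy as np
import cv2

def fuzzy_entropy_threshold(grad_mag, n_levels=256):
    """Pick the gradient level whose membership split minimises fuzzy entropy."""
    g = (grad_mag / grad_mag.max() * (n_levels - 1)).astype(int)
    hist = np.bincount(g.ravel(), minlength=n_levels) / g.size
    levels = np.arange(n_levels)
    best_t, best_h = 1, np.inf
    for t in range(1, n_levels - 1):
        mu = 1.0 / (1.0 + np.abs(levels - t) / n_levels)   # membership in "edge"
        p = np.clip(mu, 1e-12, 1 - 1e-12)
        h = -np.sum(hist * (p * np.log(p) + (1 - p) * np.log(1 - p)))
        if h < best_h:
            best_t, best_h = t, h
    return best_t * grad_mag.max() / (n_levels - 1)

# Synthetic "cell" frame, since real time-lapse data is not bundled here
img = np.zeros((128, 128), np.uint8)
cv2.circle(img, (64, 64), 30, 255, -1)
img = cv2.GaussianBlur(img, (7, 7), 2)

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
t_high = fuzzy_entropy_threshold(np.hypot(gx, gy))
edges = cv2.Canny(img, 0.5 * t_high, t_high)   # low threshold = half the high
```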
39

Estimation Of Object Shape From Scattered Field

Buvaneswari, A 11 1900 (has links)
The scattered field from an object, when illuminated with ultrasound, is useful in the reconstruction of its cross section—a problem broadly classified as 'tomography'. In many situations in medical imaging, we are interested in knowing the location and extent of growth of an inhomogeneity. Maximum Likelihood (ML) estimation of the location and the shape parameters (scale and orientation angle) has been carried out, along with the corresponding CR bounds, for the case of weakly scattering objects, where the Fourier Diffraction Theorem (FDT) holds. It has been found that a priori information in the form of a reference object function drastically reduces the number of receivers and illuminations required. For a polygonal object, the shape is specified once the corner locations are known. We formulate this as a problem of estimating the frequencies of a sum of undamped sinusoids, resulting in a substantial reduction in the number of illuminations and receivers required. For acoustically soft and rigid polygons, where the FDT does not hold, the necessary theory is developed to show the dependence of the scattered field on the corner locations, using an On Surface Radiation Condition (OSRC). The corner locations are then estimated along lines similar to those adopted for weakly scattering objects.
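The reduction to sinusoidal frequency estimation can be illustrated directly. In the sketch below (illustrative numbers, mine), the frequencies of a noisy sum of undamped sinusoids are recovered from periodogram peaks, which approximates the ML estimate for well-separated tones in white noise.

```python
# Recover the frequencies of a sum of undamped sinusoids from
# periodogram peaks. Frequencies and noise level are assumed test values.
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
true_freqs = [0.11, 0.23, 0.37]       # normalised frequencies (the "corners")
signal = sum(np.cos(2 * np.pi * f * t) for f in true_freqs)
observed = signal + 0.5 * rng.standard_normal(n)

# Periodogram, then pick the three largest local maxima
spec = np.abs(np.fft.rfft(observed)) ** 2
freqs = np.fft.rfftfreq(n)
peaks = [i for i in range(1, len(spec) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
top3 = sorted(sorted(peaks, key=lambda i: spec[i])[-3:])
print("estimated frequencies:", [round(freqs[i], 3) for i in top3])
```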
40

Strategies For Rapid MR Imaging

Sinha, Neelam 06 1900 (has links)
In MR imaging, techniques for acquisition of reduced data (rapid MR imaging) are being explored to obtain high-quality images that satisfy the conflicting requirements of simultaneously high spatial and temporal resolution, as required for functional studies. The term "rapid" is used because reduction in the volume of data acquired leads to faster scans. The objective is to obtain high acceleration factors, since the acceleration factor indicates the ability of a technique to yield high-quality images with reduced data (and in turn, reduced acquisition time). Reduced data acquisition in conventional (sequential) MR scanners, where a single receiver coil is used, can be achieved either by acquiring only certain k-space regions or by regularly undersampling the entire k-space. In parallel MR scanners, where multiple receiver coils are used to acquire high-SNR data, reduced data acquisition is typically accomplished using regular undersampling. Optimal region selection in 3D k-space (restricted to the ky-kz plane, since kx is the readout direction) needs to satisfy "maximum energy compaction" and "minimum acquisition" requirements. In this thesis, a novel star-shaped truncation window is proposed to increase the achievable acceleration factor. The proposed window preferentially cuts down the acquisition of k-space samples with lower energy, sampling data within a star-shaped region centred around the origin in the ky-kz plane. The missing values are extrapolated using generalized-series-modeling-based methods. The proposed method is applied to several real and synthetic data sets, and its superior performance is illustrated using the standard measures of error images and uptake-curve comparisons. Average values of the slope error in estimating the enhancement curve are obtained over 5 real data sets of breast and abdomen images, for an acceleration factor of 8: the proposed method results in a slope error of 5%, while the values obtained using rectangular and elliptical windows are 12% and 10%, respectively. k-t BLAST, a popular method used in cardiac and functional brain imaging, involves regular undersampling; however, the method suffers from drawbacks such as a separate training scan, blurred training estimates, and aliased phase maps. In this thesis, variations to k-t BLAST are proposed to overcome these drawbacks. The proposed improved k-t BLAST incorporates a variable-density sampling scheme, phase information from the training map, and a generalized-series-extrapolated training map. The advantage of using a variable-density sampling scheme is that the training map is obtained from the actual acquisition instead of a separate pilot scan. Besides, phase information from the training map is used in place of phase from the aliased map, and the generalized-series-extrapolated training map is used instead of the zero-padded training map, leading to better estimation of the unacquired values. The existing technique and the proposed variations are applied to real fMRI data volumes. An improvement of up to 10 dB in the PSNR of activation maps is obtained, along with a reduction of 10% in RMSE over the entire time series of fMRI images. The peak improvement of the proposed method over k-t BLAST is 35%, averaged over 5 data sets. Most image reconstruction techniques in parallel MR imaging utilize knowledge of coil sensitivities for image reconstruction, along with assumptions about the image reconstruction function.
This thesis proposes an image reconstruction technique that neither needs to estimate coil sensitivities nor makes any assumptions about the image reconstruction function. The proposed Cartesian parallel imaging approach using neural networks, called "Composite image Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is a novel approach based on the observation that the aliasing patterns remain the same irrespective of whether the k-space acquisition consists of only low frequencies or the entire range of k-space frequencies. In the proposed approach, image reconstruction is performed within a neural network framework. Data acquisition follows a variable-density sampling scheme, where low k-space frequencies are densely sampled while the rest of k-space is sparsely sampled. The blurred, unaliased images obtained from the densely sampled low-frequency k-space data are used to train the neural network. The image is reconstructed by feeding the trained network the aliased images obtained from the regularly undersampled k-space spanning the entire range of frequencies. The proposed approach has been applied to the Shepp-Logan phantom as well as real brain MRI data sets. A visual error measure for estimating image quality used in the compression literature, the SSIM (Structural SIMilarity) index, is employed. The average SSIM for the noisy Shepp-Logan phantom (SNR = 10 dB) using the proposed method is 0.68, while those obtained using GRAPPA and SENSE are 0.6 and 0.42, respectively. For the phantom superimposed with a fine grid-like structure, the average SSIM index obtained with the proposed method is 0.7, while those for GRAPPA and SENSE are 0.5 and 0.37, respectively. Image reconstruction is more challenging with reduced data acquired using non-Cartesian trajectories, since the aliasing introduced is not localized. CGSENSE, a popular technique for non-Cartesian parallel imaging, suffers from drawbacks such as sensitivity to noise and the requirement of good coil estimates, while radial/spiral GRAPPA requires complete identical scans to obtain reconstruction kernels for specific trajectories. In our work, the proposed neural-network-based reconstruction method, CRAUNN, is shown to work for general non-Cartesian acquisitions such as spiral and radial as well. In addition, the proposed method requires neither coil estimates nor trajectory-specific customized reconstruction kernels. Experiments are performed using radial and spiral trajectories on real and synthetic data, and compared with CGSENSE. Comparison of error images shows that the proposed method has far less residual aliasing than CGSENSE. The average SSIM indices for reconstructions using CRAUNN with spirally and radially undersampled data are comparable, at 0.83 and 0.87, respectively; the same measure for reconstructions using CGSENSE yields 0.67 and 0.69, respectively. The average RMSE values for reconstructions using CRAUNN with spirally and radially undersampled data are 11.1 and 6.1, respectively, versus 16 and 9.18 for CGSENSE.
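The truncation-window idea can be prototyped in a few lines. In the sketch below the window geometry is my assumption — a hyperbolic-cross region, which is star-shaped about the origin and keeps the energy-dense centre and axis ridges of k-space — and zero-filling stands in for the thesis's generalized-series extrapolation, so neither matches the thesis's exact method.

```python
# Star-like k-space acquisition mask in the ky-kz plane (assumed geometry),
# applied to a synthetic image with zero-filled reconstruction.
import numpy as np

n = 128
ky, kz = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
mask = (np.abs(ky) * np.abs(kz)) <= 0.004          # hyperbolic-cross "star"
print(f"acceleration factor: {mask.size / mask.sum():.1f}")

# Synthetic block "organ", truncated acquisition, zero-filled inverse FFT
img = np.zeros((n, n))
img[40:90, 30:100] = 1.0
kspace = np.fft.fft2(img)
recon = np.abs(np.fft.ifft2(kspace * mask))
nrmse = np.linalg.norm(recon - img) / np.linalg.norm(img)
print(f"zero-filled NRMSE: {nrmse:.3f}")
```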
