51

Analysis Of Multi-lingual Documents With Complex Layout And Content

Pati, Peeta Basa 11 1900 (has links)
A document image, besides text, may contain pictures, graphs, signatures, logos, barcodes, hand-drawn sketches and/or seals. Further, the text blocks in an image may be in Manhattan or any complex layout. Document layout analysis is an important preprocessing step before subjecting any such image to OCR. Here, the image with complex layout and content is segmented into its constituent components. For many present-day applications, separating the text from the non-text blocks is sufficient. This enables the conversion of the text elements present in the image to their corresponding editable form. In this work, an effort has been made to separate the text areas from the various kinds of possible non-text elements.

The document images may have been obtained from a scanner or a camera. If the source is a scanner, there is control over the scanning resolution and the lighting of the paper surface. Moreover, during the scanning process, the paper surface remains parallel to the sensor surface. However, when an image is obtained through a camera, these advantages are no longer available. Here, an algorithm is proposed to separate the text present in an image from the clutter, irrespective of the imaging technology used. This is achieved by using both the structural and textural information of the text present in the gray image. A bank of Gabor filters characterizes the statistical distribution of the text elements in the document. A connected-component-based technique removes certain types of non-text elements from the image.

When a camera is used to acquire document images, generally, along with the structural and textural information of the text, color information is also obtained. It can be assumed that text present in an image has a certain amount of color homogeneity. So, a graph-theoretical color clustering scheme is employed to segment the iso-color components of the image. Each iso-color image is then analyzed separately for its structural and textural properties. The results of such analyses are merged with the information obtained from the gray component of the image. This helps to separate the colored text areas from the non-text elements.

The proposed scheme is computationally intensive, because the separation of the text from non-text entities is performed at the pixel level. Since any entity is represented by a connected set of pixels, it makes more sense to carry out the separation only at specific points, selected as representatives of their neighborhood. Harris' operator evaluates an edge measure at each pixel and selects pixels that are locally rich in this measure. These points are then employed for separating text from non-text elements.

Many government documents and forms in India are bi-lingual or tri-lingual in nature. Further, in school text books, it is common to find English words interspersed within sentences in the main Indian language of the book. In such documents, successive words in a line of text may be of different scripts (languages). Hence, for OCR of these documents, the script must be recognized at the level of words, rather than lines or paragraphs. A database of about 20,000 words each from 11 Indian scripts (Bengali, Devanagari, Gujarati, Kannada, Malayalam, Odiya, Punjabi, Roman, Tamil, Telugu and Urdu) is created. This is so far the largest database of Indian words collected and deployed for script recognition. Here again, a bank of 36 Gabor filters is used to extract the feature vector which represents the script of the word. The effectiveness of Gabor features is compared with that of DCT, and it is found that Gabor features marginally outperform DCT. Simple, linear and non-linear classifiers are employed to classify the word in the feature space. It is assumed that a scheme developed to recognize the script of words would work equally well for sentences and paragraphs; this assumption has been verified with supporting results. A systematic study has been conducted to evaluate and compare the accuracy of various feature-classifier combinations for word script recognition. We have considered the cases of bi-script and tri-script documents, which are largely available. Average recognition accuracies for the bi-script and tri-script cases are 98.4% and 98.2%, respectively. A hierarchical blind script recognizer, involving all eleven scripts, has been developed and evaluated, which yields an average accuracy of 94.1%.

The major contributions of the thesis are:
• A graph-theoretic color clustering scheme is used to segment colored text.
• A scheme is proposed to separate text from the non-text content of documents with complex layout and content, captured by scanner or camera.
• Computational complexity is reduced by performing the separation task on a selected set of locally edge-rich points.
• Script identification at word level is carried out using different feature-classifier combinations; Gabor features with an SVM classifier outperform all other feature-classifier combinations.
• A hierarchical blind script recognition algorithm, involving the recognition of 11 Indian scripts, is developed. This structure employs the most efficient feature-classifier combination at each node of the tree to maximize system performance. A sequential forward feature selection algorithm is employed to select the most discriminating features, on a case-by-case basis, for script recognition.
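As an illustration of the word-level pipeline described above (a bank of 36 Gabor filters feeding an SVM), a minimal sketch is given below. The 36 filters arise here from six frequencies and six orientations; the specific frequency values, the mean-magnitude feature and the RBF kernel are assumptions for the sketch, not the exact settings reported in the thesis.

```python
# Hedged sketch: Gabor filter-bank features for a word image, classified with an SVM.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(word_img, frequencies=(0.1, 0.15, 0.2, 0.25, 0.3, 0.35),
                   n_orientations=6):
    """One feature per (frequency, orientation) pair: the mean magnitude
    of the Gabor response over the grayscale word crop (36 features here)."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(word_img, frequency=f, theta=theta)
            feats.append(np.hypot(real, imag).mean())
    return np.asarray(feats)

def train_script_classifier(word_images, labels):
    """word_images: list of 2-D grayscale word crops; labels: script id per word."""
    X = np.vstack([gabor_features(w) for w in word_images])
    clf = SVC(kernel='rbf', C=10.0, gamma='scale')   # RBF SVM, illustrative settings
    clf.fit(X, labels)
    return clf
```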
52

A Study Of Utility Of Smile Profile For Face Recognition

Bhat, Srikrishna K K 08 1900 (has links)
Face recognition is one of the most natural activities performed by human beings. It has a wide range of applications in the areas of human-computer interaction, surveillance, security, etc. Face information of people can be obtained in a non-intrusive manner, without violating privacy. However, robust face recognition that is invariant under varying pose, illumination, etc., is still a challenging problem. The main aim of this thesis is to explore the usefulness of the smile profile of human beings as an extra aid in recognizing people by their faces. The smile profile of a person is the sequence of images captured by a camera when the person voluntarily smiles. Using a sequence of images instead of a single image increases the required computational resources significantly. The challenge here is to design a feature extraction technique from a smile sample which is useful for authentication and is also efficient in terms of storage and computation.

There is some experimental evidence supporting the claim that facial expressions carry person-specific information. But, to the best of our knowledge, a systematic study of a particular facial expression for biometric purposes has not been done so far. The smile profile of human beings, captured under a reasonably controlled setup, is used here for the first time for face recognition. As a first step, we applied two recent subspace-based face classifiers on the smile samples. We were not able to obtain any conclusive results from this experiment. Next, we extracted features using only the difference vectors obtained from smile samples. The difference vectors depend only on the variations which occur in the corresponding smile profile; hence any characterization we obtain from such features can be fully attributed to the smiling action. The feature extraction technique we employed is very similar to PCA. The smile signature that we obtain is named the Principal Direction of Change (PDC). The PDC is a unit vector (in some high-dimensional space) which represents the direction in which the major changes occurred during the smile. We obtained a reasonable recognition rate by applying a Nearest Neighbor Classifier (NNC) on these features. In addition, these features turn out to be less sensitive to the speed of the smiling action and to minor variations in face detection and head orientation, while capturing the pattern of variations in various regions of the face due to the smiling action. Using a set of experiments on PDC-based features, we establish that the smile has some person-specific characteristics. However, the recognition rates of PDC-based features are lower than those of recent conventional techniques.

Next, we used PDC-based features to aid a conventional face classifier: smile signatures are used to reject some candidate faces. Our experiments show that, using smile signatures, we can reject some of the potential false candidates which would have been accepted by the conventional face classifier. Using this smile-signature-based rejection, the performance of the conventional classifier is improved significantly. This improvement suggests that the biometric information available in smile profiles does not exist in still images. Hence the usefulness of smile profiles for biometric applications is established through this experimental investigation.
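A hedged sketch of the Principal Direction of Change idea described above: difference vectors from an aligned smile sequence are stacked and the leading singular direction is kept as the signature, with nearest-neighbor matching by sign-invariant cosine similarity. The centering step and the distance measure are assumptions, not necessarily the thesis's exact choices.

```python
# Hedged sketch: PCA-like "Principal Direction of Change" (PDC) smile signature.
import numpy as np

def pdc_signature(frames):
    """frames: (T, H, W) aligned face crops from one smile sample."""
    X = frames.reshape(len(frames), -1).astype(float)
    D = np.diff(X, axis=0)                       # frame-to-frame difference vectors
    _, _, Vt = np.linalg.svd(D - D.mean(0), full_matrices=False)
    return Vt[0]                                 # unit vector: direction of major change

def nearest_neighbour(query_pdc, gallery_pdcs, gallery_ids):
    """Assign the identity whose gallery PDC is most aligned with the query PDC."""
    sims = [abs(np.dot(query_pdc, g)) for g in gallery_pdcs]   # sign-invariant similarity
    return gallery_ids[int(np.argmax(sims))]
```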
53

Development of a fan-beam optical computed tomography scanner for three-dimensional dosimetry

Campbell, Warren G. 07 September 2010 (has links)
The current state of a prototype fan-beam optical computed tomography scanner for three-dimensional radiation dosimetry is presented. The system uses a helium-neon laser and a line-generating lens for fan-beam creation. Five photodiode arrays form an approximate arc detector array of 320 elements. Two physical collimator options provide two levels of scatter rejection: single-slot (SS) and multi-hole (MH). A pair of linear polarizers has been introduced as a means of light intensity modulation. This work examined: (i) the characterization of system components, (ii) data acquisition and imaging protocols, and (iii) the scanning of an nPAG dosimeter.

(i) The polarizer-pair method of light intensity modulation was calibrated and the polarization sensitivity of the detector array was evaluated. The relationship between detected values and both light intensity and photodiode integration time was examined; this examination indicated the need for an offset correction applied to all data acquired by the system. Data corruption near the edges of each photodiode array was found to cause ring artefacts in image reconstructions. Two methods of extending the dynamic range of the system, via integration time and via light intensity, were presented. The use of master absorbent solutions and spectrophotometric data allowed for the preparation of absorption-based and scatter-based samples of known opacities, which in turn allowed the relative scatter-rejection capabilities of the system's two collimators to be evaluated. The MH collimator accurately measured highly attenuating solutions of both absorption-based and scatter-based agents. The SS collimator experienced some contamination by scattered light with absorption-based agents, and significant contamination with scatter-based agents. Also, using the SS collimator, a 'spiking' artefact was observed in highly attenuating samples of both solution types.

(ii) A change in imaging protocol is described that greatly reduces the ring artefacts that previously plagued the system. Scanning parameters related to the reference scan (I0) and data acquisition were evaluated with respect to image noise. Variations in flask imperfections were found to be a significant source of noise.

(iii) An nPAG dosimeter was prepared, planned for, irradiated, and imaged using the fan-beam system. In addition to ring artefacts caused by data corruption, refractive inhomogeneities and particulates in the gelatin were found to cause errors in image reconstructions. Otherwise, contour and percent depth dose comparisons between measured and expected values showed good agreement. These findings indicate that significant imaging gains may be achieved by performing pre-irradiation and post-irradiation scans of dosimeters.
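To make the calibration steps above concrete, the sketch below shows an idealised Malus-law model of the polarizer-pair intensity modulation and an offset-corrected conversion of detector readings to attenuation line integrals. The function and variable names, and the simple Beer-Lambert form, are assumptions for illustration; the scanner's actual calibration chain may differ.

```python
# Hedged sketch: polarizer-pair modulation model and offset-corrected projections.
import numpy as np

def polarizer_transmission(theta_rad, i_max, i_min=0.0):
    """Idealised Malus-law model: transmitted intensity vs. relative polarizer angle."""
    return i_min + (i_max - i_min) * np.cos(theta_rad) ** 2

def attenuation_projection(data_scan, reference_scan, offset):
    """Offset-corrected attenuation line integrals (Beer-Lambert form).
    data_scan / reference_scan: detector readings with and without the dosimeter;
    offset: dark reading of each detector element."""
    signal = np.clip(data_scan - offset, 1e-12, None)
    reference = np.clip(reference_scan - offset, 1e-12, None)
    return np.log(reference / signal)
```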
54

Optical Interrogation of the 'Transient Heat Conduction' in Dielectric Solids - A Few Investigations

Balachandar, S January 2015 (has links) (PDF)
Optically transparent solids play a significant role in many emerging topics of fundamental and applied research in areas related to applied optics and photonics. In functional devices based on them, the presence of time-varying temperature fields critically limits the achievable performance, particularly in high-power laser-related tasks such as light generation, light amplification and nonlinear harmonic conversion. For the optimization of these devices, accurate knowledge of the material thermal parameters is essential. Many optical and non-optical methods are currently in use for the reliable estimation of these parameters. The thermal diffusivity is a key parameter for dealing with transient heat transport problems. Although its importance in practical thermal-management design is well understood, its physical meaning continues to be esoteric. The present effort concerns a few investigations on the optical interrogation of transient thermal conduction in dielectric solids.

In dielectric solids, the current understanding is that conductive heat transport occurs only through phonons associated with microscopic lattice vibrations. Introducing, for the first time, a virtual linear translatory motion as the basis for heat conduction in dielectric materials, the present investigation discusses an alternative physical mechanism and a new analytical model for transient heat conduction in dielectric solids. The model brings to light a new law of motion and a new quantity which can be defined at every point in the material through which time-varying heat flows, resulting in a time-varying temperature. Physically, this quantity is a measure of the linear translatory motion resulting from transient heat conduction. For step-temperature excitation it bears a simple algebraic relation to the thermal diffusivity of the material. This relationship allows the thermal diffusivity of a dielectric solid to be defined as the translatory-motion speed measured at unit distance from the heat source. A novel two-beam interferometric technique is proposed and used to corroborate the proposed concept, with significant advantages. Two new approaches are introduced to estimate the thermal diffusivity of an optically transparent dielectric solid: the first involves measurement of the position-dependent velocity of an isothermal surface, and the second depends on the measurement of the position-dependent instantaneous velocity of normalized moving intensity points.

A new mechanism is proposed and demonstrated to visualize, monitor and optically interrogate the linear translatory motion resulting from transient heat flow due to step-temperature excitation. Two further approaches are introduced. The first is a 'mark' and 'track' approach: a new interaction between a sample supporting unsteady heat flow and its ambient produces an optical mark, and the thermal diffusivity is estimated by tracking this mark. The second involves measurement of the instantaneous velocity of the optical mark for different step temperatures at a fixed location to estimate the thermal diffusivity. A new inverse method is proposed to estimate thermal diffusivity and thermal conductivity from the volumetric specific heat capacity alone through a thought experiment, and a new method is proposed to predict the volumetric specific heat capacity more accurately from the thermal diffusivity.
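The isotherm-tracking route to the diffusivity can be illustrated with the textbook semi-infinite solid under a surface temperature step (the classical erfc solution, not the thesis's new model): an isotherm at normalized level c sits at x(t) = 2·erfcinv(c)·sqrt(alpha·t), so tracking its position and fitting x² against t recovers the thermal diffusivity. A small synthetic sketch, with assumed material values:

```python
# Hedged, synthetic illustration of isotherm tracking for thermal diffusivity.
import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(0)

alpha_true = 1.4e-7                 # m^2/s, assumed value for a glassy dielectric
c = 0.5                             # normalized isotherm level being tracked
t = np.linspace(5.0, 300.0, 60)     # s
# step-temperature solution: theta(x, t) = erfc(x / (2*sqrt(alpha*t))) = c on the isotherm
x_iso = 2.0 * erfcinv(c) * np.sqrt(alpha_true * t)
x_iso += rng.normal(0.0, 1e-5, x_iso.shape)      # simulated tracking noise (m)

# x^2 = 4*erfcinv(c)^2 * alpha * t, so the slope of x^2 vs t gives alpha
slope = np.polyfit(t, x_iso ** 2, 1)[0]
alpha_est = slope / (4.0 * erfcinv(c) ** 2)
print(f"recovered thermal diffusivity: {alpha_est:.3e} m^2/s")
```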
55

Sensor de frente de onda para uso oftalmológico / Wavefront sensor for ophthalmological use

Jesulino Bispo dos Santos 16 April 2004 (has links)
This work describes the steps involved in the development of an aberroscope prototype for ophthalmological use. The instrument directs a low-power light beam onto the fundus of the human eye and samples, by means of the Hartmann method, the wavefronts of the scattered light. From the collected data, the shape of the wavefronts is reconstructed, and any aberrations present in the eye are calculated and represented by means of Zernike polynomials. The foundations of this method, some of its properties, and its limitations are presented. The functional characterization of the developed prototype is also shown, by testing it with optical elements of known properties.
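A minimal sketch of Hartmann-style modal reconstruction: spot displacements give local wavefront slopes (e.g. displacement divided by the lenslet-to-sensor distance), which are fit in the least-squares sense to the gradients of a few low-order, unnormalised Zernike terms (tip, tilt, defocus, two astigmatisms). The mode set, normalisation and pupil geometry are simplifying assumptions, not the prototype's actual processing chain.

```python
# Hedged sketch: least-squares fit of Zernike coefficients from wavefront slopes.
import numpy as np

def fit_zernike_from_slopes(x, y, sx, sy):
    """x, y: sample points on the unit pupil; sx, sy: measured slopes dW/dx, dW/dy.
    Returns coefficients of [tip, tilt, defocus, astig 0deg, astig 45deg] for
    W = a1*x + a2*y + a3*(2*(x^2+y^2)-1) + a4*(x^2-y^2) + a5*(2*x*y)."""
    dZdx = np.column_stack([np.ones_like(x), np.zeros_like(x), 4*x, 2*x, 2*y])
    dZdy = np.column_stack([np.zeros_like(y), np.ones_like(y), 4*y, -2*y, 2*x])
    A = np.vstack([dZdx, dZdy])
    b = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```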
56

Image Structures For Steganalysis And Encryption

Suresh, V 04 1900 (has links) (PDF)
In this work we study two aspects of image security: improper usage and illegal access of images. In the first part we present our results on steganalysis, i.e., protection against improper usage of images. In the second part we present our results on image encryption, i.e., protection against illegal access of images.

Steganography is the collective name for methodologies that allow the creation of invisible, and hence secret, channels for information transfer. Steganalysis, the counter to steganography, is a collection of approaches that attempt to detect and quantify the presence of hidden messages in cover media. First we present our studies on stego-images using features developed for data stream classification, towards making some qualitative assessments about the effect of steganography on the lower-order bit planes (LSBs) of images. These features are effective in classifying different data streams. Using these features, we study the randomness properties of image and stego-image LSB streams and observe that data stream analysis techniques are inadequate for steganalysis purposes. This motivates steganalytic techniques that go beyond the LSB properties. We then present our steganalytic approach, which takes such properties into account. In one such approach, we perform steganalysis from the point of view of quantifying the effect of perturbations caused by mild image processing operations (zoom-in/out, rotation, distortions) on stego-images. We show that this approach works both in detecting and in estimating the presence of stego-content for a particularly difficult steganographic technique known as LSB matching steganography.

Next, we present our results on image encryption techniques. Encryption approaches used in the context of text data are usually unsuited for encrypting images (and multimedia objects) in general. The reasons are: unlike text, the volume to be encrypted could be huge for images, leading to increased computational requirements; and encryption designed for text renders images incompressible, thereby resulting in poor use of bandwidth. These issues are overcome by designing image encryption approaches that obfuscate the image by intelligently re-ordering the pixels, or that encrypt only parts of a given image in an attempt to render it imperceptible. The obfuscated image or the partially encrypted image is still amenable to compression. Efficient image encryption schemes ensure that the obfuscation is not compromised by the inherent correlations present in the image, and that the unencrypted portions of the image do not provide information about the encrypted parts. In this work we present two approaches for efficient image encryption. First, we utilize the correlation-preserving properties of Hilbert space-filling curves to reorder images in such a way that the image is obfuscated perceptually. This process does not compromise the compressibility of the output image. We show experimentally that our approach leads to both perceptual security and perceptual encryption. We then show that the space-filling-curve-based approach also leads to more efficient partial encryption of images, wherein only the salient parts of the image are encrypted, thereby reducing the encryption load. In our second approach, we show that the Singular Value Decomposition (SVD) of images is useful for image encryption by way of mismatching the unitary matrices resulting from the decomposition of images. The images that result from the mismatching operations are seen to be perceptually secure.
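The Hilbert-curve reordering idea above can be sketched as follows: read the pixels of a square, power-of-two-sized image along a Hilbert curve and write them back in raster order. Locality along the curve keeps neighbouring values correlated (hence compressible) while the two-dimensional layout no longer resembles the original. This is only the unkeyed permutation; the keyed and partial-encryption variants studied in the thesis are not reproduced here.

```python
# Hedged sketch: perceptual scrambling by Hilbert-curve pixel reordering.
import numpy as np

def hilbert_d2xy(n, d):
    """Map curve index d to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scramble(img):
    """Read pixels in Hilbert order, rewrite in raster order (invert to descramble)."""
    n = img.shape[0]                     # assumes a square, power-of-two-sized image
    coords = [hilbert_d2xy(n, d) for d in range(n * n)]
    stream = np.array([img[y, x] for x, y in coords])
    return stream.reshape(n, n)
```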
57

Algorithms For Geospatial Analysis Using Multi-Resolution Remote Sensing Data

Uttam Kumar, * 03 1900 (has links) (PDF)
Geospatial analysis involves the application of statistical methods, algorithms and information-retrieval techniques to geospatial data. It incorporates time into spatial databases and facilitates investigation of land cover (LC) dynamics through data, models, and analytics. LC dynamics induced by human and natural processes play a major role in global as well as regional scale patterns, which in turn influence weather and climate. Hence, understanding LC dynamics at the local/regional as well as at global levels is essential to evolve appropriate management strategies to mitigate the impacts of LC changes. These dynamics can be captured through multi-resolution remote sensing (RS) data. However, with the advancements in sensor technologies, suitable algorithms and techniques are required for optimal integration of information from multi-resolution sensors that are cost effective while overcoming possible data and methodological constraints.

In this work, several per-pixel traditional and advanced classification techniques have been evaluated with multi-resolution data, along with the role of ancillary geographical data on the performance of classifiers. Techniques for linear and non-linear un-mixing, endmember variability and determination of the spatial distribution of class components within a pixel have been applied and validated on multi-resolution data. An endmember estimation method is proposed and its performance is compared with manual, semi-automatic and fully automatic methods of endmember extraction. A novel technique, the Hybrid Bayesian Classifier, is developed for per-pixel classification, where the class prior probabilities are determined by un-mixing a low spatial-high spectral resolution multi-spectral data set, while posterior probabilities are determined from ground training data; these are assigned to every pixel of a high spatial-low spectral resolution multi-spectral data set in Bayesian classification. These techniques have been validated with multi-resolution data for various landscapes with varying altitudes. As a case study, spatial metrics and cellular automata based modelling have been applied to a rapidly urbanising landscape of moderate altitude.
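A small sketch of the linear-unmixing step mentioned above: each pixel spectrum is modelled as a non-negative combination of endmember spectra, and per-pixel abundances are recovered. Non-negative least squares followed by renormalisation is a common simplification used here for illustration, not necessarily the thesis's exact solver or constraint handling.

```python
# Hedged sketch: per-pixel linear spectral unmixing with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, spectrum):
    """endmembers: (n_bands, n_endmembers) matrix of endmember spectra;
    spectrum: (n_bands,) observed pixel spectrum.
    Returns abundance fractions (non-negative, renormalised to sum to one)."""
    abundances, _ = nnls(endmembers, spectrum)
    s = abundances.sum()
    return abundances / s if s > 0 else abundances
```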
58

Experimental And Theoretical Studies Towards The Development Of A Direct 3-D Diffuse Optical Tomographic Imaging System

Biswas, Samir Kumar 01 1900 (has links) (PDF)
Diffuse optical tomography (DOT) is a diagnostic imaging modality in which optical parameters such as the absorption coefficient, scattering coefficient and refractive index distributions are recovered to form an image of the internal tissue metabolism. Near-infrared (NIR) light has the potential to be used as a noninvasive means of diagnostic imaging within the human breast. Due to the diffusive nature of light in tissue, computational model-based methods are required for functional imaging. The main goal is to recover the spatial variation of optical properties, which sheds light on the different metabolic states of tissue and tissue-like media. This thesis addresses the quantitative recovery of optical properties of tissue-mimicking phantoms and pork tissue using DOT. The main contribution of the present work is the development of robust, efficient and fast optical property reconstruction algorithms for a direct 3-D DOT imaging system. There are both theoretical and experimental contributions: towards the development of an imaging system and procedures that minimize accurate data collection time and automate the overall system, as well as towards the development of computational algorithms.

In nurturing the idea of imaging using NIR light into a fully developed direct 3-D imaging system, challenges from the theoretical and computational aspects have to be met. The recovery of the optical property distribution in the interior of the object from the often noisy boundary measurements of light is an ill-posed (and nonlinear) problem. This is particularly true when one is interested in a direct 3-D image reconstruction instead of the often employed stacking of 2-D cross-sections obtained from solving a set of 2-D DOT problems. In order to render DOT a useful diagnostic imaging tool, a robust reconstruction procedure giving accurate and reliable parameter recovery is essential in the scenario where the number of unknowns far outnumbers the number of independent data sets that can be gathered (for example, the direct 3-D recovery mentioned earlier). Here, the inversion problem is often solved through iterative methods based on nonlinear optimization, minimizing a data-model misfit function. An interesting development in this direction has been Broyden's and adjoint Broyden's methods, which avoid direct Jacobian computation in each iteration, thereby making full 3-D reconstruction a reality. The conventional model-based iterative image reconstruction (MoBIIR) algorithm uses Newton's method and its variants, which require repeated evaluation of the whole Jacobian; this consumes the bulk of the time in the reconstruction process. Explicit secant- and adjoint-information-based fast 2-D/3-D image reconstruction algorithms without repeated evaluation of the Jacobian are proposed for diffuse optical tomography, where the computational time is decreased many-fold by updating the Jacobian successively through low-rank updates.

An alternative route to the iterative solution is attempted by introducing an artificial dynamics in the system and treating the steady-state response of the artificially evolving dynamical system as the solution. The objective is to consider a novel family of pseudo-dynamical 2-D and 3-D systems whose numerical integration in time provides an asymptotic solution to the inverse problem at hand. We convert the Gauss-Newton equation for updates into a pseudo-dynamical (PD) form by explicitly adding a time-derivative term. As the pseudo-time integration schemes do not need explicit matrix inversion, and depending on the pseudo-time step size, they provide a layer of regularization that in turn helps in superior quality 2-D and 3-D image reconstruction.

A cost-effective, frequency-domain, Matlab-based 2-D/3-D automated imaging system was designed and built. The complete instrumentation (including PC-based control software) has been developed using a single modulated laser source (wavelength 830 nm) and a photomultiplier tube (PMT). The source and detector fibers change their positions dynamically, allowing data to be gathered at multiple source and detector locations. The fiber positions are adjusted on the phantom surface automatically for scanning phantoms of variable size. A heterodyning scheme was used for reading out the measurement with a lock-in amplifier. The Matlab program carries out a sequence of actions: instrument control, data acquisition, data organization, data calibration and image reconstruction. The Gauss-Newton, Broyden, adjoint Broyden and pseudo-time integration algorithms are evaluated using simulated data as well as data from the experimental DOT system. Validation of the system and the reconstruction algorithms was carried out on real tissue, a pork tissue with an embedded fat inhomogeneity. The results were found to match the known parameters closely.
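The secant idea behind the Broyden-type schemes can be sketched as follows: the Jacobian is evaluated once and then refreshed by rank-one updates inside a damped Gauss-Newton iteration, instead of being recomputed at every step. The toy forward model f, the Tikhonov damping and the fixed iteration count are placeholders for illustration, not the DOT forward solver or stopping rule of the thesis.

```python
# Hedged sketch: Gauss-Newton iteration with Broyden rank-one Jacobian updates.
import numpy as np

def broyden_gauss_newton(f, x0, J0, y_meas, lam=1e-3, n_iter=20):
    """f: forward model R^n -> R^m; J0: Jacobian at x0 (computed once);
    y_meas: measured data. Returns the iterate after n_iter updates."""
    x, J = x0.astype(float), J0.astype(float)
    r = f(x) - y_meas
    for _ in range(n_iter):
        # damped Gauss-Newton step using the current (approximate) Jacobian
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x_new = x + dx
        r_new = f(x_new) - y_meas
        # Broyden secant (rank-one) update in place of a fresh Jacobian evaluation
        J += np.outer(r_new - r - J @ dx, dx) / (dx @ dx)
        x, r = x_new, r_new
    return x
```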
59

Studies On Bayesian Approaches To Image Restoration And Super Resolution Image Reconstruction

Chandra Mohan, S 07 1900 (has links) (PDF)
High-quality images and video have become an integral part of our day-to-day life, in areas ranging from science and engineering to medical diagnosis. All these imaging applications call for high-resolution, properly focused and crisp images. However, in real situations obtaining such high-quality images is expensive, and in some cases it is not practical. In imaging systems such as digital cameras, blur and noise degrade the image quality. The recorded images look blurred and noisy and are unable to resolve the finer details of the scene, which is clearly noticeable under zoomed conditions. Post-processing techniques based on computational methods extract the hidden information and thereby improve the quality of the captured images.

The study in this thesis focuses on the deconvolution, and eventually the blind deconvolution, of a single frame captured under low-light imaging conditions arising in digital photography and surveillance imaging applications. Our intention is to restore a sharp image from its blurred and noisy observation, when the blur is completely known or unknown; such inverse problems are ill-posed or twice ill-posed, respectively. This thesis consists of two major parts. The first part addresses the deconvolution/blind deconvolution problem using a Bayesian approach with a fuzzy-logic-based gradient potential as the prior functional.

In comparison with analog cameras, artifacts are visible in digital cameras when the images are enlarged, and there is a demand to enhance the resolution. The increased resolution can be in the spatial dimension, the temporal dimension, or both. Super-resolution reconstruction methods reconstruct images/video containing spectral information beyond that available in the captured low-resolution images. The second part of the thesis addresses resolution enhancement of observed monochromatic/color images using multiple frames of the same scene. This reconstruction problem is formulated in the Bayesian domain with the aim of reducing blur, noise and aliasing while increasing the spatial resolution. The image is modeled as a Markov random field, and a fuzzy-logic-filter-based gradient potential is used to differentiate between edge and noisy pixels. Suitable priors are adaptively applied to obtain artifact-free/reduced images.

In this work, all our approaches are experimentally validated using standard test images. Matlab-based programming tools are used for carrying out the validation. The performance of the approaches is qualitatively compared with the results of recently proposed methods. Our results turn out to be visually pleasing and quantitatively competitive.
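The structure of the MAP iteration for the known-blur case can be sketched as gradient descent on a data-fidelity term plus a smoothness prior. The thesis uses an MRF prior with a fuzzy-logic-based gradient potential; a plain quadratic (Tikhonov-style) prior is substituted here only to show the shape of the update, and the step size and regularisation weight are illustrative values assuming a normalised PSF.

```python
# Hedged sketch: MAP deconvolution with a known PSF and a quadratic smoothness prior.
import numpy as np
from scipy.ndimage import convolve, correlate, laplace

def map_deconvolve(blurred, psf, lam=0.01, step=0.5, n_iter=100):
    """Minimise 0.5*||h*x - y||^2 + 0.5*lam*||grad x||^2 by gradient descent."""
    x = blurred.astype(float).copy()
    for _ in range(n_iter):
        residual = convolve(x, psf, mode='reflect') - blurred
        data_grad = correlate(residual, psf, mode='reflect')   # adjoint of the blur
        prior_grad = -laplace(x)            # gradient of the smoothness term
        x -= step * (data_grad + lam * prior_grad)
    return x
```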
60

Image Reconstruction Based On Hilbert And Hybrid Filtered Algorithms With Inverse Distance Weight And No Backprojection Weight

Narasimhadhan, A V 08 1900 (has links) (PDF)
Filtered backprojection (FBP) reconstruction algorithms are very popular in the field of X-ray computed tomography (CT) because they offer advantages in terms of numerical accuracy and computational complexity. Ramp-filter-based fan-beam FBP reconstruction algorithms have a position-dependent weight in the backprojection, which is responsible for a spatially non-uniform distribution of noise and resolution, and for artifacts. Many algorithms based on shift-variant filtering or spatially-invariant interpolation in the backprojection step have been developed to deal with this issue; however, these algorithms are computationally demanding. Recently, fan-beam algorithms based on Hilbert filtering with an inverse distance weight, or with no weight, in the backprojection have been derived using Hamaker's relation. These fan-beam reconstruction algorithms have been shown to improve uniformity of noise and resolution.

In this thesis, fan-beam FBP reconstruction algorithms with an inverse-distance backprojection weight and with no backprojection weight for 2-D image reconstruction are presented and discussed for the two fan-beam scan geometries: equi-angular and equi-spaced detector arrays. Based on these fan-beam reconstruction algorithms, new 3-D cone-beam FDK reconstruction algorithms with circular and helical scan trajectories for curved and planar detector geometries are proposed. To start with, three rebinning formulae from the literature are presented, and it is shown that all fan-beam FBP reconstruction algorithms can be derived from these rebinning formulae. Specifically, two fan-beam algorithms with no backprojection weight based on Hilbert filtering for an equi-spaced linear detector array, and one new fan-beam algorithm with an inverse-distance backprojection weight based on hybrid filtering for both equi-angular and equi-spaced linear detector arrays, are derived. Simulation results for these algorithms, in terms of uniformity of noise and resolution in comparison to the standard (ramp-filter-based) fan-beam FBP reconstruction algorithm, are presented. It is shown through simulation that the fan-beam reconstruction algorithm with inverse distance in the backprojection gives better noise performance while retaining the resolution properties. A comparison between the above-mentioned reconstruction algorithms is given in terms of computational complexity.

State-of-the-art 3-D X-ray imaging systems in medicine with cone-beam (CB) circular and helical computed tomography scanners use non-exact (approximate) FBP-based reconstruction algorithms. They are attractive because of their simplicity and low computational cost. However, they produce sub-optimal reconstructed images with respect to cone-beam artifacts, noise and, in the case of circular-trajectory scan imaging, axial intensity drop. The axial intensity drop in the reconstructed image is due to the insufficient data acquired by a circular-scan-trajectory CB CT. This thesis investigates improving the image quality by means of Hilbert- and hybrid-filtering-based algorithms using redundant data for Feldkamp-Davis-Kress (FDK) type reconstruction algorithms.

New FDK-type reconstruction algorithms for cylindrical and planar detectors in CB circular CT are developed, obtained by extending to three dimensions (3-D) an exact Hilbert-filtering-based FBP algorithm for 2-D fan-beam reconstruction with no position-dependent backprojection weight, together with the fan-beam algorithm with an inverse-distance backprojection weight. The proposed FDK reconstruction algorithm with an inverse-distance weight in the backprojection requires full-scan projection data, while the FDK reconstruction algorithm with no backprojection weight can handle partial-scan data, including very short scans. The FDK reconstruction algorithms with no backprojection weight for circular CB CT are compared with Hu's, FDK and T-FDK reconstruction algorithms in terms of axial intensity drop and computational complexity. Simulation results on noise, CB artifact performance and execution time, as well as partial-scan reconstruction ability, are presented. We show that FDK reconstruction algorithms with no backprojection weight have better noise performance characteristics than the conventional FDK reconstruction algorithm, where the backprojection weight is known to result in spatial non-uniformity of the noise characteristics.

An efficient method to reduce the axial intensity drop in circular CB CT is also presented. It consists of two steps: first, reconstruction of the object using the FDK reconstruction algorithm with no backprojection weight; and second, estimation of the missing term. The method is comparable to Zhu et al.'s method in terms of reduction in axial intensity drop, noise and computational complexity. The helical scanning trajectory satisfies the Tuy-Smith condition, hence an exact and stable reconstruction is possible. However, the helical FDK reconstruction algorithm is approximate in its derivation and is therefore responsible for cone-beam artifacts. Helical FDK reconstruction algorithms based on Hilbert filtering with no backprojection weight, and an FDK reconstruction algorithm based on hybrid filtering with an inverse-distance backprojection weight, are presented to reduce the CB artifacts. These algorithms are compared with the standard helical FDK algorithm in terms of noise, CB artifacts and computational complexity.
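The filtering and weighting choices discussed above are easier to place against the basic FBP skeleton. Below is a hedged, parallel-beam-only sketch: a rotate-and-sum projector, a frequency-domain ramp filter, and an unweighted backprojection (a reconstruction would be obtained as fbp(forward_project(phantom, angles), angles) for a square phantom). It is not the fan-beam, Hilbert/hybrid-filtered or FDK algorithm of the thesis, where the choice of filter and of backprojection weight (inverse distance vs. none) is precisely what is varied, and the overall grey-level scale here is only approximate.

```python
# Hedged sketch: parallel-beam filtered backprojection with a ramp filter.
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, angles_deg):
    """Parallel-beam sinogram of a square image: rotate, then sum along rows."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg], axis=1)        # shape (n_det, n_angles)

def fbp(sino, angles_deg):
    n_det, n_ang = sino.shape
    # ramp filter applied to each projection in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=0) * ramp[:, None], axis=0))
    recon = np.zeros((n_det, n_det))
    for k, a in enumerate(angles_deg):
        # smear the filtered projection across the image and rotate back;
        # the parallel-beam backprojection carries no position-dependent weight
        smear = np.tile(filtered[:, k], (n_det, 1))
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon * np.pi / (2 * n_ang)                    # approximate overall scale
```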
