281

Parallel image reconstruction systems for magnetic induction tomography

Yasheng, Maimaitijiang January 2009
Magnetic Induction Tomography (MIT) is a contactless and non-invasive method for imaging the passive electrical properties of objects. An MIT system employs an array of excitation coils to induce eddy currents within an object and then uses detection coils to measure the resulting magnetic field perturbations. An image of the conductivity distribution within the object is reconstructed by iteratively solving a non-linear inverse problem. As a relatively new imaging technique, MIT faces several challenges, both in the formulation of its algorithms and in the computational intensity required for industrial and medical applications: real-life models are often necessarily large and complex, and the computation is consequently highly time consuming. For instance, in stroke detection and monitoring, one of the potential medical applications of MIT, it has been a major challenge to develop realistic computational brain models that allow the reconstruction of high-resolution images within practical time frames. Parallel implementation is an obvious solution to such computational challenges. The development of a fast and efficient 3D image reconstruction solver and its implementation on parallel platforms are therefore very important, as they will lead to considerable improvements in this field and allow the use of MIT in both medical and industrial applications. This thesis investigates potential hardware architectures, efficient parallel algorithms and optimisation methods for MIT. In this study, an efficient 3D iterative image reconstruction algorithm was developed using the reciprocity method and was shown to provide better absolute conductivity images of a sample than existing work. Significant improvements in computation time were achieved by parallel implementations of both the forward and inverse parts of the image reconstruction algorithm. These implementations were developed, tested and compared across many hardware platforms using various parallelisation approaches. The progress made in this study will hasten the future development of MIT as a real-life, low-cost imaging modality with many potential applications in the medical and industrial arenas.
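Iterative solvers of this kind typically take regularised Gauss-Newton-style update steps on the linearised inverse problem. The following is a rough illustration only, not the thesis's method: the reciprocity-based Jacobian computation and the parallel decomposition are not reproduced, and all names and dimensions are hypothetical.

```python
import numpy as np

def gauss_newton_step(sigma, J, v_meas, v_model, alpha):
    """One Tikhonov-regularised Gauss-Newton update for the
    conductivity image sigma (flattened voxel vector).

    J        : Jacobian (sensitivity) matrix, shape (n_meas, n_voxels)
    v_meas   : measured detector voltages
    v_model  : voltages predicted by the forward model at sigma
    alpha    : Tikhonov regularisation parameter (assumed given)
    """
    residual = v_meas - v_model
    # Normal equations of the regularised linearised problem:
    # (J^T J + alpha I) d_sigma = J^T residual
    H = J.T @ J + alpha * np.eye(J.shape[1])
    d_sigma = np.linalg.solve(H, J.T @ residual)
    return sigma + d_sigma

# Hypothetical toy dimensions: 64 measurements, 100 voxels.
rng = np.random.default_rng(0)
J = rng.standard_normal((64, 100))
sigma = np.zeros(100)
sigma_true = rng.standard_normal(100)
v_meas = J @ sigma_true          # stand-in for real measurements
v_model = J @ sigma              # stand-in for the forward solver
sigma = gauss_newton_step(sigma, J, v_meas, v_model, alpha=1e-2)
```

In a parallel implementation the expensive pieces are the forward solves and the assembly of J, which is where the hardware platforms compared in the thesis come into play.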
282

Nonlinear smoothers for digital image processing

Cloete, Eric January 1997
Thesis (DTech (Business Informatics)), Cape Technikon, Cape Town, 1997. Modern applications in computer graphics and telecommunications demand the implementation of high-performance filtering and smoothing. The recent development of a new class of max-min selectors for digital image processing is investigated, with special emphasis on the practical implications for hardware and software design.
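Max-min selectors build smoothers from sliding-window minimum and maximum operations. The sketch below shows one generic member of this family, an opening followed by a closing on a 1D signal; the window size and the particular composition are illustrative assumptions, not the specific operators studied in the thesis.

```python
import numpy as np

def erode(x, w):
    """Sliding-window minimum (window of 2*w+1 samples, edge-padded)."""
    p = np.pad(x, w, mode="edge")
    return np.min([p[i:i + len(x)] for i in range(2 * w + 1)], axis=0)

def dilate(x, w):
    """Sliding-window maximum."""
    p = np.pad(x, w, mode="edge")
    return np.max([p[i:i + len(x)] for i in range(2 * w + 1)], axis=0)

def max_min_smooth(x, w=1):
    """Opening (min-max) followed by closing (max-min):
    removes impulsive noise while preserving step edges."""
    opened = dilate(erode(x, w), w)
    return erode(dilate(opened, w), w)

noisy = np.array([0, 0, 9, 0, 1, 1, 1, 8, 8, 8, 0, 8, 8], dtype=float)
print(max_min_smooth(noisy, w=1))
```

Selectors of this kind need only comparisons, which is what makes them attractive for the hardware implementations the thesis is concerned with.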
283

The selection and evaluation of grey-level thresholds applied to digital images

Brink, Anton David January 1988
Many applications of image processing require the initial segmentation of the image by means of grey-level thresholding. In this thesis, the problems of automatic threshold selection and evaluation are addressed in order to find a universally applicable thresholding method. Three previously proposed threshold selection techniques are investigated, and two new methods are introduced. The results of applying these methods to several different images are evaluated using two threshold evaluation techniques, one subjective and one quantitative. It is found that no threshold selection technique is universally acceptable, as different methods work best with different images and applications.
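The abstract does not name the particular selection techniques compared, so as an illustrative stand-in, here is a minimal sketch of one widely used automatic method of this kind, Otsu's between-class variance criterion:

```python
import numpy as np

def otsu_threshold(image):
    """Select a grey-level threshold by maximising the
    between-class variance of the two resulting classes."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                 # grey-level probabilities
    omega = np.cumsum(p)                  # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# Synthetic bimodal image: dark object on a bright background.
rng = np.random.default_rng(1)
image = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 10, 500)])
image = np.clip(image, 0, 255)
t = otsu_threshold(image)
segmented = image > t
```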
284

Image-based face recognition under varying pose and illumination conditions

Du, Shan 05 1900
Image-based face recognition has found wide application over the past decades in commerce and law enforcement, for example in mug-shot database matching, identity authentication, and access control. Existing face recognition techniques (e.g., Eigenface, Fisherface, and Elastic Bunch Graph Matching), however, do not perform well in a case that inevitably arises in practice: owing to variations in imaging conditions such as pose and illumination changes, face images of the same person often have different appearances. These variations make face recognition much more challenging. With this concern in mind, the objective of my research is to develop face recognition techniques that are robust to such variations. This thesis addresses the two main variation problems in face recognition: pose and illumination. To improve the performance of face recognition systems, the following methods are proposed: (1) a face feature extraction and representation method using non-uniformly selected Gabor convolution features, (2) an illumination normalization method using adaptive region-based image enhancement for face recognition under variable illumination conditions, (3) an eye detection method for gray-scale face images under various illumination conditions, and (4) a virtual pose generation method for pose-invariant face recognition. The details of these proposed methods are explained in this thesis. In addition, a comprehensive survey of existing face recognition methods is conducted and future research directions are pointed out. (Faculty of Applied Science, Department of Electrical and Computer Engineering)
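Eigenface, named above as one of the existing baselines, represents faces by their coefficients in a principal-component basis learned from training images. A minimal sketch of that classical baseline (not the Gabor-feature method the thesis proposes; the data and dimensions below are made up):

```python
import numpy as np

def eigenfaces(X, k):
    """X: training faces, one flattened image per row.
    Returns the mean face and the top-k eigenfaces."""
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, basis):
    """Feature vector: coefficients of x in the eigenface basis."""
    return basis @ (x - mean)

rng = np.random.default_rng(2)
train = rng.random((40, 32 * 32))        # 40 hypothetical 32x32 faces
mean, basis = eigenfaces(train, k=10)
probe = rng.random(32 * 32)
features = project(probe, mean, basis)
# Recognition: nearest neighbour among the projected gallery faces.
gallery = np.array([project(f, mean, basis) for f in train])
match = np.argmin(np.linalg.norm(gallery - features, axis=1))
```

The sensitivity of such appearance-based projections to pose and lighting changes is precisely the weakness the thesis's normalization and virtual-pose methods target.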
285

Efficient reconstruction of 2D images and 3D surfaces

Huang, Hui 05 1900
The goal of this thesis is to gain a deep understanding of inverse problems arising from 2D image and 3D surface reconstruction, and to design effective techniques for solving them. Both computational and theoretical issues are studied and efficient numerical algorithms are proposed. The first part of this thesis is concerned with the recovery of 2D images, e.g., de-noising and de-blurring. We first consider implicit methods that involve solving linear systems at each iteration. An adaptive Huber regularization functional is used to select the most reasonable model, and a global convergence result for lagged diffusivity is proved. Two mechanisms, multilevel continuation and multigrid preconditioning, are proposed to improve efficiency for large-scale problems. Next, explicit methods involving the construction of an artificial time-dependent differential equation model followed by forward Euler discretization are analyzed. A rapid, adaptive scheme is then proposed, and additional hybrid algorithms are designed to improve the quality of such processes. We also devise methods for more challenging cases, such as recapturing texture from a noisy input and de-blurring an image in the presence of significant noise. It is well known that extending image processing methods to 3D triangular surface meshes is far from trivial or automatic. In the second part of this thesis we discuss techniques for faithfully reconstructing such surface models with different features. Some models contain many small yet visually meaningful details, and typically require very fine meshes to represent them well; others consist of large flat regions, long sharp edges (creases) and distinct corners, and the meshes required for their representation can often be much coarser. All of these models may be sampled very irregularly. For models of the first class, we methodically develop a fast multiscale anisotropic Laplacian (MSAL) smoothing algorithm. To reconstruct a piecewise smooth CAD-like model in the second class, we design an efficient hybrid algorithm based on specific vertex classification, which combines K-means clustering and a priori geometric information. Hence, we have a set of algorithms that efficiently handle smoothing and regularization of meshes large and small in a variety of situations. (Faculty of Science, Department of Mathematics)
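The explicit approach mentioned above evolves an artificial time-dependent PDE and discretises it in time with forward Euler. A minimal sketch of the plain isotropic case on a 2D image (the adaptive, Huber-regularised and multilevel variants developed in the thesis are not reproduced here):

```python
import numpy as np

def diffuse(u, steps=50, dt=0.2):
    """Forward-Euler time stepping of u_t = laplacian(u), i.e.
    heat-equation smoothing of the float 2D image u.
    dt <= 0.25 is required for stability of the 5-point stencil;
    np.roll gives periodic boundaries, used here for brevity."""
    u = u.copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap
    return u

rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = diffuse(noisy)
```

Isotropic diffusion blurs edges along with noise, which is why the thesis moves to adaptive and regularised variants.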
286

HandsFree: a marker-free visual based input prototype for menu driven systems

Visser, Willem 10 March 2010
M.Ing. dissertation. This dissertation proposes a marker-free, vision-based interface device for use with menu-driven systems. The system, called HandsFree, uses the Graphics Processing Unit (GPU) together with shader technology to perform the image processing. HandsFree uses a web camera to capture user input without requiring elementary computer skills. Background subtraction was used to extract user input from the images, and the problems usually encountered with background subtraction were overcome with an averaging technique. Test results proved HandsFree to be robust against differently coloured backgrounds and skin tones, different lighting intensities, and sudden changes in lighting intensity.
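Background subtraction with a running average, as described above, can be sketched as follows; the learning rate and threshold are illustrative values, not those used by HandsFree (which performs this work on the GPU via shaders):

```python
import numpy as np

class BackgroundSubtractor:
    """Running-average background model: each new frame updates the
    background slowly, so gradual lighting drift is absorbed while a
    fast-moving hand still stands out as foreground."""

    def __init__(self, first_frame, alpha=0.05, threshold=30.0):
        self.bg = first_frame.astype(float)
        self.alpha = alpha          # averaging (learning) rate
        self.threshold = threshold  # foreground decision level

    def apply(self, frame):
        frame = frame.astype(float)
        mask = np.abs(frame - self.bg) > self.threshold
        # Blend the current frame into the background average.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return mask

rng = np.random.default_rng(4)
frames = rng.integers(0, 30, size=(10, 48, 64)).astype(float)
frames[5:, 20:30, 30:40] += 120          # a "hand" enters the scene
sub = BackgroundSubtractor(frames[0])
masks = [sub.apply(f) for f in frames[1:]]
```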
287

Towards in vitro MRI based analysis of spinal cord injury

Ming, Kevin 11 1900
A novel approach for the analysis of spinal cord deformation, based on a combined technique of non-invasive imaging and medical image processing, is presented. As opposed to traditional approaches, in which animal spinal cords are exposed and directly subjected to mechanical impact in order to be examined, this approach can be used to quantify deformities of the spinal cord in vivo, so that deformations of the spinal cord, specifically those of myelopathy-related sustained compression, can be computed in its original physiological environment. This allows for a more accurate understanding of spinal cord deformations and injuries. Images of rat spinal cord deformations, acquired using magnetic resonance imaging (MRI), were analyzed using a combination of image processing methods, including image segmentation, a versor-based rigid registration technique, and a B-spline-based non-rigid registration technique. To verify the validity and assess the accuracy of this approach, several validation schemes were implemented to compare the deformation fields computed by the proposed algorithm against known deformation fields. First, validation was performed on synthetically generated spinal cord model data warped using synthetic deformations; error levels were consistently below 6% with respect to cord width, even for large deformations of up to half of the dorsal-ventral width of the cord (50% deflection). Then, accuracy was established using in vivo rat spinal cord images warped using the same synthetic deformations; error levels were again consistently below 6% with respect to cord width, in this case for large deformations of up to the entire dorsal-ventral width of the cord (100% deflection). Finally, accuracy was assessed using data from the Visible Human Project (VHP) warped using simulated deformations obtained from finite element (FE) analysis of the spinal cord; error levels were as low as 3.9% with respect to cord width. This in vivo, non-invasive, semi-automated analysis tool provides a new framework through which the causes, mechanisms, and tolerance parameters of myelopathy-related sustained spinal cord compression, as well as the measures used in neuroprotection and regeneration of spinal cord tissue, can be prospectively derived in a manner that ensures the bio-fidelity of the cord. (Faculty of Applied Science, Department of Electrical and Computer Engineering)
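The validation described above reduces to comparing a computed deformation field against a known one and expressing the error as a percentage of cord width. A minimal sketch of such a comparison (the exact error definition is not given in the abstract, so a mean end-point error is assumed here, and all numbers are synthetic):

```python
import numpy as np

def deformation_error_pct(computed, known, cord_width):
    """Mean end-point error between two deformation fields,
    reported as a percentage of the dorsal-ventral cord width.

    computed, known : arrays of shape (n_points, 3), displacement
                      vectors at corresponding sample points
    cord_width      : cord width in the same units (e.g. mm)
    """
    epe = np.linalg.norm(computed - known, axis=1)   # per-point error
    return 100.0 * epe.mean() / cord_width

rng = np.random.default_rng(5)
known = rng.standard_normal((500, 3))
computed = known + 0.05 * rng.standard_normal((500, 3))
print(deformation_error_pct(computed, known, cord_width=3.0))
```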
288

On the recovery of images from partial information using ∇²G filtering

Reimer, James Allen January 1987
This thesis considers the recovery of a sampled image from partial information, based on the 'edges' or zero crossings found in ∇²G filtered versions of the image. A scheme is presented for separating an image into a family of multiresolution images, using low pass filtering, subsampling, and ∇²G filtering. A scheme is also presented for merging this family of ∇²G filtered images to rebuild the original. The recovery of each of the ∇²G filtered images from their 'edges' or zero crossings is then considered. It has been suggested that ∇²G filtered images might be characterized by their zero crossing locations. It is shown that ∇²G filtered images, filtered in 1-D or 2-D, are not in general uniquely determined, to within a scalar, by their zero crossing locations. Two theorems in support of such a suggestion are considered. The differences between the constraints of Logan's theorem and ∇²G filtering are considered, and it is shown that the zero crossings which result from these two situations differ significantly in number and location. Logan's theorem is therefore not applicable to ∇²G filtered images. A recent theorem by Curtis on the adequacy of zero crossings of 2-D functions is also considered. It is shown that the requirements of Curtis' theorem are not satisfied by all ∇²G filtered images. Further, it is shown that it is very difficult to establish whether an image meets the requirements of Curtis' theorem. Examples of different ∇²G filtered images with the same zero crossings are also presented. While not all ∇²G filtered images are uniquely characterized by their zero crossing locations, the practical recovery of real camera images from this partial information is considered. An iterative scheme is developed for the reconstruction of a ∇²G filtered image from its sampled zero crossings. The zero crossing samples are localized to the original image sample grid. Experimental results are presented which show that the recovered images, while retaining many of the features of the original, suffer significant loss. It is shown that, in general, the full recovery of these images in a practical situation is not possible from this partial information. From this experimental experience, it is proposed that ∇²G filtered images might be practically recovered from their zero crossings, given some additional characterization of the image in the vicinity of each zero crossing point. A simple, non-iterative scheme is developed for extracting such a characterization, through the use of an image edge model and a local estimation of a contrast figure in the vicinity of each zero crossing sample. A redrawing algorithm is then used to recover an approximation of the ∇²G filtered image from its zero crossing locations and the extracted characterizations. This system is evaluated using natural scene and synthetic images. Resulting image quality is good, but is shown to vary depending on the nature of the image. The advantages and disadvantages of this technique are discussed. The primary shortcoming of the implemented local estimation technique is an assumption of edge independence. A second approach is developed for characterizing the ∇²G filtered image zero crossings, which eliminates this assumption. This method is based on 2-D filtering, and provides a new technique for the recovery of a ∇²G filtered image from its sampled zero crossings. The method does not involve iteration or the solution of simultaneous equations. Good image reconstruction is shown for natural scene images, with the ∇²G filtered image zero crossings localized only to the original image sample grid. The advantages and disadvantages of this technique are discussed. The application of this recovery-from-partial-information technique is then considered for image compression. A simple coding scheme is developed for representing the zero crossing segments with linear vector segments. A comparative study is then presented, examining the tradeoffs between compression tuning parameters and the resulting recovered image quality. (Faculty of Applied Science, Department of Electrical and Computer Engineering)
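The starting point throughout, ∇²G (Laplacian-of-Gaussian) filtering followed by zero-crossing detection on the sample grid, can be sketched as follows; the filter scale and the simple sign-change test are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_zero_crossings(image, sigma=2.0):
    """Filter with the Laplacian of Gaussian, then mark pixels where
    the filtered image changes sign against a horizontal or vertical
    neighbour (a simple zero-crossing test on the sample grid)."""
    f = gaussian_laplace(image.astype(float), sigma)
    sign = f > 0
    zc = np.zeros_like(sign)
    zc[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign change
    zc[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign change
    return f, zc

image = np.zeros((64, 64)); image[:, 32:] = 1.0   # a single step edge
filtered, crossings = log_zero_crossings(image)
```

The thesis's question is the converse and much harder one: how much of `filtered` can be rebuilt from `crossings` alone, and what extra local characterization makes the reconstruction practical.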
289

A nonlinear switched-capacitor network for edge detection in early vision

Barman, Roderick A. January 1990
A nonlinear switched-capacitor (SC) network for solving the early-vision variational problem of edge detection has been designed and constructed using standard SC techniques and a novel nonlinear, externally controlled SC resistive element. This new SC element allows, to a limited extent, the form of the variational problem to be "programmable", which in turn allows nonconvex variational problems to be solved by the network using continuation-type methods. Appropriately designed SC networks are guaranteed to converge to a locally stable steady state. SC networks also offer increased accuracy over analog networks composed of nonlinear resistances built from multiple MOSFETs. The operation of the network was analyzed and found to be equivalent to the gradient-descent minimization algorithm of numerical analysis. The network's capabilities were demonstrated by "programming" it to perform the graduated non-convexity (GNC) algorithm. A high-level functional network simulation was used to verify the correct operation of the GNC algorithm. A one-dimensional, six-node CMOS VLSI test chip was designed, simulated and submitted for fabrication. (Faculty of Applied Science, Department of Electrical and Computer Engineering)
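In software terms, the network performs gradient descent on an edge-detection energy, with GNC supplying a continuation from a nearly convex problem to the non-convex target. A rough 1D analogue (not the SC circuit itself; the truncated-quadratic penalty and the continuation schedule are illustrative assumptions):

```python
import numpy as np

def gnc_edge_smooth(d, lam=1.0, steps=200, rate=0.1):
    """Gradient descent on E(u) = sum (u-d)^2 + lam * sum g(du),
    where g is a truncated quadratic on neighbour differences du.
    The truncation level T is lowered gradually (continuation),
    moving from a nearly convex problem to the non-convex target."""
    u = d.copy()
    for T in [8.0, 4.0, 2.0, 1.0, 0.5]:      # continuation schedule
        for _ in range(steps):
            du = np.diff(u)
            # Gradient of the truncated quadratic: quadratic part only
            # while |du| < T; zero once a difference counts as an edge.
            g = np.where(np.abs(du) < T, 2 * du, 0.0)
            grad = 2 * (u - d)
            grad[:-1] -= lam * g
            grad[1:] += lam * g
            u -= rate * grad
    return u

rng = np.random.default_rng(7)
signal = np.concatenate([np.zeros(50), np.ones(50) * 5.0])
noisy = signal + 0.3 * rng.standard_normal(100)
smoothed = gnc_edge_smooth(noisy)   # the step at index 50 survives
```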
290

Towards automated, precise and validated vectorisation of disparity maps in urban satellite stereoscopy

Bughin, Eric 26 October 2011
This thesis deals with the piecewise-affine segmentation of disparity maps and range images obtained by stereoscopy in urban environments. The detection proceeds in three steps. First, an a contrario statistical model is introduced to determine automatically the values of several parameters common to this type of problem: the threshold for validating a group of points as a plane, and the rejection threshold for outliers. This model also allows several candidate solutions to be compared. Second, a greedy algorithm is proposed to obtain the piecewise-planar segmentation, including in the presence of sparse disparity maps; this algorithm is based solely on the 3D information provided by the disparity map. Finally, a last step refines the segmentation, in particular in the regions where the disparities are unknown or possibly wrong, by combining the 3D information with the images of the stereo pair.
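The basic primitive of the greedy step, fitting an affine (planar) model to the 3D points of a candidate region and separating inliers from outliers, can be sketched as below; the fixed inlier tolerance stands in for the a contrario thresholds the thesis actually derives:

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares affine model z = a*x + b*y + c for a candidate
    region of a disparity map; returns coefficients and residuals."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return coeffs, residuals

rng = np.random.default_rng(8)
x, y = rng.random(200), rng.random(200)
z = 0.5 * x - 0.2 * y + 1.0 + 0.01 * rng.standard_normal(200)
z[:20] += 0.5                     # simulated outliers (wrong matches)
coeffs, res = fit_plane(x, y, z)
inliers = np.abs(res) < 0.05      # fixed tolerance; the thesis derives
                                  # this threshold a contrario instead
print(coeffs, inliers.sum())
```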
