351

Multiscale analysis for off-line handwriting recognition

Sharma, Sanjeer January 2001
The aim of this thesis is to investigate how 'multiscale analysis' can help to solve some of the problems associated with achieving reliable automatic off-line handwriting recognition based on feature extraction and model matching. The thesis concentrates on recognising off-line handwriting, in which no explicit dynamic information about the act of writing is present. Image curvature has emerged as an important feature for describing and recognising shapes. However, it is highly susceptible to noise, requiring smoothing of the data. In many systems, smoothing is performed at a pre-determined fixed scale. A key feature of this work is that multiscale analysis is performed by applying Gaussian smoothing over a 'range' of octave-separated scales. This process not only eliminates noise and unwanted detail, but also highlights and quantifies those features stable over a 'range' of scales. Curvature features are extracted by evaluating the 1st and 2nd order derivative values for the Gaussian kernels, and a method is proposed for automatically selecting those scales of significance at which to perform optimum matching. A set of describing elements (features) is defined and combined into a representation known as "codons" for matching. Handwritten characters are recognised in terms of their constituent codons, following the process of multiscale analysis. This is done by extracting codons from a range of octave-separated scales, and matching the codons at scales of significance against a database of model codons created for the different types of handwritten characters. Other approaches for matching are reviewed and contrasted, including the use of artificial neural networks. The main contribution of this thesis is the investigation into applying multiscale analysis to ascertain the most appropriate scale(s) at which to perform matching, by removing noise and extracting features that are significant over a range of scales. Importantly, this is performed without having to pre-determine the amount of smoothing required, thereby avoiding arbitrary thresholds for the amount of smoothing performed. The proposed method shows great potential as a robust approach for recognising handwriting.
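To make the smoothing-and-curvature step concrete, here is a minimal sketch, assuming a closed contour given as x/y coordinate arrays, of curvature computed from 1st and 2nd order Gaussian derivatives at octave-separated scales using the standard planar-curve formula kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2). Function and parameter names are illustrative, not the thesis's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_curvature(x, y, base_sigma=1.0, n_octaves=4):
    """Curvature of a closed contour at octave-separated Gaussian scales
    sigma, 2*sigma, 4*sigma, ... Returns a dict mapping scale to the
    curvature signal at that scale."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    curvatures = {}
    for k in range(n_octaves):
        sigma = base_sigma * 2.0 ** k  # octave-separated scales
        # 1st and 2nd order Gaussian derivatives ('wrap' closes the contour).
        dx = gaussian_filter1d(x, sigma, order=1, mode='wrap')
        dy = gaussian_filter1d(y, sigma, order=1, mode='wrap')
        ddx = gaussian_filter1d(x, sigma, order=2, mode='wrap')
        ddy = gaussian_filter1d(y, sigma, order=2, mode='wrap')
        # Planar-curve curvature: kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5
        curvatures[sigma] = (dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
    return curvatures
```

Curvature extrema that persist across several of these scales would be natural candidates for the 'scales of significance' at which matching is performed.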
352

Parallel machine vision for the inspection of surface mount electronic assemblies

Netherwood, Paul January 1993
The aim of this thesis is to analyse and evaluate some of the problems associated with developing a parallel machine vision system applied to the problem of inspecting surface mount electronic assemblies. In particular it analyses the problems associated with 2-D feature and shape extraction. Surface Mount Technology is increasingly being used for manufacturing electronic circuit boards because its light weight and compactness allow the use of high pin-count packages and greater component density. However, with this come significant problems regarding inspection, especially the inspection of solder joints. Existing inspection systems are either prohibitively expensive for most manufacturers and/or have limited functionality. Consequently a low-cost architecture for automated inspection is proposed, consisting of sophisticated machine vision software, running on a fast computing platform, that captures images from a simple optical system. This thesis addresses a specific part of this overall architecture, namely the machine vision software required for 2-D feature and shape extraction. Six stages are identified in 2-D feature and shape extraction: Canny Edge Detection, Hysteresis Thresholding, Linking, Dropout Correction, Shape Description and Shape Abstraction. To evaluate the performance of each stage, each is fully implemented and tested on examples of synthetic data and real data from the inspection problem. After Canny Edge Detection, significant edge points are isolated using Hysteresis Thresholding, which determines which edge points are important based on thresholds and connectivity. Edge points on their own do not describe the boundary of an object, so a linking algorithm is developed in this thesis which groups edge points to describe the outline of a shape. A process of dropout correction is developed to overcome the problem of edge points missing after the Canny and hysteresis stages. Connected edges are converted to a more abstract form which facilitates recognition. Shape abstraction is required to remove minor details on a boundary, without removing significant points of interest, in order to extract the underlying shape. Finally these stages are integrated into a demonstrator system. 2-D feature and shape extraction is computationally expensive, so a parallel processing system based on a network of transputers is used. Transputers can provide the necessary computational power at a relatively low cost. The 2-D feature and shape extraction software is then required to run in parallel, so a distributed form of shape extraction is proposed. This minimises communication overheads and maximises processor usage, which increases execution speed. For this, a generic method for routing data around a transputer network, called Spatial Routing, is proposed.
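As an illustration of the hysteresis stage, the sketch below keeps edge pixels above a high threshold together with any weaker pixels connected to them, which is the standard thresholds-plus-connectivity rule the abstract describes. It is a plain Python/NumPy sketch, not the thesis's transputer implementation.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(mag, low, high):
    """Keep edge pixels whose gradient magnitude exceeds `high`, plus any
    pixels above `low` that are 8-connected to one of them."""
    weak = mag > low
    strong = mag > high
    # Label 8-connected components of the weak-edge mask ...
    labels, n = ndimage.label(weak, structure=np.ones((3, 3), dtype=bool))
    # ... and retain only components containing at least one strong pixel.
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False  # label 0 is background
    return keep[labels]
```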
353

Automatic matching of features in Synthetic Aperture Radar data to digital map data

Caves, Ronald George January 1993
The large amounts of Synthetic Aperture Radar (SAR) data now being generated demand automatic tools for image interpretation. Where available, map data provides a valuable aid for visual interpretation, and it should equally aid automatic interpretation. Automatic map-based interpretation will be heavily dependent on methods for matching image and map features, both for defining the initial registration and for comparing image and map. This thesis investigates methods for carrying out this matching. Before beginning to develop image-map matching methods, a full understanding of the nature of SAR data is required. The general theory of SAR imaging, the effects of speckle and texture on image statistics, multi-look image statistics, and parameter estimation are all discussed before addressing the main subject matter. Initially the feasibility of directly matching map features to SAR image features is investigated. Simulations based on a simple image model produce promising results. However, the results of matching features in real images are disappointing. This is due to the limitations of the image model on which matching is based. Possible extensions to include texture and correlation are considered to be computationally too expensive. Rather, it is concluded that pre-processing is needed to structure the image prior to matching. Structuring using edge detection and segmentation is investigated. Among operators for detecting edges in SAR, an operator based on intensity ratios is identified as the most suitable, and its performance is fully analysed. Segmentation using an iterative edge detection/segment growing algorithm developed at the Royal Signals and Radar Establishment is investigated and various improvements are suggested. The output of segmentation is structured to a higher level than the output of edge detection, making the former the more suitable candidate for map matching. Approaches to matching segmentations to map data are discussed.
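The preference for an intensity-ratio operator follows from speckle being multiplicative: a difference-based detector responds more strongly in bright homogeneous regions, whereas a ratio of local means does not. Below is a minimal sketch of a ratio-of-means edge strength measure under that assumption; it is illustrative, not the operator analysed in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def ratio_edge_strength(img, half=3):
    """Ratio-of-means edge strength for speckled imagery. For each pixel,
    the means m1, m2 of the `half`-pixel windows on either side are
    compared via r = min(m1/m2, m2/m1); r near 0 marks an edge regardless
    of the local brightness level. `half` must be odd. Horizontal and
    vertical edges only."""
    img = np.asarray(img, dtype=float)
    eps = 1e-12
    shift = (half + 1) // 2  # recentre the window onto one side of the pixel
    strength = np.ones_like(img)
    for axis in (0, 1):
        m = uniform_filter1d(img, size=half, axis=axis)
        m1 = np.roll(m, shift, axis=axis)   # mean of the window before the pixel
        m2 = np.roll(m, -shift, axis=axis)  # mean of the window after the pixel
        r = np.minimum(m1 / (m2 + eps), m2 / (m1 + eps))
        strength = np.minimum(strength, r)  # keep the strongest response
    return 1.0 - strength  # high values indicate a likely edge
```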
354

Human visual system informed perceptual quality assessment models for compressed medical images

Oh, Joonmi January 2000
Hospital and clinical environments are rapidly moving toward the digital capture, processing, storage, and transmission of medical images. X-ray cardio-angiograms are used to observe coronary blood flow, diagnose arterial disease and perform coronary angioplasty or bypass surgery. The digital storage and transmission of these cardiovascular images has significant potential to improve patient care. For example, digital images enable electronic archiving, network transmission and useful manipulation of diagnostic information, such as image enhancement. The efficient compression of medical images is tremendously important for economical storage and fast transmission, since digitised medical images must be of high quality, requiring high resolution, and are generally large in volume. The use of lossily compressed images has created a need for the development of objective quality assessment metrics that measure the subjective opinions perceived by viewers, for an optimal compression rate/distortion trade-off. Quality assessment metrics based on models of the human visual system have predicted perceived quality more accurately than traditional error-based objective quality metrics. This thesis presents a proposed Multi-stage Perceptual Quality Assessment (MPQA) model for compressed images. The motivation for the development of a perceptual quality assessment is to measure the (in)visible physical differences between original and processed images. MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system. Original and decompressed images are decomposed into different spatial frequency bands and orientations, modelling the human cortex. Contrast errors are calculated for each frequency and orientation, and masked as a function of contrast sensitivity and background uncertainty. Spatially masked contrast error measurements are combined across frequency bands and orientations to produce a single Perceptual Distortion Visibility Map (PDVM). A Perceptual Quality Rating (PQR) is calculated from the PDVM and transformed onto a one-to-five scale for direct comparison with the Mean Opinion Score (MOS) generally used in subjective rating. For medical applications, acceptable decompressed medical images might be those which are perceptually pleasing, contain no visible artefacts and have no loss in diagnostic content. To investigate this problem, clinical tests identifying diagnostically acceptable image reconstructions are performed, demonstrating that the proposed perceptual quality rating method has better agreement with observers' responses than objective error measurement methods. The vision models presented in the thesis are also implemented in the thresholding and quantisation stages of a compression algorithm. An HVS-informed perceptual thresholding and quantisation method is also shown to produce improved compression-ratio performance with less visible distortion.
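A toy stand-in for this pipeline is sketched below, assuming a difference-of-Gaussians band decomposition, a crude divisive contrast-masking term, and Minkowski pooling across bands. The weights, masking function, exponent and the mapping onto a one-to-five scale are illustrative assumptions, not the MPQA model's actual components.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_distortion_map(orig, decomp, sigmas=(1, 2, 4, 8), p=2.4):
    """Toy perceptual distortion map: band-pass both images with differences
    of Gaussians, divide band errors by a local-contrast masking term, and
    Minkowski-pool across bands. Returns the per-pixel map and a 1..5
    quality rating."""
    orig = np.asarray(orig, dtype=float)
    decomp = np.asarray(decomp, dtype=float)
    pooled = np.zeros_like(orig)
    for lo, hi in zip(sigmas[:-1], sigmas[1:]):
        band_o = gaussian_filter(orig, lo) - gaussian_filter(orig, hi)
        band_d = gaussian_filter(decomp, lo) - gaussian_filter(decomp, hi)
        # Divisive masking: errors are less visible where the original band
        # already contains high local contrast.
        mask = 1.0 + gaussian_filter(np.abs(band_o), hi)
        pooled += (np.abs(band_o - band_d) / mask) ** p
    pdvm = pooled ** (1.0 / p)               # per-pixel distortion visibility
    pqr = 5.0 - 4.0 * np.tanh(pdvm.mean())   # crude mapping onto a 1..5 scale
    return pdvm, pqr
```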
355

Development of parallel processing algorithms to provide automatic image analysis for medical application

Tsai, Ya-Lin January 1996
This thesis describes the development of: (i) an automatic chromosome analysis system capable of producing, to a high degree of accuracy and consistency, a correct classification for damaged chromosomes at a low cost; and (ii) a parallel computer system to enable more rapid chromosome analysis. Chromosomes can be examined in a cytogenetics laboratory for a variety of purposes, including an assessment of the effects of ionising radiation exposure on the genetic code of the cell. Scoring of chromosome aberrations caused by ionising radiation exposure is possible by detecting dicentric chromosomes. In addition, this approach provides a good biological radiation measure (dosimeter). However, current manual methods are extremely time-consuming and expensive with respect to labour costs. For low radiation doses it is necessary to analyse a large number of chromosomes to identify a small number of damaged ones in order to score the number of aberrations. Consequently, the main objective of this research programme is to develop a rapid, low-cost, and accurate automated chromosome analysis system. This research has concentrated solely on scoring dicentric chromosomes, since their characteristic shape is relatively easy to recognise in most cases and they are the aberration most commonly created by exposure to radiation. The methods and theories considered in this thesis concern chromosome image selection by automatic segment extraction, using the following: grey levels; image extraction by seed aggregation; a two-dimensional moment algorithm for chromosome orientation; chromosome centreline determination; and rapid detection of the candidate chromosome's centromere. The new methods developed by the author and presented herein concern three steps or processes in automatic chromosome analysis: (i) a new segmentation scheme; (ii) automatic selection of the cell threshold grey-scale level; and (iii) the design of a new method capable of detecting bent chromosomes, with rapid determination of the chromosome centromere. Parallel processing using the processor farm technique has been successfully developed to enable a more rapid chromosome classification system. The techniques described have been carefully tested and evaluated, and have clearly demonstrated the potential application of the analysis methods developed by the author.
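As one concrete piece of such a pipeline, the orientation step can be carried out with second-order central moments of a segmented object's pixel mask. The sketch below shows the standard moment formulation (illustrative; not the thesis's exact algorithm).

```python
import numpy as np

def object_orientation(mask):
    """Major-axis orientation (radians) of a segmented object, e.g. one
    chromosome, from the second-order central moments of its pixel mask."""
    ys, xs = np.nonzero(mask)
    x_bar, y_bar = xs.mean(), ys.mean()            # centroid
    mu20 = ((xs - x_bar) ** 2).mean()              # central moments
    mu02 = ((ys - y_bar) ** 2).mean()
    mu11 = ((xs - x_bar) * (ys - y_bar)).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
```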
356

Automatic pattern recognition

Petheram, R. J. January 1989
In this thesis the author presents a new method for the location, extraction and normalisation of discrete objects found in digital images. The extraction is by means of sub-pixel contour following around the object. The normalisation obtains and removes the information concerning the size, orientation and location of the object within an image. Analyses of the results are carried out to determine the confidence in recognition of patterns, and methods of cross-correlation of object descriptions using Fourier transforms are demonstrated.
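A textbook route to this kind of normalisation is via Fourier descriptors of the traced contour: zeroing the DC term removes location, dividing by the first harmonic removes scale, and taking coefficient magnitudes discards orientation and starting point. The sketch below illustrates that standard scheme and a simple correlation-based comparison; it is an assumption about the general approach, not the author's exact procedure.

```python
import numpy as np

def fourier_descriptors(x, y, n_coeffs=16):
    """Normalised Fourier descriptors of a closed contour: zeroing the DC
    term removes location, dividing by |c1| removes scale, and taking
    magnitudes discards orientation and starting point."""
    z = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    c = np.fft.fft(z)
    c[0] = 0.0                        # remove location
    c = c / (np.abs(c[1]) + 1e-12)    # remove scale
    return np.abs(c)[1:n_coeffs + 1]  # remove orientation / start point

def shape_similarity(desc_a, desc_b):
    """Normalised cross-correlation between two descriptor vectors."""
    a = desc_a - desc_a.mean()
    b = desc_b - desc_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```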
357

Computer extraction of human faces

Low, Boon Kee January 1999
Due to the recent advances in visual communication and face recognition technologies, automatic face detection has attracted a great deal of research interest. Being a diverse problem, the development of face detection research has comprised contributions from researchers in various fields of science. This thesis examines the fundamentals of various face detection techniques implemented since the early 1970s. Two groups of techniques are identified based on their approach to applying a priori face knowledge: feature-based and image-based. One of the problems faced by current feature-based techniques is the lack of cost-effective segmentation algorithms that are able to deal with issues such as background and illumination variations. As a result, a novel facial feature segmentation algorithm is proposed in this thesis. The algorithm aims to combine spatial and temporal information using low-cost techniques. In order to achieve this, an existing motion detection technique is analysed and implemented with a novel spatial filter, which itself proves robust for segmentation of features in varying illumination conditions. Through spatio-temporal information fusion, the algorithm effectively addresses the background and illumination problems among several head-and-shoulder sequences. Comparisons of the algorithm with existing motion and spatial techniques establish the efficacy of the combined approach.
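A minimal sketch of the spatio-temporal fusion idea, assuming frame differencing as the temporal cue and a simple dark-valley detector (grey-level closing minus the image, which responds to dark facial features such as eyes and mouth) standing in for the thesis's spatial filter, which is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_closing

def feature_candidates(prev_frame, frame, motion_thresh=10.0, valley_thresh=5.0):
    """Fuse a temporal cue (frame differencing) with a spatial cue (a
    dark-valley detector) to segment candidate facial features. Thresholds
    and filter sizes are illustrative assumptions."""
    f = frame.astype(float)
    # Temporal cue: pixels that changed between consecutive frames.
    motion = np.abs(f - prev_frame.astype(float)) > motion_thresh
    # Spatial cue: grey-level closing minus the image highlights dark
    # valleys such as eyes, eyebrows and mouth.
    valleys = grey_closing(f, size=(7, 7)) - f
    spatial = gaussian_filter(valleys, sigma=1.0) > valley_thresh
    # A candidate feature pixel must satisfy both cues.
    return motion & spatial
```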
358

Quantitative data validation (automated visual evaluations)

Martin, Anthony John Michael January 1999
Historically, validation has been performed on a case study basis employing visual evaluations, gradually inspiring confidence through continual application. At present, the method of visual evaluation is the most prevalent form of data analysis, as the brain is the best pattern recognition device known. However, the human visual/perceptual system is a complicated mechanism, prone to many types of physical and psychological influences. Fatigue is a major source of inaccuracy within the results of subjects performing complex visual evaluation tasks, whilst physical and experiential differences, along with age, have an enormous bearing on the visual evaluation results of different subjects. It is to this end that automated methods of validation must be developed to produce repeatable, quantitative and objective verification results. This thesis details the development of the Feature Selective Validation (FSV) method. The FSV method comprises two component measures based on amplitude differences and feature differences. These measures are combined, employing a measured level of subjectivity, to form an overall assessment of the comparison in question, or global difference. The three measures within the FSV method are strengthened by statistical analysis in the form of confidence levels based on amplitude, feature or global discrepancies between compared signals. Highly detailed diagnostic information on the location and magnitude of discrepancies is also made available through the employment of graphical (discrete) representations of the three measures. The FSV method also benefits from the ability to mirror human perception, whilst producing information which directly relates human variability and the confidence associated with it. The FSV method builds on the common language of engineers and scientists alike, employing categories which relate to human interpretations of comparisons, namely: 'ideal', 'excellent', 'very good', 'good', 'fair', 'poor' and 'extremely poor'. Quantitative
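The two-measure split can be illustrated with a simplified sketch: low-pass content feeds an amplitude difference measure (ADM), high-pass content and its derivatives feed a feature difference measure (FDM), and the two combine into a global difference measure (GDM). The brick-wall filter, normalisations and combination below are simplifying assumptions; the FSV definitions later standardised in IEEE Std 1597.1 differ in detail.

```python
import numpy as np

def fsv_sketch(sig1, sig2, cutoff=0.1):
    """Simplified FSV-style comparison of two 1-D data sets. Each signal is
    split into a low-pass 'amplitude' part and a high-pass 'feature' part
    with an FFT brick-wall filter; point-wise difference measures are then
    combined into a global measure. `cutoff` is the fraction of the spectrum
    treated as slowly varying trend. Illustrative only."""
    def split(sig):
        sig = np.asarray(sig, dtype=float)
        spec = np.fft.rfft(sig)
        k = max(1, int(cutoff * len(spec)))
        lo, hi = spec.copy(), spec.copy()
        lo[k:] = 0.0  # keep only the slowly varying envelope
        hi[:k] = 0.0  # keep only the rapidly varying features
        return np.fft.irfft(lo, len(sig)), np.fft.irfft(hi, len(sig))

    lo1, hi1 = split(sig1)
    lo2, hi2 = split(sig2)
    # Amplitude Difference Measure: disagreement between the trends.
    adm = np.abs(lo1 - lo2) / (np.abs(lo1) + np.abs(lo2) + 1e-12).mean()
    # Feature Difference Measure: disagreement between feature derivatives.
    d1, d2 = np.gradient(hi1), np.gradient(hi2)
    fdm = np.abs(d1 - d2) / (np.abs(d1) + np.abs(d2) + 1e-12).mean()
    # Global Difference Measure combines the two point by point.
    gdm = np.sqrt(adm**2 + fdm**2)
    return adm.mean(), fdm.mean(), gdm.mean()
```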
359

The objective world of CAD visualisation, animation, daylight and sound : the world of reality

Huang, Hsu-Jen January 1998
This thesis is based upon the study and analysis of computer visualisation applications encountered in architectural presentation. It focuses on the accuracy of environmental perception of daylight and sound as in computer visualisation software. The research has identified problems associated with computer simulation which affects architectural design representation. The literature review carried out by this research has explored: the current status of the "computer visualisation and architectural design and presentation" debate; the basic CAD premises underlying the nature of architectural design; and the problems associated with attempting to integrate 'environmental perception of design' and computer visualisation. The research design methodology is derived from both the Platonic and Aristotelian models of knowledge acquisition through research. Based on the Platonic model of knowledge acquisition through intuition, the study argues that the 'intuition' and the 'experience' gained from constructing computer models has enabled a comprehensive knowledge of computer visualisation and its applications in architectural practice. The study also adopts the Aristotelian model of knowledge acquisition through reason, and follows the 'reasoning' and 'laws' of scientific consideration to assess the performance and accuracy of computer visualisation for the environmental perception of daylight and sound. Three case studies determined the problems pertaining to computer visualisation, as employed in architectural design representation. Case study one explores the use of CAD visualisation and animation as an aid to research architectural history. Case study two examines computer visualisation and animation usage in architectural design evaluation and analysis. Case study three investigates the problems encountered in developing computerised architectural representations of the environmental perception of sound and light. The application of both the Platonic and Aristotelian models of knowledge acquisition through research to the above case studies has led to the following findings: A) Computerised three-dimensional modelling requires both precise geometrical and visual information. This tool, applied to architectural historical research, has necessitated an accurate documentation process, which has resulted in a deeper understanding of architectural elements, their positions, and their relationships within the context. As a result, computer visualisation has provided more accurate simulations and thus is a useful application for architectural utilisation. B) In terms of design evaluation and analysis, the Object Based Modelling approach enables a more manageable environment relative to traditional paper-based media. The problems remaining are those of inputting the two- or three-dimensional visual and contextual data. The accuracy of computerised modelling is an essential premise that enhances CAD visualisation as a successful tool for design evaluation and analysis. C) Environmental perception is not fully supported by CAD visualisation and animation. The appearance of lighting conditions depends upon the visualisation designer's interpretation and manipulation of natural or artificial lighting. The accuracy of lighting simulation is subjective, and therefore doubtful. The subjective sound presentation (music) may be of value in order to establish aesthetic principles of architectural presentation, where it can help a tangible design to be expressed and communicated through an intangible medium. The environmental perception of background sound/noise is missing in computerised presentation, because of its unavailability in the programme. In addition, the capability of inputting data, such as lighting intensity and noise levels, is also lacking. This information should be taken into account in future modelling systems. Building upon this research, guidelines are established that will be useful for architects, computer programmers and other researchers in further research and development of CAD visualisation and animation. In response to the specific requirements of different professions, this study proposes a multi-disciplinary approach to CAD visualisation and animation in order to develop and achieve realistic and effective computer visualisation.
360

An object oriented model of machine vision

Brown, Gary January 1997
In this thesis an object oriented model is proposed that satisfies the requirements for a generic, customisable, reusable and flexible machine vision framework. These requirements are identified as being: ease of customisation for a particular application domain; independence from image definition; independence from shape representation scheme; ability to add new domain-specific shape descriptors; independence from implemented machine vision algorithms; and the ability to maximise reuse of the generic framework. The thesis begins with a review of key machine vision functions and traditional architectures. In particular, machine vision architectures predicated on a process oriented framework are examined in detail and evaluated against the criteria stated above. An object oriented model is developed within the thesis, identifying the key classes underlying the machine vision domain. The responsibilities of these classes, and the relationships between them, are analysed in the context of high-level machine vision tasks, for example object recognition. This object oriented approach is then contrasted with the more traditional process oriented approach. The object oriented model and framework is subsequently evaluated through a customisation to illustrate an example machine vision application, namely Surface Mounted Electronic Assembly inspection. The object oriented model is also evaluated in the context of two functional machine vision applications described in the literature. The model developed in this thesis incorporates the fundamental object oriented concepts of abstraction, encapsulation, inheritance and polymorphism. The results show that an object oriented approach does achieve the requirements for a generic, customisable, reusable and flexible machine vision framework.
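As a flavour of how such a framework separates abstraction from implementation, here is a minimal Python sketch (the class names are illustrative, not those defined in the thesis): the recogniser depends only on an abstract descriptor interface, so representation schemes can be swapped in without touching the recognition code.

```python
from abc import ABC, abstractmethod

class ShapeDescriptor(ABC):
    """Abstract interface for shape representation schemes; the framework
    depends only on this abstraction."""

    @abstractmethod
    def describe(self, contour) -> list:
        """Turn a segmented contour into a feature vector."""

    @abstractmethod
    def distance(self, a: list, b: list) -> float:
        """Dissimilarity between two feature vectors."""

class BoundingBoxDescriptor(ShapeDescriptor):
    """Deliberately trivial concrete descriptor: the contour's box extents."""

    def describe(self, contour) -> list:
        xs = [p[0] for p in contour]
        ys = [p[1] for p in contour]
        return [max(xs) - min(xs), max(ys) - min(ys)]

    def distance(self, a: list, b: list) -> float:
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

class Recogniser:
    """Matches a contour against named model shapes. Because it works
    through the ShapeDescriptor interface (polymorphism), a descriptor
    tuned to, say, solder-joint inspection can be swapped in unchanged."""

    def __init__(self, descriptor: ShapeDescriptor, models: dict):
        self.descriptor = descriptor
        self.models = {name: descriptor.describe(c) for name, c in models.items()}

    def recognise(self, contour) -> str:
        features = self.descriptor.describe(contour)
        return min(self.models,
                   key=lambda name: self.descriptor.distance(features, self.models[name]))
```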
