431

Computer recognition of occluded curved line drawings

Adler, Mark Ronald January 1978 (has links)
A computer program has been designed to interpret scenes from PEANUTS cartoons, viewing each scene as a two-dimensional representation of an event in the three-dimensional world. Characters are identified by name, their orientation and body position are described, and their relationship to other objects in the scene is indicated. This research is seen as an investigation of the problems in recognising flexible non-geometric objects which are subject to self-occlusion as well as occlusion by other objects. A hierarchy of models containing both shape and relational information has been developed to deal with the flexible cartoon bodies. Although the region is the basic unit used in the analysis, the hierarchy makes use of intermediate models to group individual regions into larger, more meaningful functional units. These structures may be shared at a higher level in the hierarchy. Knowledge of model similarities may be applied to select alternative models and conserve some results of an incorrect model application. The various groupings account for differences among the characters or modifications in appearance due to changes in attitude. Context information plays a key role in the selection of models to deal with ambiguous shapes. By emphasising relationships between regions, the need for a precise description of shape is reduced. Occlusion interferes with the model-based analysis by obscuring the essential features required by the models: both the perceived shape of the regions and the inter-relationships between them are altered. A heuristic based on the analysis of line junctions is used to confirm occlusion as the cause of the failure of a model-to-region match. This heuristic, an extension of the T-joint techniques of polyhedral domains, deals with "curved" junctions and can be applied to cases of multi-layered occlusion. The heuristic was found to be most effective in dealing with occlusion between separate objects; standard instances of self-occlusion were more effectively handled at the model level. This thesis describes the development of the program, structuring the discussion around three main problem areas: models, occlusion, and the control aspects of the system. Relevant portions of the program's analyses are used to illustrate each problem area.
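The T-junction idea generalises directly from the polyhedral world: where one boundary passes smoothly through a junction and a third edge terminates against it, the continuous boundary usually belongs to the occluding surface. The sketch below is only a minimal illustration of that cue, not the thesis's actual heuristic; the junction representation (a point plus the tangent directions of incident curve segments) and the collinearity tolerance are assumptions.

```python
import math
from itertools import combinations

def classify_t_junction(tangents, tol_deg=20.0):
    """Classify a 3-edge junction as a (curved) T-junction.

    tangents : list of 3 angles (radians), the outgoing tangent direction
               of each curve segment at the junction point.
    Returns (bar_indices, stem_index) if two tangents are roughly opposite
    (they form the continuous, occluding boundary), the third edge being the
    stem (the interrupted, occluded boundary); otherwise None.
    """
    if len(tangents) != 3:
        return None
    tol = math.radians(tol_deg)
    for i, j in combinations(range(3), 2):
        # Angle between the two directions; near pi means the two segments run
        # in opposite directions, i.e. one boundary passes straight through.
        diff = abs(math.atan2(math.sin(tangents[i] - tangents[j]),
                              math.cos(tangents[i] - tangents[j])))
        if abs(diff - math.pi) < tol:
            stem = ({0, 1, 2} - {i, j}).pop()
            return (i, j), stem
    return None

# Example: a boundary running left-right, with a third edge arriving from below.
print(classify_t_junction([0.0, math.pi, -math.pi / 2]))  # ((0, 1), 2)
```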
432

Smooth relevance vector machines

Schmolck, Alexander January 2008 (has links)
Regression tasks belong to the set of core problems faced in statistics and machine learning, and promising approaches can often be generalized to also deal with classification, interpolation or denoising problems. Whereas the most widely used classical statistical techniques place severe a priori constraints on the type of function that can be approximated (e.g. only lines, in the case of linear regression), the successes of sparse kernel learners, such as the SVM (support vector machine), demonstrate that good results may be obtained in a quite general framework by enforcing sparsity. Similarly, even very simple sparsity-based denoising techniques, such as classical wavelet shrinkage, can produce surprisingly good results on a wide variety of different signals because, unlike noise, most signals of practical interest share vital characteristics (such as smoothness, or the ability to be well approximated by piece-wise polynomials of low order) that allow a sparse representation in wavelet space. On the other hand, results obtained from SVMs (and classical wavelet shrinkage) suffer from a certain lack of interpretability, since one cannot straightforwardly attach probabilities to them. By contrast, regression, and even more importantly classification, in a Bayesian context always entails a probabilistic measure of confidence in the results, which, provided the model assumptions are reasonably accurate, forms a basis for principled decision-making. The relevance vector machine (RVM) combines these strengths by explicitly encoding the criterion of model sparsity as a (Bayesian) prior over the model weights, and offers a single, unified paradigm to efficiently deal with regression as well as classification tasks. However, the lack of an explicit prior structure over the weight variances means that the degree of sparsity is to a large extent controlled by the choice of kernel (and kernel parameters). This can lead to severe overfitting or oversmoothing, possibly even both at the same time (e.g. for the multiscale Doppler data). This thesis details an efficient scheme to control sparsity in Bayesian regression by incorporating a flexible noise-dependent smoothness prior into the RVM. The resultant smooth RVM (sRVM) encompasses the original RVM as a special case, but empirical results with a variety of popular data sets show that it can surpass RVM performance in terms of goodness of fit and achieved sparsity, as well as computational performance, in many cases. As the smoothness prior effectively makes it possible to use (highly efficient) wavelet kernels in an RVM setting, this work also unveils a strong connection between Bayesian wavelet shrinkage and RVM regression, and effectively extends the applicability of the RVM to denoising tasks for up to millions of data points. We further discuss its applicability to classification tasks.
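For reference, the original RVM's type-II maximum likelihood updates can be written in a few lines of numpy. The sketch below is a minimal standard RVM regression with a Gaussian kernel design matrix, not the sRVM's smoothness prior; the kernel width, iteration count and precision cap are arbitrary choices.

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=200, alpha_cap=1e6):
    """Minimal relevance vector machine regression (type-II ML updates).

    Phi : (N, M) design matrix (e.g. kernel evaluations), t : (N,) targets.
    Returns posterior mean weights mu, precisions alpha, noise precision beta.
    """
    N, M = Phi.shape
    alpha = np.ones(M)          # per-weight precisions (sparsity prior)
    beta = 1.0 / np.var(t)      # noise precision
    mu = np.zeros(M)
    for _ in range(n_iter):
        A = np.diag(alpha)
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + A)    # posterior covariance
        mu = beta * Sigma @ Phi.T @ t                    # posterior mean
        gamma = 1.0 - alpha * np.diag(Sigma)             # "well-determinedness"
        alpha = gamma / (mu ** 2 + 1e-12)                # re-estimate precisions
        resid = t - Phi @ mu
        beta = (N - gamma.sum()) / (resid @ resid + 1e-12)
        alpha = np.minimum(alpha, alpha_cap)             # cap to avoid overflow
    return mu, alpha, beta

# Toy usage: noisy sinc data with a Gaussian kernel design matrix.
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 100)
t = np.sinc(x / np.pi) + 0.05 * rng.standard_normal(x.size)
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 2.0 ** 2))
mu, alpha, beta = rvm_regression(Phi, t)
print("relevance vectors:", int((alpha < 1e4).sum()))
```

Weights whose precision grows very large are effectively pruned, which is how the sparsity emerges.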
433

Automatic fish species grading using image processing and pattern recognition techniques

Strachan, N. J. C. January 1990 (has links)
Size and species grading of fish (e.g. on board a fishing vessel) might in future be done entirely automatically using image analysis and pattern recognition techniques. Three methods of discriminating between pictures of seven different species of fish have been compared: using invariant moments, optimisation of the mismatch, and shape descriptors. A novel method of obtaining the moments of a polygon is described. It was found that the shape descriptors gave the best results, with a sorting reliability of 90%. Different methods of producing symmetry lines from the shape of fish have been studied in order to describe fish bending and deformations. The simple thinning algorithm was found to work best to provide a reference axis. This axis was then used as a basis for constructing a deformation-independent position reference system. Using this reference system, position-specific colour measurements of fish could be taken. For this to be done, the video digitising system was first calibrated in the CIELUV colour space using the Macbeth colour chart. Colour and shape measurements were then made on 18 species of demersal and 5 species of pelagic fish. The simple shape measurements of length/width and front area/back area ratios were used to do some introductory separation of the fish. Then the variables produced by the shape descriptors and colour measurements were analysed by discriminant analysis. It was found that all of the demersal fish were sorted correctly (sorting reliability of 100%) and all of the pelagic fish were sorted correctly except one (sorting reliability of 98%). A prototype machine is now being constructed based on the methods described in this work.
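Moments of a polygonal region can be computed exactly from its vertices via Green's theorem, with no need to rasterise the outline. The sketch below shows the standard textbook construction for area, centroid and second-order moments of a simple (non-self-intersecting) polygon; it is not necessarily the novel method described in the thesis.

```python
def polygon_moments(vertices):
    """Exact low-order moments of a simple (non-self-intersecting) polygon.

    vertices : sequence of (x, y) pairs in order around the boundary.
    Returns (area, (cx, cy), (m20, m02, m11)) where m20 = integral of x^2 dA,
    m02 = integral of y^2 dA and m11 = integral of x*y dA, about the origin.
    """
    a = cx = cy = m20 = m02 = m11 = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        c = x0 * y1 - x1 * y0                      # twice the signed edge-triangle area
        a += c
        cx += (x0 + x1) * c
        cy += (y0 + y1) * c
        m20 += (x0 * x0 + x0 * x1 + x1 * x1) * c
        m02 += (y0 * y0 + y0 * y1 + y1 * y1) * c
        m11 += (x0 * y1 + 2 * x0 * y0 + 2 * x1 * y1 + x1 * y0) * c
    a *= 0.5
    sign = 1.0 if a >= 0 else -1.0                 # handle clockwise ordering
    cx, cy = cx / (6.0 * a), cy / (6.0 * a)
    return abs(a), (cx, cy), (sign * m20 / 12, sign * m02 / 12, sign * m11 / 24)

# Unit square: area 1, centroid (0.5, 0.5), m20 = m02 = 1/3, m11 = 1/4.
print(polygon_moments([(0, 0), (1, 0), (1, 1), (0, 1)]))
```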
434

An investigation of parallel computing techniques in clinical image processing, using transputers

Byrne, John January 1992 (has links)
The objective of this work is to investigate the prospects for parallel computing in image processing applications in medicine. The tasks involved in filtered back projection reconstruction and retinal image registration are implemented on a hardware system based on the Inmos Transputer, using the OCCAM language. A number of task decomposition methods are used and their advantages and disadvantages discussed. An example which uses a pipeline illustrates the sensitivity of such an approach to uneven computational load over the network of processors. Farm networks and methods which use simple block division of data are investigated. The design of a system of interconnected processes for efficient, deadlock-free communication within a grid network is proposed, designed and implemented. The differences in observed performance due both to task decomposition and to network communication are discussed. An image registration technique which uses the edges of image features to reduce the amount of data involved is proposed and implemented using the parallel network. The results of tests on the performance of the registration technique are presented. Registration using the edge-based technique is successful for a significant proportion of image pairs but, because of the high likelihood of registration being required for poor-quality images, improvements are worthwhile. Where the registration technique failed, a human observer had equal difficulty in manually registering the same pair of images. A number of suggestions are made for further improvements of the technique. A strategy is proposed which uses task overlapping to improve the efficiency of a multi-stage parallel system of processes. The work highlights the most important factors in the application of parallel computing in image processing using a MIMD network and suggests a number of areas where further work is needed.
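A processor farm decouples distribution from processing: a controller hands image blocks to whichever worker is free, which keeps the load balanced even when blocks take unequal times, unlike a fixed pipeline. The sketch below mimics that idea with a pool of worker processes filtering row bands of an image; the band size and the 3x3 mean filter are placeholders, and the original work was of course written in OCCAM on transputers rather than Python.

```python
import numpy as np
from multiprocessing import Pool

def filter_block(args):
    """Worker task: smooth one image band with a simple 3x3 mean filter."""
    index, block = args
    padded = np.pad(block, 1, mode="edge")
    out = sum(padded[dy:dy + block.shape[0], dx:dx + block.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    return index, out

def farm_filter(image, band_rows=64, workers=4):
    """Controller: split the image into row bands and farm them out."""
    bands = [(i, image[i:i + band_rows]) for i in range(0, image.shape[0], band_rows)]
    result = np.empty_like(image, dtype=float)
    with Pool(workers) as pool:
        # Results come back in whatever order the workers finish.
        for i, out in pool.imap_unordered(filter_block, bands):
            result[i:i + out.shape[0]] = out
    return result

if __name__ == "__main__":
    img = np.random.default_rng(0).random((512, 512))
    print(farm_filter(img).shape)
```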
435

Network accessible parallel computing systems, based upon transputers, for image processing strategies

Ross, Philip January 1993 (has links)
Over the last decade there has been a steady increase in the size of primary data sets collected from medical imaging devices, and a correspondingly increased requirement for the computational power needed for associated image processing techniques. Although conventional processors have shown considerable advances throughout this period, they have failed to keep pace with the demands placed upon them by clinicians keen to utilise techniques such as pseudo three-dimensional volume image presentation and high-speed dynamic display of multiple frames of data. One solution that has the capability to meet these needs is to use multiple processors, co-operating to solve specified tasks using parallel processing. This thesis, which reports work undertaken during the period 1988-1991, shows how a network-accessible parallel computing resource can provide an effective solution to these classes of problems. Starting from the premise that any generally accessible array of processors has to be connected to the inter-computer communication network, an Ethernet node was designed and constructed using the Inmos transputer. With this it was possible to demonstrate the benefits of parallel processing. Particular emphasis had to be given to those elements of the software which must make a guaranteed real-time response to external stimuli, and it is shown that by isolating high-priority processes, relatively simple OCCAM code can satisfy this need. Parallel processing principles have been utilised by the communications software that implemented the upper layers of the OSI seven-layer network reference model using the Internet suite of protocols. By developing an abstract high-level language, software was developed which allowed users to specify the inter-processor connection topology of a point-to-point connected multi-transputer array, built in association with this work. After constructing a flexible, memory-efficient graphics library, a technique was developed to allow high-speed zooming of byte-sized pixel data using a lookup table. By using a multi-transputer design, this allowed a 128x128 pixel image to be displayed at 256x256 pixel resolution at up to 25 frames per second, a requirement imposed by a contemporary cardiac imaging project.
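Pixel-replication zoom through a lookup table is compact to express: each byte-valued pixel is mapped through a 256-entry display table and written to a 2x2 block of output pixels. A minimal sketch follows; the identity grey-scale table and the fixed 2x zoom factor are illustrative assumptions, not details of the transputer implementation.

```python
import numpy as np

def lut_zoom2x(image_u8, lut):
    """Zoom an 8-bit image by pixel replication while applying a 256-entry LUT.

    image_u8 : (H, W) uint8 image, lut : (256,) array of display values.
    Returns a (2H, 2W) array of LUT-mapped pixels.
    """
    mapped = lut[image_u8]                         # one table lookup per pixel
    return np.repeat(np.repeat(mapped, 2, axis=0), 2, axis=1)

# 128x128 test image zoomed to 256x256 through an identity grey-scale LUT.
img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
lut = np.arange(256, dtype=np.uint8)
print(lut_zoom2x(img, lut).shape)                  # (256, 256)
```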
436

Analysis of Landsat MSS data for land cover mapping of large areas

Hubbard, Neil K. January 1985 (has links)
One of the principal advantages of satellite data is the ability to provide terrain information over large areas, but past analyses of Landsat MSS data have tended to concentrate on developing techniques for small study areas. A method is developed for producing large-area land cover maps from Landsat MSS data of Scotland. A stratified, interactive approach to image analysis produced the best results, incorporating a hybrid classification method involving a thorough selection process for training data pixels. Classification is implemented by either a minimum distance or a maximum likelihood technique, which is further improved by post-classification editing and smoothing procedures. Results from a training and testing area produced a final classification statistically assessed as 87.3% correct. The method has subsequently been used to produce three maps of primary land cover types for the Highland, Grampian and Tayside regions (total area 41,330 km²).
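Minimum-distance classification assigns each multispectral pixel to the class whose training mean is nearest in feature space; maximum likelihood additionally uses per-class covariances. The sketch below shows only the minimum-distance rule for 4-band MSS-style pixels; the class means and labels are placeholders, not values from the thesis.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Minimum-distance-to-means classifier.

    pixels      : (N, B) array of B-band pixel vectors.
    class_means : (K, B) array of per-class mean spectra from training data.
    Returns an (N,) array of class indices (0..K-1).
    """
    # Squared Euclidean distance from every pixel to every class mean.
    d2 = ((pixels[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Toy example with 4 MSS-style bands and 3 hypothetical cover classes.
means = np.array([[20, 25, 60, 55],     # e.g. vegetation
                  [40, 45, 50, 48],     # e.g. bare ground
                  [10, 12,  8,  5]],    # e.g. water
                 dtype=float)
pixels = np.array([[22, 26, 58, 52], [11, 10, 9, 6]], dtype=float)
print(min_distance_classify(pixels, means))        # [0 2]
```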
437

A multimodality magnetic resonance system for studying free radicals in biological systems

McCallum, Stephen John January 1997 (has links)
Free radicals are defined as molecules with one or more unpaired electrons in their outer orbitals. They have been implicated in a large number of disease states and consequently there is increasing interest in detecting them in vivo. Having an uncancelled electron spin, free radicals are amenable to magnetic resonance experiments. For reasons of sensitivity, commercially available electron paramagnetic resonance (EPR) spectrometers operate in the X-band (9 GHz). Such frequencies are unsuitable for large biological samples because of excessive electromagnetic losses. This thesis describes the development of a radio-frequency continuous-wave (RF CW) EPR spectrometer operating around 280 MHz, suitable for in vivo studies. The instrument is based around an existing low-field NMR imager. The spectrometer includes both automatic frequency control and automatic coupling systems to combat the problems of animal motion. The instrument has been able to detect free radicals in living animals. PEDRI is a technique that can provide high-resolution images showing free radical distribution in living systems. The method is based on conventional pulsed NMR imaging combined with dynamic nuclear polarisation. The disadvantage of PEDRI is that it is difficult to obtain spectral information such as EPR line-width and g-factor. These parameters are easy to obtain by CW-EPR and can give useful information. A further development was the combining of the RF CW-EPR instrument with a PEDRI imager to produce a multimodality instrument capable of sequential PEDRI and CW-EPR on the same sample. Switch-over between the two modes of operation takes less than 5 seconds. This instrument combines the advantages of the two types of free radical detection in a single instrument, providing an extremely useful and flexible tool.
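The choice of 280 MHz fixes the static field through the electron resonance condition hν = gμ_B B; for a free-electron g-factor this puts the field near 10 mT, far below the roughly 0.3 T needed at X-band, which is what makes large, lossy biological samples feasible. The short calculation below simply checks that arithmetic; the constants are standard physical values, not instrument parameters from the thesis.

```python
# Resonance condition for an unbound electron spin: h * nu = g * mu_B * B.
H = 6.62607015e-34        # Planck constant, J s
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
G_E = 2.0023              # free-electron g-factor

def epr_field(freq_hz, g=G_E):
    """Static field (tesla) at which an electron of g-factor g resonates."""
    return H * freq_hz / (g * MU_B)

print(f"280 MHz -> {epr_field(280e6) * 1e3:.1f} mT")   # ~10 mT
print(f"9.5 GHz -> {epr_field(9.5e9) * 1e3:.0f} mT")   # ~339 mT (X-band)
```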
438

The pixel generator : a VLSI device for computer generated images

Evans, Steven R. January 1993 (has links)
No description available.
439

A framestore graphics system for colour map displays

Economou, D. January 1982 (has links)
No description available.
440

Digital image processing for noise reduction in medical ultrasonics

Loupas, Thanasis January 1988 (has links)
No description available.
