191

An optimization-based approach for cost-effective embedded DSP system design

DeBardelaben, James Anthony 05 1900 (has links)
No description available.
192

Wavelet analysis and classification of surface electromyography signals

Kilby, Jeff Unknown Date (has links)
A range of signal processing techniques has been adopted and developed into a methodology for building an intelligent surface electromyography (SEMG) signal classifier. Such a classifier would be used by physiotherapists and occupational therapists in recognising and treating musculoskeletal pain and some neurological disorders. SEMG signals display the electrical activity of a skeletal muscle, detected by surface electrodes placed on the skin over the muscle. The key elements of this research were the investigation of digital signal processing using various analysis schemes and the use of an Artificial Neural Network (ANN) for classifying signals of normal muscle activity. The analysis schemes explored for feature extraction were the Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Continuous Wavelet Transform (CWT), Discrete Wavelet Transform (DWT) and Discrete Wavelet Packet Transform (DWPT). Traditional analysis methods such as the FFT could not be used alone, because muscle diagnosis requires time-based information. The CWT, which was selected as the most suitable for this research, retains time-based information as well as scales, which can be converted into frequencies, making muscle diagnosis easier. The CWT produces a scalogram plot along with its corresponding frequency-time spectrum plot; from these two plots, extracted features of the dominant frequencies and the related scales can be selected as inputs to train and validate an ANN.

The purpose of this research is to classify SEMG signals for normal muscle activity using different extracted features in an ANN. The CWT features extracted in this research were the mean and median frequencies of the average power spectrum and the RMS values at scales 8, 16, 32, 64 and 128. SEMG signals were recorded for a 10-second period, sampled at 2048 Hz and digitally filtered using a Butterworth band-pass filter (5 to 500 Hz, 4th order). They were collected from normal vastus lateralis and vastus medialis muscles of both legs of 45 male subjects at 25%, 50% and 75% of their Maximum Voluntary Isometric Contraction (MVIC) force of the quadriceps.

An ANN is a computer program which, acting like networks of brain neurons, recognises and learns patterns in data and produces a model of that data; this model becomes the target output of the ANN. The ANN was trained on the extracted-feature data sets of the first 35 male subjects and then validated with the untrained data sets of the last 10 male subjects. The results showed how accurately the untrained data were classified as normal muscle activity. This methodology, using the CWT to extract features that are analysed and classified by an ANN, has proven sound and successful as a basis for developing an intelligent SEMG signal classifier.
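The feature-extraction pipeline described in this abstract can be sketched in a few lines of Python. This is a minimal illustration only, assuming PyWavelets and SciPy; the Morlet wavelet, the `preprocess`/`cwt_features` names and the exact spectrum estimate are assumptions of the sketch, not details taken from the thesis.

```python
# Sketch of the CWT feature-extraction pipeline described above.
# Assumes PyWavelets (pywt) and SciPy; wavelet choice and names are illustrative.
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt

FS = 2048                                   # sampling rate (Hz), per the abstract
SCALES = np.array([8, 16, 32, 64, 128])     # scales used in the study

def preprocess(raw):
    """Band-pass 5-500 Hz with a 4th-order Butterworth filter, as in the study."""
    sos = butter(4, [5, 500], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, raw)

def cwt_features(semg, wavelet="morl"):
    """Mean/median frequency of the average power spectrum plus RMS per scale."""
    coefs, freqs = pywt.cwt(semg, SCALES, wavelet, sampling_period=1.0 / FS)
    rms = np.sqrt(np.mean(coefs ** 2, axis=1))      # one RMS value per scale
    spec = np.mean(coefs ** 2, axis=1)              # average power at each scale
    order = np.argsort(freqs)                       # sort by ascending frequency
    f, p = freqs[order], spec[order]
    mean_f = np.sum(f * p) / np.sum(p)
    median_f = f[np.searchsorted(np.cumsum(p), 0.5 * np.sum(p))]
    return np.concatenate(([mean_f, median_f], rms))

# Hypothetical usage: one 10 s recording -> a 7-element ANN input vector.
features = cwt_features(preprocess(np.random.randn(10 * FS)))
```

A 10-second recording at 2048 Hz would thus yield a 7-element feature vector (mean frequency, median frequency and five RMS values) of the kind used as ANN training and validation inputs.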
193

Hierarchical segmentation of mammograms based on pixel intensity

Masek, Martin January 2004 (has links)
Mammography is currently used to screen women in targeted risk classes for breast cancer. Computer-assisted diagnosis of mammograms attempts to lower the workload on radiologists by either automating some of their tasks or acting as a second reader. This thesis addresses the task of mammogram segmentation based on pixel intensity. The mammographic process leads to images where intensity is related to the composition of tissue in the breast; it is therefore possible to segment a mammogram into several regions using a combination of global thresholds, local thresholds and higher-level information based on the intensity histogram. A hierarchical view is taken of the segmentation process, with a series of steps that feed into each other. Methods are presented for segmentation of: 1. image background regions; 2. the skin-air interface; 3. the pectoral muscle; and 4. the database, by classification of mammograms into tissue types and determination of a similarity measure between mammograms. All methods are automatic.

After a detailed analysis of minimum cross-entropy thresholding, multi-level thresholding is used to segment the main breast tissue from the background. Scanning artefacts and high-intensity noise are separated from the breast tissue using binary image operations; rectangular labels are identified from the binary image by their shape; the Radon transform is used to locate the edges of tape artefacts; and a filter is used to locate vertically running roller scratches. Orientation of the image is determined using the shape of the breast and properties of the breast tissue near the breast edge. Unlike most existing orientation algorithms, which only distinguish between left-facing and right-facing breasts, the algorithm developed here determines orientation for images flipped upside down or rotated onto their side, and works successfully on all images of the testing database. Orientation is an integral part of the segmentation process, as skin-air interface and pectoral muscle extraction rely on it.

A novel way to view the skin-line on the mammogram is as two sets of functions, one set with the x-axis along the rows and the other with the x-axis along the columns. Using this view, a local thresholding algorithm and a more sophisticated optimisation-based algorithm are presented. Using polynomials fitted along the skin-air interface, the error between the polynomial and the breast boundary extracted by a threshold is minimised by optimising the threshold and the degree of the polynomial. The final fitted line exhibits the inherent smoothness of the polynomial and provides a more accurate estimate of the skin-line when compared to another established technique.

The edge of the pectoral muscle is a boundary between two relatively homogeneous regions. A new algorithm is developed to obtain a threshold that separates adjacent regions distinguishable by intensity. Taking several local windows containing different proportions of the two regions, the threshold is found by examining the behaviour of either the median intensity or a modified cross-entropy intensity as the proportion changes. Image orientation is used to anchor the window corner in the pectoral-muscle corner of the image, and straight-line fitting is used to generate a more accurate result from the final threshold. An algorithm is also presented to evaluate the accuracy of different pectoral edge estimates. Identification of the image background and the pectoral muscle allows the breast tissue to be isolated in the mammogram.
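As a rough illustration of the intensity-based hierarchy described above, the sketch below applies minimum cross-entropy thresholding in two stages using scikit-image's `threshold_li` (an implementation of Li's minimum cross-entropy criterion). The two-stage flow and the function names are illustrative assumptions, not the thesis's exact pipeline.

```python
# Two-stage intensity-thresholding sketch using Li's minimum cross-entropy
# criterion (scikit-image's threshold_li). Illustrative only.
import numpy as np
from skimage.filters import threshold_li

def segment_breast(mammogram):
    """Stage 1: separate breast tissue from the dark image background."""
    return mammogram > threshold_li(mammogram)

def segment_dense_tissue(mammogram, breast_mask):
    """Stage 2: re-threshold within the breast only (the multi-level idea)."""
    t_dense = threshold_li(mammogram[breast_mask])
    return breast_mask & (mammogram > t_dense)
```

Each stage feeds its output mask into the next, mirroring the hierarchical view in which background removal precedes skin-line and pectoral-muscle extraction.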
The density and pattern of the breast tissue are correlated with 1. breast cancer risk and 2. the difficulty of reading for the radiologist. Computerised density assessment methods have in the past been feature-based, with a number of features extracted from the tissue or its histogram and used as input to a classifier. Here, histogram distance measures have been used to classify mammograms into density types, and also to order the image database according to image similarity. The advantage of histogram distance measures is that they are less reliant on the accuracy of segmentation and the quality of extracted features, as the whole histogram is used to determine distance rather than being reduced to a set of features. Existing histogram distance measures have been applied, and a new histogram distance is presented, showing higher accuracy than other such measures and better performance than an established feature-based technique.
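A minimal sketch of the histogram-distance idea follows, using the established chi-squared distance as a stand-in; the thesis's own new distance measure is not reproduced here, and all names are illustrative.

```python
# Ordering a mammogram database by histogram distance: a sketch using the
# established chi-squared distance (the thesis's new measure is not shown).
import numpy as np

def intensity_histogram(image, mask, bins=256):
    """Normalised grey-level histogram of breast-tissue pixels (8-bit assumed)."""
    h, _ = np.histogram(image[mask], bins=bins, range=(0, 256), density=True)
    return h

def chi2_distance(h1, h2, eps=1e-12):
    """Chi-squared distance between two normalised histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def rank_by_similarity(query_hist, database_hists):
    """Indices of database images, most similar first."""
    return np.argsort([chi2_distance(query_hist, h) for h in database_hists])
```

Because the whole histogram enters the distance, an inaccurate segmentation mask perturbs the ranking less than it would perturb a handful of extracted features.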
194

Digitising photographic negatives and prints for preservation

Carstens, Andries Theunis January 2013 (has links)
A dissertation presented to the Faculty of Informatics and Design of the Cape Peninsula University of Technology in fulfilment of the requirements for the degree Magister Technologiae: Photography, Cape Peninsula University of Technology, 2013. / This study deals with the pitfalls and standards associated with the digitisation of photographic artefacts in formal collections. The popularity of the digital medium has caused a rapid increase in the demand for converting images into digital files. The need for equipment capable of executing the task successfully, the pressure on collection managers to display their collections to the world, and the demand for knowledge from managers and operators created pressure to perform optimally, often in great haste. As a result of the rush to create digital image files for display and preservation, questionable decisions may be made. The best choice of file formats for longevity, the setting and maintaining of standards to guarantee quality digital files, consultation with experts in the field of digitisation, and attention to best practices are important aspects which must be considered.

In order to determine the state of affairs in countries with advanced knowledge and experience in the field of digitisation, a comprehensive literature study was done. It was found that enough information exists to enable collection managers in South Africa to make well-informed decisions that ensure a high-quality digital collection. By means of questionnaires, a survey was undertaken amongst selected Western Cape image preservation institutions to determine the level of knowledge of the managers who are required to make informed decisions. The questionnaire was designed to give insight into choices being made regarding the technical quality, workflow and best-practice aspects of digitisation. Comparing the outcome of the questionnaires with best practices and recommended standards in countries with an advanced level of experience, it was found that not enough of this experience and knowledge is used by local collection managers, although it is readily available. In some cases standards are disregarded completely. The study also investigated, by means of questionnaires, the perception of the digital preservation of image files among full-time photographic students and volunteer members of the Photographic Society of South Africa. It was found that uncertainty exists within both groups with regard to file longevity and access to files in five to ten years' time.

Digitisation standards are set and maintained by the use of specially designed targets, which enable digitising managers to maintain control over the quality of the digital content as well as to monitor equipment performance. The use of these targets to set standards was investigated and found to be an accurate and easy method of maintaining control over the standard and quality of digital files. Suppliers of digitising equipment very often market their equipment as being of high quality and able to fulfil the required digitisation tasks. Testing selected digitising equipment by means of specially designed targets proved, however, that potential buyers of equipment in the high cost range should be very cautious about suppliers' claims without proof of performance; using targets to verify performance should be a routine check before any purchase. The study concludes with recommendations for implementing standards and points to potential future research.
195

Classification of wheat kernels by machine-vision measurement

Schmalzried, Terry Eugene. January 1985 (has links)
Call number: LD2668 .T4 1985 S334 / Master of Science
196

Digital image noise smoothing using high frequency information

Jarrett, David Ward, 1963- January 1987 (has links)
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high-frequency information. Statistically optimal methods must use accurate statistical models of the image and noise; subjective methods must also characterize the image. Two methods that use high-frequency information to augment existing noise smoothing methods are investigated: two-component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high-frequency residual, extracted from the noisy image using a two-component source model; the lower variance and increased stationarity of the residual, compared to the original image, increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
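A minimal sketch of the SDE idea, assuming SciPy; the Gaussian low-pass filter and the weight `alpha` are illustrative choices, not the thesis's actual filters.

```python
# Sketch of second-derivative-enhancement (SDE) smoothing: low-pass the
# noisy image, then restore edge contrast by subtracting a scaled Laplacian
# (second derivative) extracted from the noisy image.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sde_smooth(noisy, sigma=1.5, alpha=0.4):
    lowpass = gaussian_filter(noisy, sigma=sigma)   # smooths noise and edges
    second_deriv = laplace(noisy)                   # high-frequency edge information
    return lowpass - alpha * second_deriv           # re-sharpen the blurred edges
```

The weight `alpha` trades noise suppression against edge overshoot: too small and edges stay blurred, too large and residual noise in the second derivative reappears.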
197

Monitoring of froth systems using principal component analysis

Kharva, Mohamed 04 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2002. / ENGLISH ABSTRACT: Flotation is notorious for its susceptibility to process upsets and consequently its poor performance, making successful flotation control systems an elusive goal. The control of industrial flotation plants is often based on the visual appearance of the froth phase, and depends to a large extent on the experience and ability of a human operator. Machine vision systems provide a novel solution to several of the problems encountered in conventional flotation systems for monitoring and control. The rapid development of computer vision, computational resources and artificial intelligence, and the integration of these technologies, are creating new possibilities in the design and implementation of commercial machine vision systems for the monitoring and control of flotation plants. Machine vision systems are currently available, but not without shortcomings: owing to the segmentation techniques they employ, they cannot deal with fine froths, where the bubbles are very small, and these segmentation techniques are cumbersome and computationally expensive, making them slow in real-time operation.

The approach followed in this work uses neural networks to solve the problems mentioned above. Neural networks are able to extract information from images of the froth phase without regard to the type and structure of the froth. The parallel processing capability of neural networks, their ease of implementation, and the advantages of supervised or unsupervised training make them potentially suited to real-time industrial machine vision systems. In principle, neural network models can be implemented in an adaptive manner, so that changes in the characteristics of processes are taken into account.

This work documents the development of linear and non-linear principal component models, which can be used in a real-time machine vision system for the monitoring and control of froth flotation systems. Features were extracted from froth images via linear and non-linear principal component analysis. Conventional linear principal component analysis and three-layer autoassociative neural networks were used to extract linear principal components from the froth images. Non-linear principal components were extracted by three- and five-layer autoassociative neural networks, as well as by localised principal component analysis based on k-means clustering. Three principal components were extracted for each image, with the correlation coefficient used as a measure of the amount of variance captured by each principal component. The principal components were used to classify the froth images: a probabilistic neural network and a feedforward neural network classifier were developed for this purpose. Multivariate statistical process control models were developed using the linear and non-linear principal component models, and Hotelling's T2 statistic and the squared prediction error based on these models were used in the development of multivariate control charts.
It was found that the first three features extracted with autoassociative neural networks captured more variance in the froth images than conventional linear principal components. The features extracted by the five-layer autoassociative neural networks classified froth images more accurately than features extracted by conventional linear principal component analysis or by three-layer autoassociative neural networks. As applied, localised principal component analysis proved to be ineffective, owing to difficulties with the clustering of the high-dimensional image data. Finally, the use of multivariate statistical process control models to detect deviations from normal plant operation is discussed, and it is shown that Hotelling's T2 and squared prediction error control charts are able to clearly identify non-conforming plant behaviour.
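The monitoring scheme described in the abstract can be sketched with linear PCA as follows. This is an illustrative sketch assuming scikit-learn; the control limits, the image-feature extraction step and all names are assumptions of the sketch.

```python
# Sketch of multivariate statistical monitoring with linear PCA:
# Hotelling's T2 in the retained subspace and the squared prediction
# error (SPE) in the residual space. Three components, as in the study.
import numpy as np
from sklearn.decomposition import PCA

def fit_monitor(X_normal, n_components=3):
    """Fit PCA on features extracted from normal-operation froth images."""
    return PCA(n_components=n_components).fit(X_normal)

def t2_spe(pca, X):
    """Hotelling's T2 and SPE for new observations X (rows = images)."""
    scores = pca.transform(X)                        # projections t_i
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
    X_hat = pca.inverse_transform(scores)            # reconstruction from 3 PCs
    spe = np.sum((X - X_hat) ** 2, axis=1)           # squared prediction error
    return t2, spe
```

Observations whose T2 or SPE values exceed control limits estimated from normal-operation data would be flagged as non-conforming plant behaviour.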
198

Characterization and evaluation of a photostimulable phosphor x ray imaging system.

Yocky, David Alan. January 1988 (has links)
This dissertation presents the characterization and evaluation of a new radiological imaging modality, the Toshiba Computed Radiography (TCR) 201. The characteristics of the TCR storage phosphor imaging plates, such as energy-dependent x-ray quantum efficiency, stored-signal decay, low-exposure-rate signal build-up, and spontaneous and stimulated gain measures, are presented. The TCR 201 system is characterized by its signal transfer curve, total root-mean-squared (rms) output noise, signal-to-noise ratio, modulation transfer function (MTF), noise power spectrum (NPS) and detective quantum efficiency (DQE). The system rms noise is photon-limited for exposures of less than 1.0 mR, but has contributions from phosphor structure and quantization noise at higher exposures. The phosphor's information factor is shown to explain deviations from ideal photon-limited noise for exposures of less than 1.0 mR. The MTF of the system is measured for standard imaging plates (10% at 2.8 lp/mm) and for high-resolution imaging plates (10% at 4.4 lp/mm). An expression for the NPS is statistically derived; experimental measurements confirm the expression and show an increase in uncorrelated noise power above 1.0 mR, consistent with the rms measurements. Expressions for the DQE are presented. A psychophysical study is performed to directly compare the TCR to film/screen combinations in imaging low-contrast objects; the results show that the TCR provides better detectability as a function of exposure. The use of the TCR 201 as a two-dimensional dosimeter and in single-shot dual-energy subtraction is also presented.
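For context, a commonly used frequency-dependent expression for the DQE of a linear, shift-invariant detector (not the dissertation's own derivation), with large-area gain G and incident quantum fluence q̄, is

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}^{2}_{\mathrm{out}}(f)}{\mathrm{SNR}^{2}_{\mathrm{in}}(f)}
\;=\; \frac{\bar{q}\, G^{2}\, \mathrm{MTF}^{2}(f)}{\mathrm{NPS}(f)}
```

which ties together the MTF, NPS and exposure-dependent noise measurements reported above.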
199

A model-continuous specification and design methodology for embedded multiprocessor signal processing systems

Janka, Randall Scott 12 1900 (has links)
Thesis made openly available per email from author, August 2015.
200

Particle size and shape analysis of coarse aggregate using digital image processing

Mora, Carlos F. January 2000 (has links)
published_or_final_version / Civil Engineering / Doctoral / Doctor of Philosophy
