501

INTEGRATED CAMERAS AS A REPLACEMENT FOR VEHICULAR MIRRORS

Clark, Nicholas, Dunne, Fiona 10 1900 (has links)
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Drivers’ visibility is an area of automobile safety that has seen very limited improvement over the past several decades. Limited visibility is responsible for many car accidents across America. Mirrors require constant readjustment and are easily blocked. There is currently considerable interest in reducing or eliminating mirrors on cars; one such method is a network of wide-angle cameras mounted on the vehicle’s rear. Using real-time video processing, the data from several cameras can be spliced together and displayed on the vehicle’s dashboard in an intuitive, easy-to-understand fashion that a driver can check quickly without turning away from the road. This has extensive applications for light armored vehicles in the military, as well as for automotive designers today.
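The abstract describes the splicing step only at a high level; a minimal sketch of the idea, assuming OpenCV, three rear-mounted cameras exposed as local video devices, and a naive side-by-side splice (the device indices, display height, and omission of warping/blending are illustrative assumptions, not details from the paper):

```python
import cv2
import numpy as np

# Hypothetical device indices for three rear-mounted wide-angle cameras.
CAMERA_IDS = [0, 1, 2]
DISPLAY_HEIGHT = 240  # dashboard display height (illustrative)

caps = [cv2.VideoCapture(i) for i in CAMERA_IDS]

def grab_panorama(caps):
    """Grab one frame per camera and splice them side by side."""
    strips = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            return None  # a camera dropped a frame; skip this cycle
        h, w = frame.shape[:2]
        scale = DISPLAY_HEIGHT / h
        strips.append(cv2.resize(frame, (int(w * scale), DISPLAY_HEIGHT)))
    # Naive splice: concatenate the rescaled strips left to right.
    # A production system would warp and blend overlapping fields of view.
    return np.hstack(strips)

while True:
    pano = grab_panorama(caps)
    if pano is not None:
        cv2.imshow("rear view", pano)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```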
502

Foreground detection of video through the integration of novel multiple detection algorithms

Nawaz, Muhammad January 2013 (has links)
The main outcome of this research is the design of a foreground detection algorithm that is more accurate and less time-consuming than existing algorithms. By accuracy we mean an exact mask (one that satisfies the respective ground-truth value) of the foreground object(s). Motion detection, the first component of the foreground detection process, can be achieved via pixel-based or block-based methods, each with its own merits and disadvantages. Pixel-based methods are accurate but time-consuming, so they cannot be recommended for real-time applications. Block-based motion estimation, on the other hand, is less accurate but faster, and thus suited to real-time applications. In the first proposed algorithm, block-based motion estimation is chosen for timely execution. To overcome the resulting loss of accuracy, a morphological technique called opening-and-closing by reconstruction is adopted; being a pixel-based operation it produces higher accuracy, yet it requires little execution time. Opening-and-closing by reconstruction finds the maxima and minima inside the foreground object(s), so this simultaneous process compensates for the lower accuracy of block-based motion estimation. To verify the efficiency of the algorithm, a complex video containing multiple colours and both fast and slow motion in various places was selected. Based on 11 different performance measures, the proposed algorithm achieved an average accuracy more than 24.73% higher than that of four well-established algorithms.

Background subtraction, the most cited algorithm for foreground detection, faces the major problem of choosing a proper threshold value at run time. In the second proposed algorithm, motion, the primary component of the foreground detection process, is used to set this threshold effectively at run time. For this purpose the smoothed histogram peaks and valleys of the motion are analysed; these reflect the fast- and slow-motion areas of the moving object(s) in a given frame, and the threshold value is generated at run time by exploiting the values of the peaks and valleys. This algorithm was tested on four recommended video sequences, including indoor and outdoor shots, and compared with five highly ranked algorithms. Based on standard performance measures, it achieved on average more than 12.30% higher accuracy.
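Opening-and-closing by reconstruction is a standard morphological operation; a minimal sketch using scikit-image, assuming a grayscale frame and a disk structuring element (the radius and the choice of scikit-image are illustrative assumptions, not the thesis implementation):

```python
import numpy as np
from skimage.morphology import erosion, dilation, disk, reconstruction

def open_close_by_reconstruction(gray, radius=3):
    """Morphological opening- then closing-by-reconstruction of a
    grayscale frame; preserves object shapes better than plain
    opening/closing because reconstruction restores the original
    contours of the structures that survive the erosion/dilation."""
    selem = disk(radius)
    # Opening by reconstruction: erode, then reconstruct by dilation
    # under the original image (recovers the regional maxima support).
    seed = erosion(gray, selem)
    opened = reconstruction(seed, gray, method="dilation")
    # Closing by reconstruction: dilate, then reconstruct by erosion
    # above the opened image (fills the regional minima).
    seed = dilation(opened, selem)
    closed = reconstruction(seed, opened, method="erosion")
    return closed

# Example: refine one frame's coarse foreground estimate.
frame = np.random.rand(240, 320)  # stand-in for a video frame
refined = open_close_by_reconstruction(frame)
```

In the pipeline described above, this refinement would be applied to the coarse block-based motion output, restoring pixel-level contours at modest cost.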
503

A blackboard-based system for learning to identify images from feature data

Norman, Margaret January 1991 (has links)
A blackboard-based system which learns recognition rules for objects from a set of training examples, and then identifies and locates those objects in test images, is presented. The system is designed to use data from a feature matcher developed at R.S.R.E. Malvern, which finds the best matches for a set of feature patterns in an image. The feature patterns are selected to correspond to typical object parts which occur with relatively consistent spatial relationships and are sufficient to distinguish the objects to be identified from one another. The learning element of the system develops two separate sets of rules, one to identify possible object instances and the other to attach probabilities to them. The search for possible object instances is exhaustive; its scale is not great enough for pruning to be necessary. Separate probabilities are established empirically for all combinations of features which could represent object instances. As accurate probabilities cannot be obtained from a set of preselected training examples, they are updated by feedback from the recognition process. The incorporation of rule induction and feedback into the blackboard system is achieved by treating the induced rules as data to be held on a secondary blackboard. The single recognition knowledge source effectively contains empty rules into which this data can be slotted, allowing it to recognise any number of objects; there is no need to develop a separate knowledge source for each object. Additional object-specific background information to aid identification can be added by the user in the form of background checks to be carried out on candidate objects. The system has been tested using synthetic data, and successfully identified combinations of geometric shapes (squares, triangles, etc.). Limited tests on photographs of vehicles travelling along a main road were also performed successfully.
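The feedback-driven probability update lends itself to a very small sketch; the class below is a hypothetical stand-in for the rule data held on the secondary blackboard (the names, frozenset keying, and Laplace smoothing are all assumptions for illustration, not the thesis design):

```python
from collections import defaultdict

class RecognitionRules:
    """Empirically updated identification probabilities for feature
    combinations, refined by feedback from the recognition process."""

    def __init__(self):
        # counts[combo] = [times proposed, times confirmed]
        self.counts = defaultdict(lambda: [0, 0])

    def probability(self, combo):
        proposed, confirmed = self.counts[frozenset(combo)]
        # Laplace smoothing gives unseen combinations a weak prior.
        return (confirmed + 1) / (proposed + 2)

    def feedback(self, combo, was_correct):
        """Update the empirical probability from recognition feedback,
        since preselected training examples alone cannot yield
        accurate probabilities."""
        entry = self.counts[frozenset(combo)]
        entry[0] += 1
        entry[1] += int(was_correct)

rules = RecognitionRules()
rules.feedback({"corner", "edge_pair"}, was_correct=True)
print(rules.probability({"corner", "edge_pair"}))  # 0.666...
```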
504

Performance characteristics of BGO multicrystal block detectors used in positron emission tomography

Mesbahi, Mohammad Esmail January 1996 (has links)
Positron Emission Tomography (PET) serves a unique and important role in medical research because it permits non-invasive quantitative study of biological processes using radionuclides of naturally occurring elements. In the last decade, the imaging properties of PET have improved significantly because of a better understanding of the design principles and the introduction of novel concepts. One such development has been the 'block' detector system, consisting of an array of scintillation crystals coupled to a relatively small number of photomultiplier tubes (PMTs). The particular element in the block is identified by comparing the outputs from the PMTs. The block provides the basic unit of the detector rings in modern PET cameras. The prototype block detector system employed in this study incorporates the CTI 831 detector module (49.47 mm wide by 53.36 mm tall by 30 mm deep). This is segmented into a matrix of 8 by 4 crystal elements, each 5.62 mm (transaxially) by 12.86 mm (axially) by 30 mm (deep), and coupled to four square PMTs. The drive towards improved image quality in PET has prompted the development of even smaller crystals, promising 'high resolution' multiplane imaging. While these detectors have significant advantages over other detectors, the aim of this study was to investigate the physical performance of this specific block detector and to assess how its limitations affect the information obtained from it. The system investigated offered a coincidence time resolution of 5.8 ± 0.3 ns FWHM for a pair of block detectors, an individual crystal energy resolution of 19% ± 3% FWHM at 511 keV, a maximum intrinsic efficiency of 45.7% ± 0.5% and a column transaxial resolution of 4.2 ± 0.4 mm FWHM, offering important immediate advantages. However, the drawback in the current implementation scheme is the non-uniformity across the detector face. The variations of efficiency, energy resolution and spatial resolution for the individual detector crystals across the face of the detector block were investigated, the factors contributing to these variations were identified, and suggestions for reducing their effects were made. For example, inter-detector scattering was found to be a problem that leads to mispositioning of detected events. Different techniques for evaluating the amount and distribution of inter-detector scattering, and consequently for removing it, were established. Finally, these block detectors offer other possibilities, such as gamma-gamma coincidence imaging, attractive for adaptation to neutron-induced gamma-ray emission tomography (NIGET); this would reduce the long scanning times presently required. However, the 'electronic collimation' provided by the coincidence detection of the two annihilation photons along the line of response between the opposing detectors is then lost, and imaging of the cascade gamma rays necessitates physical collimation to define a plane through the object. This reduces the absolute efficiency of the system from 2.28 × 10^-2 ± 2 × 10^-4 to 7.3 × 10^-6 ± 3 × 10^-7 when collimation is used on both sides, but improves the spatial resolution from 5.7 ± 0.2 mm to 2.4 ± 0.2 mm. If a collimator is used on one side only, the spatial resolution obtained (3.8 ± 0.2 mm) is comparable to that of a Ge detector with a 1 mm diameter hole collimator, and the absolute efficiency of the system (1.1 × 10^-4 ± 3 × 10^-5) is many times better.
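Identifying the fired crystal by comparing PMT outputs is conventionally done with Anger-style ratio arithmetic; a minimal sketch for an 8 by 4 block, assuming four PMT signals and evenly spaced decision boundaries (real systems use measured flood-map look-up tables, and nothing below is specific to the CTI 831):

```python
import numpy as np

def crystal_index(pmt, lut_x, lut_y):
    """Identify which crystal in the 8 x 4 block fired, from the four
    PMT outputs, using Anger-style ratio arithmetic. `pmt` holds the
    signals (a, b, c, d) of the four square PMTs; `lut_x`/`lut_y` are
    calibration boundaries mapping the ratios onto crystal columns
    and rows (the values below are illustrative)."""
    a, b, c, d = pmt
    total = a + b + c + d
    x = (a + b - c - d) / total  # transaxial position estimate
    y = (a + c - b - d) / total  # axial position estimate
    col = int(np.searchsorted(lut_x, x))  # 0..7 of the 8 columns
    row = int(np.searchsorted(lut_y, y))  # 0..3 of the 4 rows
    return row, col

# Illustrative, evenly spaced decision boundaries.
lut_x = np.linspace(-1, 1, 9)[1:-1]  # 7 boundaries -> 8 columns
lut_y = np.linspace(-1, 1, 5)[1:-1]  # 3 boundaries -> 4 rows
print(crystal_index((0.40, 0.30, 0.20, 0.10), lut_x, lut_y))
```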
505

Post formation processing of cardiac ultrasound data for enhancing image quality and diagnostic value

Perperidis, Antonios January 2011 (has links)
Cardiovascular diseases (CVDs) constitute a leading cause of death, including premature death, in the developed world. The early diagnosis and treatment of CVDs is therefore of great importance. Modern imaging modalities enable the quantification and analysis of the cardiovascular system and provide researchers and clinicians with valuable tools for the diagnosis and treatment of CVDs. In particular, echocardiography offers a number of advantages compared to other imaging modalities, making it a prevalent tool for assessing cardiac morphology and function. However, cardiac ultrasound images can suffer from a range of artifacts that reduce their image quality and diagnostic value. As a result, there is great interest in the development of processing techniques that address such limitations. This thesis introduces and quantitatively evaluates four methods that enhance clinical cardiac ultrasound data by utilising information which has until now been predominantly disregarded. All methods introduced in this thesis utilise multiple partially uncorrelated instances of a cardiac cycle in order to acquire the information required to suppress or enhance certain image features. No filtering out of information is performed at any stage of the processing; this is the main difference from previous data enhancement approaches, which tend to filter out information based on some static or adaptive selection criterion. The first two image enhancement methods utilise spatial averaging of partially uncorrelated data acquired through a single acoustic window. More precisely, Temporal Compounding enhances cardiac ultrasound data by averaging partially uncorrelated instances of the imaged structure acquired over a number of consecutive cardiac cycles. 3D-to-2D Compounding extends the notion of spatial compounding of cardiac ultrasound data: it acquires and compounds spatially adjacent (along the elevation plane), partially uncorrelated 2D slices of the heart, extracted as a thin angular sub-sector of a volumetric pyramid scan. The data enhancement introduced by both approaches includes the substantial suppression of tissue speckle and cavity noise. Furthermore, by averaging decorrelated instances of the same cardiac structure, both compounding methods can enhance tissue structures that are masked by high levels of noise and shadowing, increasing their tissue/cavity detectability. The third novel data enhancement approach, referred to as Dynamic Histogram Based Intensity Mapping (DHBIM), investigates the temporal variations within the image histograms of consecutive frames in order to (i) identify any unutilised or underutilised intensity levels and (ii) derive the tissue/cavity intensity threshold within the processed frame sequence. Piecewise intensity mapping is then used to enhance the cardiac ultrasound data. DHBIM introduces cavity noise suppression and enhancement of tissue speckle information, as well as a considerable increase in tissue/cavity contrast and detectability. A data acquisition and analysis protocol for integrating the dynamic intensity mapping with the spatial compounding methods is also investigated. The linear integration of DHBIM and Temporal Compounding forms the fourth and final implemented method, which is also quantitatively assessed.
By taking advantage of the benefits and compensating for the limitations of each individual method, the integrated method suppresses cavity noise and tissue speckle while enhancing tissue/cavity contrast as well as the delineation of cardiac tissue boundaries even when heavily corrupted by cardiac ultrasound artifacts. Finally, a novel protocol for the quantitative assessment of the effect of each data enhancement method on image quality and diagnostic value is employed. This enables the quantitative evaluation of each method as well as the comparison between individual methods using clinical data from 32 patients. Image quality is assessed using a range of quantitative measures such as signal-to-noise ratio, tissue/cavity contrast and detectability index. Diagnostic value is assessed through variations in the repeatability level of routine clinical measurements performed on patient cardiac ultrasound scans by two experienced echocardiographers. Commonly used clinical measures such as the wall thickness of the Interventricular Septum (IVS) and the Left Ventricle Posterior Wall (LVPW) as well as the cavity diameter of the Left Ventricle (LVID) and Left Atrium (LAD) are employed for assessing diagnostic value.
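Temporal compounding as described reduces to gated averaging across cycles; a minimal sketch, assuming frames are already ECG-gated so each cycle starts at a known index (the indexing scheme and NumPy representation are assumptions, not the thesis protocol):

```python
import numpy as np

def temporal_compound(frames, cycle_starts, phase):
    """Average the frame at a fixed cardiac phase over several
    consecutive cycles.

    frames       : array of shape (n_frames, h, w)
    cycle_starts : frame indices where each cardiac cycle begins
    phase        : frame offset within a cycle to compound
    """
    picks = [frames[s + phase] for s in cycle_starts
             if s + phase < len(frames)]
    # Averaging partially uncorrelated instances of the same structure
    # suppresses speckle and cavity noise roughly as 1/sqrt(N).
    return np.mean(picks, axis=0)

# Illustrative use: 5 cycles of 30 frames each, compound end-diastole.
frames = np.random.rand(150, 240, 320)
compounded = temporal_compound(frames, cycle_starts=range(0, 150, 30),
                               phase=0)
```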
506

Texture mapping architectures for high performance image generation

Dunnett, Graham J. January 1994 (has links)
No description available.
507

Local and global interpretation of moving images

Scott, G. L. January 1986 (has links)
No description available.
508

Interpreting images of a known moving object

Hogg, D. C. January 1984 (has links)
No description available.
509

Linearisation of analogue to digital and digital to analogue converters

Dent, Alan Christopher January 1990 (has links)
No description available.
510

Digital signal processing for the analysis of fetal breathing movements

Ansourian, Megeurditch N. January 1989 (has links)
No description available.
