  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
611

Rendering Antialiased Shadows using Warped Variance Shadow Maps

Lauritzen, Andrew Timothy January 2008 (has links)
Shadows contribute significantly to the perceived realism of an image, and provide an important depth cue. Rendering high quality, antialiased shadows efficiently is a difficult problem. To antialias shadows, it is necessary to compute partial visibilities, but computing these visibilities using existing approaches is often too slow for interactive applications. Shadow maps are a widely used technique for real-time shadow rendering. One major drawback of shadow maps is aliasing, because the shadow map data cannot be filtered in the same way as colour textures. In this thesis, I present variance shadow maps (VSMs). Variance shadow maps use a linear representation of the depth distributions in the shadow map, which enables the use of standard linear texture filtering algorithms. Thus VSMs can address the problem of shadow aliasing using the same highly-tuned mechanisms that are available for colour images. Given the mean and variance of the depth distribution, Chebyshev's inequality provides an upper bound on the fraction of a shaded fragment that is occluded, and I show that this bound often provides a good approximation to the true partial occlusion. For more difficult cases, I show that warping the depth distribution can produce multiple bounds, some tighter than others. Based on this insight, I present layered variance shadow maps, a scalable generalization of variance shadow maps that partitions the depth distribution into multiple segments. This reduces or eliminates an artifact - "light bleeding" - that can appear when using the simpler version of variance shadow maps. Additionally, I demonstrate exponential variance shadow maps, which combine moments computed from two exponentially-warped depth distributions. Using this approach, high quality results are produced at a fraction of the storage cost of layered variance shadow maps. 
These algorithms are easy to implement on current graphics hardware and provide efficient, scalable solutions to the problem of shadow map aliasing.
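The core VSM visibility test described above can be sketched in a few lines. This is a minimal illustration of the one-sided Chebyshev bound, not code from the thesis; the function name and the variance clamp are illustrative choices:

```python
def chebyshev_shadow_bound(mean, variance, t, min_variance=1e-6):
    """Upper bound on the fraction of occluders at depth >= t,
    given the mean and variance of the filtered depth distribution."""
    variance = max(variance, min_variance)  # clamp to avoid division issues
    if t <= mean:
        return 1.0  # receiver no farther than the mean occluder depth
    d = t - mean
    # One-sided Chebyshev inequality: P(x >= t) <= var / (var + d^2)
    return variance / (variance + d * d)
```

In a shader this value would be used directly as the light contribution; the thesis argues that the bound is often tight in practice, which is why it serves as a good approximation to the true partial occlusion.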
612

A Multi-scale Stochastic Filter Based Approach to Inverse Scattering for 3D Ultrasound Soft Tissue Characterization

Tsui, Patrick Pak Chuen January 2009 (has links)
The goal of this research is to achieve accurate characterization of multi-layered soft tissues in three dimensions using focused ultrasound. The characterization of the acoustic parameters of each tissue layer is formulated as recursive processes of forward and inverse scattering. Forward scattering deals with the modeling of focused ultrasound wave propagation in multi-layered tissues, and the computation of the focused wave amplitudes in the tissues based on the acoustic parameters of the tissue as generated by inverse scattering. The model for mapping the tissue acoustic parameters to focused waves is highly nonlinear and stochastic. In addition, solving (or inverting) the model to obtain tissue acoustic parameters is an ill-posed problem. Therefore, a nonlinear stochastic inverse scattering method is proposed such that no linearization or mathematical inversion of the model is required. Inverse scattering aims to estimate the tissue acoustic parameters based on the forward scattering model and ultrasound measurements of the tissues. A multi-scale stochastic filter (MSF) is proposed to perform inverse scattering. MSF generates a set of tissue acoustic parameters, which are then mapped into focused wave amplitudes in the multi-layered tissues by forward scattering. The tissue acoustic parameters are weighted by comparing their focused wave amplitudes to the actual ultrasound measurements. The weighted parameters are used to estimate a weighted Gaussian mixture as the posterior probability density function (PDF) of the parameters. This PDF is optimized to achieve minimum estimation error variance in the sense of the posterior Cramer-Rao bound. The optimized posterior PDF is used to produce minimum mean-square-error estimates of the tissue acoustic parameters. As a result, both the estimation error and uncertainty of the parameters are minimized. PDF optimization is formulated based on a novel multi-scale PDF analysis framework.
This framework is founded on exploiting the analogy between PDFs and analog (or digital) signals. PDFs and signals are similar in the sense that they represent characteristics of variables in their respective domains, except that there are constraints imposed on PDFs. Therefore, it is reasonable to consider a PDF as a signal that is subject to amplitude constraints, and to apply signal processing techniques to analyze it. The multi-scale PDF analysis framework is proposed to recursively decompose an arbitrary PDF from its fine to coarse scales. The recursive decompositions are designed so as to ensure that requirements such as PDF constraints, zero-phase shift and non-creation of artifacts are satisfied. The relationship between the PDFs at consecutive scales is derived in order for the PDF optimization process to recursively reconstruct the posterior PDF from its coarse to fine scales. At each scale, PDF reconstruction aims to reduce the variances of the posterior PDF Gaussian components, and as a result the confidence in the estimate is increased. The overall posterior PDF variance reduction is guided by the posterior Cramer-Rao bound. A series of experiments is conducted to investigate the performance of the proposed method on ultrasound multi-layered soft tissue characterization. Multi-layered tissue phantoms that emulate ocular components of the eye are fabricated as test subjects. Experimental results confirm that the proposed MSF inverse scattering approach is well suited for three-dimensional ultrasound tissue characterization. In addition, performance comparisons between MSF and a state-of-the-art nonlinear stochastic filter are conducted. Results show that MSF is more accurate and less computationally intensive than the state-of-the-art filter.
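The weighting step described above, in which candidate parameters are scored by comparing their forward-modeled amplitudes to the actual measurements, can be illustrated with a one-dimensional toy sketch. This is not the thesis's MSF; the Gaussian likelihood, the function names, and the single-measurement setting are assumptions for illustration:

```python
import math

def weighted_parameter_estimate(samples, measurement, forward_model, noise_std):
    """Score candidate parameters against a measurement through the
    forward model, then return the weighted (MMSE-style) mean."""
    weights = []
    for s in samples:
        pred = forward_model(s)
        # Gaussian likelihood of the measurement given this candidate
        w = math.exp(-0.5 * ((measurement - pred) / noise_std) ** 2)
        weights.append(w)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Mean of the weighted mixture as the point estimate
    return sum(w * s for w, s in zip(weights, samples))
```

With a linear toy forward model the weight mass concentrates on the candidate whose prediction matches the measurement, which is the intuition behind the mixture posterior in the abstract.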
613

ECG Noise Filtering Using Online Model-Based Bayesian Filtering Techniques

Su, Aron Wei-Hsiang January 2013 (has links)
The electrocardiogram (ECG) is a time-varying electrical signal that reflects the electrical activity of the heart. It is obtained non-invasively using surface electrodes and is used widely in hospitals. There are many clinical contexts in which ECGs are used, such as medical diagnosis, physiological therapy and arrhythmia monitoring. In medical diagnosis, medical conditions are interpreted by examining information and features in ECGs. Physiological therapy involves the control of some aspect of the physiological effort of a patient, such as the use of a pacemaker to regulate the beating of the heart. Arrhythmia monitoring involves observing and detecting life-threatening conditions, such as myocardial infarction (heart attack), in a patient. ECG signals are usually corrupted with various types of unwanted interference, such as muscle artifacts, electrode artifacts, power line noise and respiration interference, and are distorted in such a way that it can be difficult to perform medical diagnosis, physiological therapy or arrhythmia monitoring. Consequently, signal processing on ECGs is required to remove noise and interference for successful clinical applications. Existing signal processing techniques can remove some of the noise in an ECG signal, but are typically inadequate for extraction of the weak ECG components contaminated with background noise and for retention of various subtle features in the ECG. For example, noise from the electromyogram (EMG) usually overlaps the fundamental ECG cardiac components in the frequency domain, in the range of 0.01 Hz to 100 Hz. Simple filters are inadequate to remove noise which overlaps with the ECG cardiac components. Sameni et al. have proposed a Bayesian filtering framework to resolve these problems, and this gives results which are clearly superior to those obtained by applying conventional signal processing methods to the ECG.
However, a drawback of this Bayesian filtering framework is that it must run offline, which is not desirable for clinical applications such as arrhythmia monitoring and physiological therapy, both of which require online operation in near real-time. To resolve this problem, in this thesis we propose a dynamical model which permits the Bayesian filtering framework to function online. The framework with the proposed dynamical model has less than 4% loss in performance compared to the previous (offline) version of the framework. The proposed dynamical model is based on theory from fixed-lag smoothing.
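As a point of reference for the model-based Bayesian filtering discussed above, a minimal online scalar Kalman filter with a random-walk state model looks as follows. This is a generic building block that runs causally in one pass, not the ECG dynamical model of the thesis; the process and measurement noise parameters q and r are illustrative:

```python
def kalman_denoise(z, q=1e-4, r=0.05, x0=0.0, p0=1.0):
    """Online scalar Kalman filter under a random-walk state model.
    z is the noisy measurement sequence; returns the filtered estimates."""
    x, p, out = x0, p0, []
    for zk in z:
        p += q                  # predict: state variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x += k * (zk - x)       # update toward the new measurement
        p *= (1.0 - k)          # posterior variance shrinks after the update
        out.append(x)
    return out
```

Because each output depends only on past measurements, this kind of filter is suitable for the online, near real-time setting the thesis targets.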
614

Enhancement and Visualization of Vascular Structures in MRA Images Using Local Structure

Esmaeili, Morteza January 2010 (has links)
The method presented in this thesis uses quadrature filters to estimate a local orientation tensor, and exploits the tensor information to control 3D adaptive filters. The adaptive filters are applied to enhance Magnetic Resonance Angiography (MRA) images: they suppress noise in the volume dataset, so that the output provides a clean basis for segmenting blood vessel structures and for appropriate volume visualization. Tubular structures are extracted from the volume dataset using the quadrature filters. The local tensors are combined into a control tensor that steers the adaptive filtering, and by evaluating combinations of the tensor eigenvalues, local structures such as tubes and stenoses are extracted from the dataset. The method has been evaluated on synthetic objects: vessel models (for segmentation) and an onion-like object (for enhancement). Results on clinical images further validate the proposed method.
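The eigenvalue test for tubular structures can be illustrated with a small sketch. For a tube in a 3D orientation/structure tensor, the image gradients vary strongly in two directions and weakly along the tube axis, so the sorted eigenvalues satisfy lam1 ≈ lam2 >> lam3. This is a generic heuristic in the spirit of the tensor analysis described above, not the thesis's actual eigenvalue combination:

```python
import numpy as np

def tubular_measure(tensor, eps=1e-9):
    """Heuristic 'tube-likeness' in [0, 1] from the eigenvalues of a
    symmetric 3x3 structure tensor: high when lam1 ~ lam2 >> lam3."""
    lam = np.sort(np.linalg.eigvalsh(np.asarray(tensor, dtype=float)))[::-1]
    lam1, lam2, lam3 = lam  # lam1 >= lam2 >= lam3
    return float((lam2 - lam3) / (lam1 + eps))
```

A plane-like neighborhood (one dominant eigenvalue) and an isotropic one (all equal) both score near zero, which is what lets a measure of this kind single out vessels.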
615

Enhancement of X-ray Fluoroscopy Image Sequences using Temporal Recursive Filtering and Motion Compensation

Forsberg, Anni January 2006 (has links)
This thesis considers enhancement of X-ray fluoroscopy image sequences. The purpose is to investigate the possibilities to improve the image enhancement in Biplanar 500, a fluoroscopy system developed by Swemac Medical Appliances for use in orthopedic surgery. An algorithm based on recursive filtering (for temporal noise suppression) and motion compensation (for avoidance of motion artifacts) is developed and tested on image sequences from the system. The motion compensation is done both globally, using the shift theorem, and locally, by subtracting consecutive frames. A new type of contrast adjustment, obtained with a nonlinear mapping function, is also presented. The result is a noise-reduced image sequence that shows no blurring effects upon motion. A brief evaluation shows that both the image sequences produced by this algorithm and the contrast-adjusted images are preferred by orthopedists over the images currently produced by the system.
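The combination of recursive temporal averaging with a motion test can be sketched per pixel as follows. This is a schematic illustration (frames as flat lists of pixel intensities, a simple difference threshold as the motion test), not the algorithm implemented in the thesis:

```python
def temporal_recursive_filter(frames, alpha=0.15, motion_thresh=25):
    """IIR temporal averaging per pixel; where the frame difference
    exceeds the motion threshold, reset to the new frame instead of
    blending, to avoid motion blur (ghosting)."""
    acc = [float(v) for v in frames[0]]
    for frame in frames[1:]:
        for i, v in enumerate(frame):
            if abs(v - acc[i]) > motion_thresh:
                acc[i] = float(v)                # motion: trust the new frame
            else:
                acc[i] += alpha * (v - acc[i])   # static: keep averaging noise down
    return acc
```

Static pixels are averaged over many frames (strong noise suppression), while moving pixels follow the latest frame, which is the trade-off the abstract describes.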
616

Design of Fast Multidimensional Filters by Genetic Algorithms

Langer, Max January 2004 (has links)
The need for fast multidimensional signal processing arises in many areas. One of the more demanding applications is real time visualization of medical data acquired with e.g. magnetic resonance imaging, where large amounts of data can be generated. This data has to be reduced to relevant clinical information, either by image reconstruction and enhancement or by automatic feature extraction. Design of fast multidimensional filters has been a subject of research during the last three decades. Usually, methods for fast filtering are based on applying a sequence of filters of lower dimensionality, acquired by e.g. weighted low-rank approximation. Filter networks are a method to design fast multidimensional filters by decomposing them into simpler filter components whose coefficients are allowed to be sparsely scattered. Up until now, coefficient placement has been done by hand, a procedure which is time-consuming and difficult. The aim of this thesis is to investigate whether genetic algorithms can be used to place coefficients in filter networks. A method is developed and tested on 2-D filters, and the resulting filters have lower distortion values while maintaining the same number of coefficients as, or fewer than, filters designed with previously known methods.
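A genetic algorithm for coefficient placement of the kind described above can be sketched as follows: individuals are binary masks choosing which of the filter's taps may be nonzero, and a user-supplied cost function scores the filter built from each placement. This is a generic GA, not the method developed in the thesis; population size, uniform crossover and the sparsity-repair step are illustrative choices:

```python
import random

def ga_sparse_mask(cost, n_taps, n_active, pop_size=20, generations=40, seed=1):
    """Evolve binary masks with exactly n_active nonzero positions,
    minimizing cost(mask)."""
    rng = random.Random(seed)

    def repair(bits):
        # Enforce the sparsity budget by randomly adding/removing taps.
        bits = [int(b) for b in bits]
        ones = [i for i, b in enumerate(bits) if b]
        zeros = [i for i, b in enumerate(bits) if not b]
        rng.shuffle(ones)
        rng.shuffle(zeros)
        while len(ones) > n_active:
            bits[ones.pop()] = 0
        while len(ones) < n_active:
            i = zeros.pop()
            bits[i] = 1
            ones.append(i)
        return tuple(bits)

    pop = [repair(rng.random() < 0.5 for _ in range(n_taps))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # elitism: keep the best half
        children = [repair(a if rng.random() < 0.5 else b   # uniform crossover
                           for a, b in zip(*rng.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=cost)
```

In the thesis setting, the cost function would measure filter distortion for a given placement; here any callable on masks works.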
617

Implementation and Performance Analysis of Filternets

Einarsson, Henrik January 2006 (has links)
Today, image acquisition equipment produces huge amounts of data that need to be processed. Often the data describe signals with a dimensionality higher than 2, unlike ordinary images. This introduces a problem when it comes to processing such high-dimensional data, since ordinary signal processing tools are no longer suitable. New, faster and more efficient tools need to be developed to fully exploit the advantages of, e.g., a 3D CT scan. One such tool is filternets, a layered network-like structure through which the signal propagates. A filternet has three fundamental advantages that decrease the filtering time: the network structure allows complex filters to be decomposed into simpler ones, intermediate results may be reused, and filters may be implemented with very few nonzero coefficients (sparse filters). The aim of this study has been to create an implementation of filternets and optimize it with respect to execution time. In particular, the possibility of using filternets that approximate a harmonic filter set for estimating orientation in 3D signals is investigated. Tests show that this method is up to about 30 times faster than a full filter set consisting of dense filters. They also show a slightly larger error in the estimated orientation compared with the dense filters; this error should, however, not limit the usability of the method.
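The decomposition advantage mentioned above, a cascade of simple sparse filters reproducing a denser filter's response, can be shown with a tiny 1-D example. Representing a sparse filter as an offset-to-coefficient map is an illustrative choice, not the thesis's data structure:

```python
def sparse_filter(signal, taps):
    """Apply a sparse 1-D FIR filter given as {offset: coefficient};
    only the listed taps are touched, so cost scales with sparsity."""
    n = len(signal)
    out = [0.0] * n
    for off, c in taps.items():
        for i in range(n):
            j = i + off
            if 0 <= j < n:
                out[i] += c * signal[j]
    return out
```

Cascading the two-tap filter {0: 0.5, 1: 0.5} with itself reproduces the three-tap filter {0: 0.25, 1: 0.5, 2: 0.25}. For 1-D this saves nothing, but in higher dimensions, and when intermediate results are shared between several output filters as in a filternet, the same principle yields the reported speedups.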
618

Optimisation of a Diagnostic Test for a Truck Engine / Optimering av ett diagnostest för en lastbilsmotor

Haraldsson, Petter January 2002 (has links)
Diagnostic systems are becoming increasingly important within the field of vehicle systems, largely because new rules and regulations force manufacturers of heavy-duty trucks to monitor the emission processes in their engines throughout the lifetime of the truck. To do this, a diagnostic system has to be implemented that continuously monitors the process and checks that the emission thresholds set by the authorities are not exceeded. The system must also be reliable, i.e. produce neither false alarms nor missed detections. One way of building such a system is model-based diagnosis, where thresholds must be set to decide whether the system is faulty or not. This involves several difficulties. Firstly, there is no way of knowing whether the logged signals are corrupt, since faults in these very signals are what should be detected. Secondly, because of the strict reliability demands, the thresholds have to be set where there is a very low probability of observing values during fault-free driving. In this thesis, a methodology is proposed for setting thresholds in a diagnosis system for an experimental test engine at Scania. Measurement data was logged over 20 hours of effective driving on two individuals of the same engine type. It is shown that the result is improved significantly by using this method, and that the thresholds can be set so that smaller faults in the system can be reliably detected.
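The second difficulty above, placing a threshold where fault-free values are very unlikely, can be illustrated with an empirical-quantile rule on logged residuals. This is a simple stand-in for the thesis's methodology; the quantile rule and the false-alarm parameter are illustrative:

```python
def set_threshold(residuals, false_alarm_rate=1e-4):
    """Pick the detection threshold as the (1 - false_alarm_rate)
    empirical quantile of |residual| over fault-free driving data."""
    r = sorted(abs(x) for x in residuals)
    idx = min(len(r) - 1, int((1.0 - false_alarm_rate) * len(r)))
    return r[idx]
```

A lower false-alarm rate pushes the threshold outward (fewer false alarms, but smaller faults go undetected), which is exactly the reliability trade-off the abstract describes; more logged driving data lets the threshold be placed tighter for the same rate.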
619

Studie av integration mellan rategyron och magnetkompass / Study of sensor fusion of rategyros and magnetometers

Nilsson, Sara January 2004 (has links)
This master thesis is a study of how a rate gyro triad, an accelerometer triad, and a magnetometer triad can be integrated into a navigation system estimating a vehicle’s attitude, i.e. its roll, pitch, and heading angles. When only a rate gyro triad is used to estimate the attitude, a drift in the attitude occurs due to sensor errors. When an accelerometer triad and a magnetometer triad are used, a heading-dependent error appears in the vehicle’s heading, varying as a sine curve. By integrating these sensor triads, the sensor errors have been estimated with a filter to improve the accuracy of the estimated attitude. To investigate and evaluate the navigation system, a simulation model has been developed in Simulink/Matlab. The implementation uses a Kalman filter, in which the sensor fusion takes place. Simulations of different scenarios have been made, and the results show that the drift in the vehicle’s attitude is avoided.
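The drift-cancelling effect of fusing a drifting gyro with a noisy but driftless magnetometer can be illustrated on a single axis with a complementary filter. The thesis uses a Kalman filter, so this is a simplified stand-in; the blending gain k is an illustrative constant rather than a computed Kalman gain:

```python
def fuse_heading(gyro_rates, mag_headings, dt, k=0.02):
    """One-axis sensor fusion: integrate the gyro rate (accurate
    short-term, drifts long-term), then nudge the estimate toward
    the magnetometer heading (noisy, but drift-free)."""
    heading = mag_headings[0]
    out = []
    for rate, mag in zip(gyro_rates, mag_headings):
        heading += rate * dt              # gyro integration
        heading += k * (mag - heading)    # slow magnetometer correction
        out.append(heading)
    return out
```

With a constant gyro bias and a constant magnetometer reading, the fused heading settles at a small bounded offset instead of drifting without bound, which is the qualitative result the abstract reports for the Kalman filter.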
620

Robust Automotive Positioning: Integration of GPS and Relative Motion Sensors / Robust fordonspositionering: Integration av GPS och sensorer för relativ rörelse

Kronander, Jon January 2004 (has links)
Automotive positioning systems relying exclusively on the input from a GPS receiver, which is a line-of-sight sensor, tend to be sensitive to situations with limited sky visibility. Such situations include: urban environments with tall buildings; inside parking structures; underneath trees; in tunnels and under bridges. In these situations, the system has to rely on integration of relative motion sensors to estimate vehicle position. However, these sensor measurements are generally affected by errors such as offsets and scale factors that cause the resulting position accuracy to deteriorate rapidly once GPS input is lost. The approach in this thesis is to use a GPS receiver in combination with low-cost sensor equipment to produce a robust positioning module. The module should be capable of handling situations where GPS input is corrupted or unavailable. The working principle is to calibrate the relative motion sensors while GPS is available, to improve the accuracy during GPS outages. To fuse the GPS information with the sensor outputs, different models have been proposed and evaluated on real data sets. These models tend to be nonlinear, and have therefore been processed in an Extended Kalman Filter structure. Experiments show that the proposed solutions can compensate for most of the errors associated with the relative motion sensors, and that the resulting positioning accuracy is improved accordingly.
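The working principle above, calibrate while GPS is available and then dead-reckon through the outage, can be sketched for a single odometer scale factor with a least-squares fit. This toy example stands in for the thesis's Extended Kalman Filter; the function names and the 1-D setting are illustrative:

```python
def calibrate_scale(gps_distances, wheel_counts):
    """Least-squares scale factor for distance ~ scale * wheel_count,
    fitted while GPS ground truth is available."""
    num = sum(d * c for d, c in zip(gps_distances, wheel_counts))
    den = sum(c * c for c in wheel_counts)
    return num / den

def dead_reckon(start, wheel_counts, scale):
    """During a GPS outage, propagate position from wheel counts
    using the previously calibrated scale factor."""
    pos, track = start, []
    for c in wheel_counts:
        pos += scale * c
        track.append(pos)
    return track
```

An uncalibrated scale factor makes the dead-reckoned error grow with every wheel revolution, which is why calibrating it online while GPS is healthy keeps accuracy from deteriorating during outages.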
