1.
Temporal Anti-Aliasing and Temporal Supersampling in Three-Dimensional Computer Generated Dynamic Worlds / Temporal anti-vikning och temporal supersampling i tredimensionella datorgenererade dynamiska världar. Stejmar, Carl. January 2016.
This master's thesis investigates and evaluates how a temporal component can help anti-aliasing reduce general spatial aliasing, preserve thin geometry, and achieve temporal stability in dynamic computer-generated worlds. Among spatial aliasing artifacts, geometric aliasing is the focus, but shading aliasing is also discussed. Two temporal approaches are proposed. One of the methods utilizes the previous frame while the other method uses four previous frames. This requires an efficient way of re-projecting pixels, so the thesis addresses that problem and its consequences as well. Further, the results show that the way samples are taken and accumulated in the proposed methods yields improvements that would not have been affordable in real-time applications without the temporal component. Thin geometry is preserved to a degree, but the proposed methods do not solve this problem for the general case. The temporal methods' image quality is evaluated against conventional anti-aliasing methods subjectively, by a survey, and objectively, by a numerical method not found elsewhere in anti-aliasing reports. Performance and memory consumption are also evaluated. The evaluation suggests that a temporal component for anti-aliasing can play an important role in increasing image quality and temporal stability without a substantial negative impact on performance, while consuming less memory.
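The core idea the abstract describes, re-projecting pixels from earlier frames and blending them with the current frame, can be sketched in a few lines. This is a generic exponential-history sketch in NumPy, not the thesis's actual methods: the whole-frame `motion` shift and the `alpha` weight are illustrative stand-ins for per-pixel motion vectors and the thesis's sample-accumulation schemes.

```python
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """Blend the current frame with a re-projected history frame.

    current, history : 2-D float arrays (grayscale frames)
    motion           : whole-frame (dy, dx) shift since the last frame,
                       a stand-in for per-pixel motion vectors
    alpha            : weight of the new sample (small = long history)
    """
    # Re-project: fetch each history pixel from where it was last frame.
    reprojected = np.roll(history, shift=motion, axis=(0, 1))
    return alpha * current + (1.0 - alpha) * reprojected

# Accumulating jittered/noisy samples of a static scene converges toward
# the clean signal -- the effect that makes supersampling "affordable":
rng = np.random.default_rng(0)
clean = np.ones((4, 4))
accum = clean + rng.normal(0.0, 0.5, (4, 4))   # noisy first frame
for _ in range(50):
    frame = clean + rng.normal(0.0, 0.5, (4, 4))
    accum = temporal_accumulate(frame, accum, motion=(0, 0))
```

With `alpha = 0.1`, the steady-state noise standard deviation drops to roughly a quarter of the per-frame value, at the cost of ghosting when the re-projection is wrong, which is why the thesis must handle re-projection failures explicitly.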
2.
Zero-Crossings and Spatiotemporal Interpretation in Vision. Poggio, Tomaso; Nielsen, Kenneth; Nishihara, Keith. 01 May 1982.
We will briefly outline a computational theory of the first stages of human vision according to which (a) the retinal image is filtered by a set of centre-surround receptive fields (of about 5 different spatial sizes) which are approximately bandpass in spatial frequency and (b) zero-crossings are detected independently in the output of each of these channels. Zero-crossings in each channel are then a set of discrete symbols which may be used for later processing such as contour extraction and stereopsis. A formulation of Logan's zero-crossing results is proved for the case of Fourier polynomials, and an extension of Logan's theorem to two-dimensional functions is also proved. Within this framework, we shall describe an experimental and theoretical approach (developed by one of us with M. Fahle) to the problem of visual acuity and hyperacuity of human vision. The positional accuracy achieved, for instance, in reading a vernier is astonishingly high, corresponding to a fraction of the spacing between adjacent photoreceptors in the fovea. Stroboscopic presentation of a moving object can be interpolated by our visual system into the perception of continuous motion, and this "spatio-temporal" interpolation can also be very accurate. It is suggested that the known spatiotemporal properties of the channels envisaged by the theory of visual processing outlined above implement an interpolation scheme which can explain human vernier acuity for moving targets.
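The centre-surround channels and per-channel zero-crossing detection described above can be sketched in one dimension. This is an illustrative sketch only: the difference-of-Gaussians with a 1.6 size ratio is a standard approximation to a Laplacian-of-Gaussian channel, and the channel sizes are arbitrary choices, not the paper's.

```python
import numpy as np

def dog_filter(signal, sigma):
    """Centre-surround bandpass: difference of two Gaussians (size
    ratio 1.6 approximates a Laplacian-of-Gaussian channel)."""
    r = int(4 * 1.6 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return np.convolve(signal, gauss(sigma) - gauss(1.6 * sigma), mode="same")

def zero_crossings(filtered):
    """Indices where the channel output changes sign."""
    return np.where(np.diff(np.signbit(filtered)))[0]

# A step edge produces a zero-crossing at the edge in every channel,
# yielding the discrete symbols used for later processing:
edge = np.r_[np.zeros(50), np.ones(50)]
for sigma in (1.0, 2.0, 4.0):          # three of the ~5 channel sizes
    zc = zero_crossings(dog_filter(edge, sigma))
```

Each channel reports the edge independently; coarser channels localize it more loosely, which is the raw material for the interpolation argument about hyperacuity.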
3.
Ultra wideband antenna array processing under spatial aliasing. Shapoury, Alireza. 15 May 2009.
Given a certain transmission frequency, the Shannon spatial sampling limit defines an upper bound for the antenna element spacing. Beyond this bound, the resulting ambiguity prevents correct estimation of the signal parameters (i.e., array manifold crossing). This spacing limit is inversely proportional to the frequency of transmission. Therefore, to meet a wider spectral support, the element spacing should be decreased. However, practical implementations of closely spaced elements result in a detrimental increase in electromagnetic mutual coupling among the sensors. Furthermore, decreasing the spacing reduces the array's angular resolution. In this dissertation, the problem of Direction of Arrival (DOA) estimation of broadband sources is addressed when the element spacing of a Uniform Linear Array (ULA) is inordinate. It is illustrated that one can resolve the aliasing ambiguity by utilizing the frequency diversity of the broadband sources. An algorithm, based on the Maximum Likelihood Estimator (MLE), is proposed to estimate the transmitted data signal and the DOA of each source. In the sequel, a subspace-based algorithm is developed and the problem of order estimation is discussed. The adopted signaling framework assumes a subband-hopping transmission in order to resolve the problem of source association and system identification. The proposed algorithms relax the stringent maximum element-spacing constraint of the arrays pertinent to the upper bound of the transmission frequency and suggest that, under some mild constraints, the element spacing can be conveniently increased. An approximate expression for the estimation error has also been developed to gauge the behavior of the proposed algorithms. Through confirmatory simulation, it is shown that the performance gain of the proposed setup is potentially significant, specifically when the transmitters are closely spaced and under low Signal-to-Noise Ratio (SNR), which makes it applicable to license-free communication.
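The ambiguity and its resolution by frequency diversity can be shown numerically. This is not the dissertation's MLE or subspace algorithm, just a minimal illustration of the grating-lobe ambiguity for a ULA whose spacing exceeds half a wavelength; the spacing, subband frequencies, and arrival angle are arbitrary choices.

```python
import numpy as np

c = 3.0e8                  # propagation speed (m/s)
d = 1.5                    # element spacing (m): beyond lambda/2 at both subbands
M = 8                      # number of array elements
m = np.arange(M)

def steering(theta_deg, f):
    """ULA steering vector at frequency f for arrival angle theta (degrees)."""
    return np.exp(-2j * np.pi * f * d * m * np.sin(np.radians(theta_deg)) / c)

f1, f2 = 2.0e8, 3.0e8      # two subbands of a broadband source (Hz)
theta_true = 20.0
lam2 = c / f2              # wavelength at f2 (1 m), so d = 1.5 * lam2

# Grating-lobe ambiguity at f2: any angle whose sine is shifted by an
# integer multiple of lam2 / d gives the identical steering vector:
theta_alias = np.degrees(np.arcsin(np.sin(np.radians(theta_true)) - lam2 / d))

same_at_f2 = np.allclose(steering(theta_true, f2), steering(theta_alias, f2))
same_at_f1 = np.allclose(steering(theta_true, f1), steering(theta_alias, f1))
# same_at_f2 is True (the two angles are indistinguishable at f2), while
# same_at_f1 is False: observing both subbands singles out the true angle.
```

Because the aliased angle changes with frequency while the true angle does not, jointly processing several subbands of a broadband source removes the ambiguity, which is the intuition behind relaxing the element-spacing constraint.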
4.
Multitaper Methods for Time-Frequency Spectrum Estimation and Unaliasing of Harmonic Frequencies. Moghtaderi, Azadeh. 05 February 2009.
This thesis is concerned with various aspects of stationary and nonstationary time series analysis. In the nonstationary case, we study estimation of the Wold-Cramér evolutionary spectrum, which is a time-dependent analogue of the spectrum of a stationary process. Existing estimators of the Wold-Cramér evolutionary spectrum suffer from several problems, including bias in boundary regions of the time-frequency plane, poor frequency resolution, and an inability to handle the presence of purely harmonic frequencies. We propose techniques to handle all three of these problems.
We propose a new estimator of the Wold-Cramér evolutionary spectrum (the BCMTFSE) which mitigates the first problem. Our estimator is based on an extrapolation of the Wold-Cramér evolutionary spectrum in time, using an estimate of its time derivative. We apply our estimator to a set of simulated nonstationary processes with known Wold-Cramér evolutionary spectra to demonstrate its performance.
We also propose an estimator of the Wold-Cramér evolutionary spectrum, valid for uniformly modulated processes (UMPs). This estimator mitigates the second problem by exploiting the structure of UMPs to improve the frequency resolution of the BCMTFSE. We apply this estimator to a simulated UMP with known Wold-Cramér evolutionary spectrum.
To deal with the third problem, one can detect and remove purely harmonic frequencies before applying the BCMTFSE. Doing so requires a consideration of the aliasing problem. We propose a frequency-domain technique to detect and unalias aliased frequencies in bivariate time series, based on the observation that aliasing manifests as nonlinearity in the phase of the complex coherency between a stationary process and a time-delayed version of itself. To illustrate this "unaliasing" technique, we apply it to simulated data and a real-world example of solar noon flux data. Thesis (Ph.D., Mathematics & Statistics), Queen's University, 2009.
5.
Ab initio derivation of the cascaded lattice Boltzmann automaton. Geier, Martin Christian. 2006.
Freiburg i. Br., Univ., Diss., 2006.
6.
Image processing on optimal volume sampling lattices: Thinking outside the box / Bildbehandling på optimala samplingsgitter: Att tänka utanför ramen. Schold Linnér, Elisabeth. January 2015.
This thesis summarizes a series of studies of how image quality is affected by the choice of sampling pattern in 3D. Our comparison includes the Cartesian cubic (CC) lattice, the body-centered cubic (BCC) lattice, and the face-centered cubic (FCC) lattice. Our studies of the Brillouin zones of lattices of equal density show that, while the CC lattice is suitable for functions with elongated spectra, the FCC lattice offers the least variation in resolution with respect to direction. The BCC lattice, however, offers the highest global cutoff frequency. The difference in behavior between the BCC and FCC lattices is negligible for a natural spectrum. We also present a study of pre-aliasing errors on anisotropic versions of the CC, BCC, and FCC sampling lattices, revealing that the optimal choice of sampling lattice is highly dependent on lattice orientation and anisotropy. We suggest a new reference function for studies of aliasing errors on alternative sampling lattices. This function has a spherical spectrum and a frequency content proportional to the distance from the origin, facilitating studies of pre-aliasing in the spatial domain. The accuracy of the anti-aliased Euclidean distance transform is improved by applying more sophisticated methods for computing the sub-spel precision term. We find that both accuracy and precision are higher on the BCC and FCC lattices than on the CC lattice. We compare the performance of several intensity-weighted distance transforms on MRI data, and find that the derived segmentation result, with respect to relative error in segmented volume, depends neither on the sampling lattice nor on the sampling density. Lastly, we present LatticeLibrary, an open source C++ library for processing of sampled data, supporting a number of common image processing methods for CC, BCC, and FCC lattices. We also introduce BccFccRaycaster, a tool for visualizing data sampled on CC, BCC, and FCC lattices.
We believe that the work summarized in this thesis provides both the motivation and the tools for continuing research on the application of the BCC and FCC lattices in image processing and analysis.
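The three lattices compared above are easy to model on the integer grid, which makes their differing neighbour geometry concrete. This is an illustrative sketch, not LatticeLibrary's representation: the BCC and FCC point sets below are scaled copies embedded in integer coordinates.

```python
import itertools
import numpy as np

def lattice_points(kind, n=4):
    """Integer-coordinate models of the three cubic sampling lattices.

    cc  : all integer points (i, j, k)
    bcc : points whose three coordinates share the same parity
    fcc : points whose coordinate sum is even
    """
    pts = np.array(list(itertools.product(range(n), repeat=3)), dtype=float)
    if kind == "cc":
        return pts
    if kind == "bcc":
        p = pts % 2
        return pts[(p[:, 0] == p[:, 1]) & (p[:, 1] == p[:, 2])]
    if kind == "fcc":
        return pts[pts.sum(axis=1) % 2 == 0]
    raise ValueError(kind)

def min_neighbor_distance(pts):
    """Smallest distance between two distinct lattice points."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d[d > 0].min()

# Nearest-neighbour distances come out as 1, sqrt(2), and sqrt(3) for the
# cc, fcc, and bcc models: three distinct neighbourhood geometries, which
# is what drives the resolution and pre-aliasing trade-offs studied above.
```

At matched sampling density (after rescaling each lattice), these neighbourhood differences are what give BCC its higher global cutoff frequency and FCC its more isotropic resolution.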
7.
Nulling the motion aftereffect with dynamic random-dot stimuli: limitations and implications. Keeble, David R.T.; Castet, E.; Verstraten, F. January 2002.
We used biased random-dot dynamic test stimuli to measure the strength of the motion aftereffect (MAE) to evaluate the usefulness of this technique as a measure of motion adaptation strength. The stimuli consisted of noise dots whose individual directions were random and of signal dots moving in a unique direction. All dots moved at the same speed. For each condition, the nulling percentage (percentage of signal dots needed to perceptually null the MAE) was scaled with respect to the coherence threshold (percentage needed to perceive the coherent motion of signal dots without prior adaptation). The increase of these scaled values with the density of dots in the test stimulus suggests that MAE strength is underestimated when measured with low densities. We show that previous reports of high nulling percentages at slow speeds do not reflect strong MAEs, but are actually due to spatio-temporal aliasing, which dramatically increases coherence thresholds. We further show that MAE strength at slow speed increases with eccentricity. These findings are consistent with the idea that using this dynamic test stimulus preferentially reveals the adaptation of a population of high-speed motion units whose activity is independent of adapted low-speed motion units.
8.
THE NEXT GENERATION AIRBORNE DATA ACQUISITION SYSTEMS. PART 1 - ANTI-ALIASING FILTERS: CHOICES AND SOME LESSONS LEARNED. Sweeney, Paul. October 2003.
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The drive towards higher accuracy and sampling rates has raised the bar for modern FTI signal conditioning. This paper focuses on the issue of anti-alias filtering. Today's 16-bit (and greater resolution) ADCs, coupled with the drive for optimum sampling rates, mean that filters have to be more accurate and yet more flexible than ever before. However, in order to take full advantage of these advances, it is important to understand the trade-offs involved and to correctly specify the system filtering requirements.
Trade-offs focus on:
• Analog vs. digital signal conditioning
• FIR vs. IIR digital filters
• Signal bandwidth vs. sampling rate
• Coherency issues such as filter phase distortion vs. delay
This paper will discuss each of these aspects. In particular, it will focus on some of the advantages of digital filtering over various analog filter techniques. This paper will also look at some ideas for specifying filter cut-off frequencies and characteristics.
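The FIR-vs-IIR and phase-distortion-vs-delay trade-offs in the list above can be made concrete with two toy filters. This is an illustrative sketch, not the paper's designs: a windowed-sinc FIR and a one-pole IIR smoother stand in for real anti-alias filters, and the sample rate, cutoff, and tap count are arbitrary.

```python
import numpy as np

fs, fc, ntaps = 1000.0, 100.0, 63      # sample rate, cutoff (Hz), FIR length

# Symmetric (linear-phase) FIR: windowed-sinc low-pass.
n = np.arange(ntaps) - (ntaps - 1) / 2
h = (2 * fc / fs) * np.sinc(2 * fc / fs * n) * np.hamming(ntaps)

# Group delay from the unwrapped phase of the frequency response:
H = np.fft.rfft(h, 4096)
freqs = np.fft.rfftfreq(4096, d=1 / fs)
w = 2 * np.pi * freqs / fs                         # rad/sample
gd_fir = -np.gradient(np.unwrap(np.angle(H)), w)   # flat at 31 samples in-band

# One-pole IIR smoother y[t] = (1 - a) x[t] + a y[t-1], for contrast:
a = 0.9
H_iir = (1 - a) / (1 - a * np.exp(-1j * w))
gd_iir = -np.gradient(np.unwrap(np.angle(H_iir)), w)
# gd_iir varies from about 9 samples at DC to well under a sample near
# Nyquist: different frequency components arrive with different delays.
```

The symmetric FIR delays every in-band component by exactly (ntaps - 1) / 2 samples, so the waveform shape is preserved and the delay can be compensated system-wide; the IIR is cheaper but smears timing across frequencies, which is exactly the coherency issue the paper flags.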
9.
A Time Correlated Approach to Adaptable Digital Filtering. Grossman, Hy; Pellarin, Steve. October 2006.
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California / Signal conditioning is a critical element in all data telemetry systems. Data from all sensors must be band-limited prior to digitization and transmission to prevent the potentially disastrous effects of aliasing. While the 6th-order analog low-pass Butterworth filter has long been the de facto standard for data channel filtering, advances in digital signal processing techniques now provide a potentially better alternative.
This paper describes the challenges in developing a flexible approach to adaptable data channel filtering using DSP techniques. Factors such as anti-alias filter requirements, time-correlated sampling, decimation, and filter delays will be discussed. Also discussed will be the implementation and relative merits and drawbacks of various symmetrical FIR and IIR filters. The discussion will be presented from an intuitive and practical perspective as much as possible.
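The interplay of decimation and filter delay mentioned above can be sketched directly. This is an illustrative sketch under stated assumptions, not the paper's implementation: a 9-tap moving average stands in for a designed anti-alias filter, and the ramp input is just a signal whose samples are easy to track through the pipeline.

```python
import numpy as np

def decimate_fir(x, taps, m):
    """Low-pass with a symmetric FIR, then keep every m-th sample.

    A symmetric FIR delays every in-band component by (len(taps) - 1) / 2
    input samples; reporting that delay lets the acquisition system keep
    channels with different filters and rates time-correlated.
    """
    y = np.convolve(x, taps, mode="full")[: len(x)]
    delay = (len(taps) - 1) // 2           # group delay, in input samples
    return y[::m], delay

# A ramp passes through a moving average unchanged apart from the delay:
taps = np.ones(9) / 9
x = np.arange(100, dtype=float)
y, delay = decimate_fir(x, taps, m=4)
# Once the filter has filled (4*j >= 8), each output sample equals the
# input sample it represents, shifted by the reported delay:
# y[j] == x[4*j - delay]
```

Time-stamping each decimated sample with `4*j - delay` input periods, rather than `4*j`, is what keeps differently filtered channels aligned, the "time correlated" approach of the title.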
10.
DATA ACQUISITION AND THE ALIASING PHENOMENON. Claflin, Ray, Jr.; Claflin, Ray, III. October 2001.
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In current practice, sensor data is digitized and input into computers, displays, and recorders. To reduce the volume of digitized data, our original hypothesis was that by selecting a subset of digital values from an over-sampled signal, we could improve signal identification and perhaps improve Nyquist performance. Our investigations did not lead to significant improvements, but they did clarify our thinking regarding the usage of digitized data.
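Why subset selection runs into the Nyquist limit can be shown in a few lines. This is a generic illustration, not the paper's experiment; the rates and tone frequencies are arbitrary choices.

```python
import numpy as np

fs_over = 400.0                    # over-sampled capture rate (Hz)
n = np.arange(800)
x = np.cos(2 * np.pi * 70.0 * n / fs_over)   # 70 Hz tone, comfortably sampled

# Keeping every 4th value drops the effective rate to 100 Hz -- below the
# 140 Hz Nyquist rate for this tone -- and the kept values are exactly what
# a 30 Hz tone sampled at 100 Hz would produce, so the retained subset
# cannot distinguish the two signals:
subset = x[::4]
alias = np.cos(2 * np.pi * 30.0 * np.arange(len(subset)) / 100.0)
```

Whatever selection rule is used, once the retained values fall below twice the signal bandwidth the discarded samples carried information that no processing of the subset can recover, consistent with the paper's negative result.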