271 | Improving Design Strategies for Composite Pavement Overlay: Multi-layered Elastic Approach and Reliability Based Models. Sigdel, Pawan. January 2016.
No description available.

272 | Detection and Correction of Global Positioning System Carrier Phase Measurement Anomalies. Achanta, Raghavendra. 14 July 2004.
No description available.

273 | Mean curvature mapping: application in laser refractive surgery. Tang, Maolong. 12 October 2004.
No description available.

274 | MRI-Based Attenuation Correction for PET Reconstruction. Steinberg, Jeffrey. 12 September 2008.
No description available.

275 | Manipulation of Cognitive Biases and Rumination: An Examination of Single and Combined Correction Conditions. Adler, Abby Danielle. 12 September 2008.
No description available.

276 | The Effects of the Copy, Cover, and Compare Strategy on the Acquisition, Maintenance, and Generalization of Spelling Sight Words for Elementary Students with Disabilities. Moser, Lauren Ashley. 16 September 2009.
No description available.

277 | Characterization of Center-of-Mass and Rebinning in Positron Emission Tomography with Motion / Karaktärisering av masscentrum och händelseuppdatering i positronemissionstomografi med rörelse. Hugo, Linder. January 2021.
Medical molecular imaging with positron emission tomography (PET) is sensitive to patient motion because PET scans last several minutes. Despite advances in PET, such as improved photon-pair time-of-flight (TOF) resolution, motion-induced deformations limit image resolution and quantification. Previous research on head motion tracking has produced the data-driven centroid-of-distribution (COD) algorithm. COD generates a 3D center-of-mass (COM) trajectory over time from raw list-mode PET data, which can guide motion correction such as gating and event rebinning in non-TOF PET. Knowledge gaps: COD could potentially benefit from the sinogram corrections used in image reconstruction, while rebinning has not been extended to TOF PET. Methods: This study develops COD with event mass weighting (incorporating randoms correction and line-of-response (LOR) normalization) and a simplistic TOF rebinner. In scans of phantoms and moving heads with fluorine-18 fluorodeoxyglucose (FDG) tracer, the COD alternatives are evaluated with a signal-to-noise ratio (SNR) obtained from a linear fit to the image-derived COM, while rebinning is evaluated with the mean squared error (MSE). Results: COD SNR did not benefit from the corrected event mass. The prototype TOF rebinner reduced the MSE, although the simplistic design caused discretization errors and event loss at extreme LOR and TOF bins, which introduced image artifacts. In conclusion, corrected event mass in COD is not promising, while TOF rebinning appears viable if techniques from state-of-the-art LOR rebinning are incorporated.
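For readers unfamiliar with the COD idea, a minimal center-of-mass sketch is given below. It is not the thesis's implementation: the list-mode field names ('t', 'x', 'y', 'z', 'w'), the fixed-length time frames, and the use of a single 3D position per event are all illustrative assumptions.

```python
import numpy as np

def centroid_of_distribution(events, frame_length_s=1.0):
    """Center-of-mass (COM) trace over time from list-mode PET events.

    `events` is assumed to be a NumPy structured array with fields
    't' (seconds), 'x', 'y', 'z' (event position in mm, e.g. an LOR
    midpoint) and 'w' (event mass, e.g. 1.0 or a randoms/normalization
    corrected weight). These names are illustrative, not the thesis format.
    """
    t0 = events['t'].min()
    edges = np.arange(t0, events['t'].max() + frame_length_s, frame_length_s)
    frame = np.digitize(events['t'], edges) - 1        # frame index per event

    trace = np.full((len(edges) - 1, 3), np.nan)
    for k in range(trace.shape[0]):
        sel = frame == k
        if not np.any(sel):
            continue                                   # empty frame -> leave NaN
        w = events['w'][sel]
        for j, ax in enumerate(('x', 'y', 'z')):
            trace[k, j] = np.average(events[ax][sel], weights=w)
    return trace                                       # shape (n_frames, 3)
```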

278 | Attenuation Correction in Positron Emission Tomography Using Single Photon Transmission Measurement. Dekemp, Robert A. 09 1900.
Accurate attenuation correction is essential for quantitative positron emission tomography. Typically, this correction is based on a coincidence transmission measurement using an external positron-emitting source positioned close to the detectors. This technique suffers from poor statistical quality and high dead-time losses, especially at high transmission source strengths.

We have proposed and tested the use of a single photon transmission measurement with a rotating rod source to measure the attenuation correction factors (ACFs). The singles projections are resampled into the coincidence geometry using the detector positions and the rod source location. A nonparalyzable dead-time correction algorithm was developed for the block detectors used in the McMaster PET scanner.

Transaxial resolution is approximately 6 mm, which is comparable to emission scanning performance. Axial resolution is about 25 mm with only crude source collimation. ACFs are underestimated by approximately 10% compared to coincidence transmission scanning, due to increased cross-plane scatter. Effective source collimation is necessary to obtain suitable axial resolution and improved accuracy. The response of the correction factors to object density is linear to within 15% when singles transmission measurement is compared to the current coincidence transmission measurement.

The major advantage of using singles transmission measurement is a dramatically increased count rate. A factor of seven increase in count rate over coincidence scanning is possible with a 2 mCi transmission rod source. No randoms are counted in singles transmission scans, which makes the measured count rate nearly linearly proportional to source activity. Singles detector dead time is approximately 6% in the detectors opposite a 2 mCi rod source.

Present hardware and software preclude the application of this technique in a clinical environment. We anticipate that real-time acquisition of detector singles can reduce the transmission scanning time to under 2 minutes and produce attenuation coefficient images with under 2% noise. This is a significant improvement compared to the current coincidence transmission technique. / Thesis / Master of Science (MS)
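As a rough illustration of the two corrections described in this abstract, the sketch below applies the standard nonparalyzable dead-time model and then forms attenuation correction factors as the ratio of blank-scan to transmission-scan rates. The dead-time constant, the per-bin count arrays, and the omission of the singles-to-coincidence resampling step are assumptions made for brevity, not details taken from the thesis.

```python
import numpy as np

def deadtime_correct(measured_rate, tau):
    """Nonparalyzable dead-time model: true rate n = m / (1 - m * tau),
    where m is the measured rate and tau the effective dead time per event."""
    return measured_rate / (1.0 - measured_rate * tau)

def attenuation_correction_factors(blank_counts, trans_counts, t_live, tau=2e-6):
    """ACF per projection bin from a blank scan and a transmission scan.

    blank_counts, trans_counts: counts per bin acquired over live time t_live (s).
    tau: assumed dead-time constant (s); 2 microseconds is a placeholder,
    not a value from the thesis. ACF = I0 / I after dead-time correction.
    The resampling of singles projections into coincidence LORs is omitted.
    """
    blank_rate = deadtime_correct(np.asarray(blank_counts, float) / t_live, tau)
    trans_rate = deadtime_correct(np.asarray(trans_counts, float) / t_live, tau)
    return blank_rate / np.maximum(trans_rate, 1e-12)
```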

279 | Energy-efficient custom integrated circuit design of universal decoders using noise-centric GRAND algorithms. Riaz, Arslan. 24 May 2024.
Whenever data is stored or transmitted, it inevitably encounters noise that can corrupt it. Communication technologies therefore rely on decoding the data with Error Correcting Codes (ECC), which enable the rectification of noise to retrieve the original message. Maximum Likelihood (ML) decoding has proven to be optimally accurate, but it has not been adopted because its computational complexity precludes a feasible implementation: ML decoding of arbitrary linear codes is a Nondeterministic Polynomial-time hard (NP-hard) problem. As a result, many code-specific decoders have been developed as approximations of an ML decoder. This code-centric decoding approach leads to hardware implementations that are tightly coupled to a specific code structure. The recently proposed Guessing Random Additive Noise Decoding (GRAND) offers a solution by establishing a noise-centric decoding approach, thereby making it a universal ML decoder. Both the soft-detection and hard-detection variants of GRAND have been shown to be capacity-achieving for any moderate-redundancy arbitrary code.
This thesis claims that GRAND can be efficiently implemented in hardware with low complexity while offering significantly higher energy efficiency than state-of-the-art code-centric decoders. In addition to being hardware-friendly, GRAND offers a degree of parallelizability that can be chosen according to the throughput requirement, making it flexible for a wide range of applications. To support this claim, this thesis presents custom-designed, energy-efficient integrated circuits and hardware architectures for the family of GRAND algorithms. The universality of the algorithm is demonstrated through measurements across various codebooks under different channel conditions. Furthermore, we employ the noise-recycling technique in both hard-detection and soft-detection scenarios to improve decoding by exploiting temporal noise correlations. Using the fabricated chips, we demonstrate that employing noise recycling with GRAND significantly reduces energy and latency while providing additional gains in decoding performance.
Efficient integrated architectures for GRAND will significantly reduce hardware complexity while future-proofing a device so that it can decode any forthcoming code. The noise-centric decoding approach removes the need for code standardization, making it adaptable to a wide range of applications. A single GRAND chip can replace all existing decoders, offering competitive decoding performance together with significantly higher energy and area efficiency. / 2026-05-23T00:00:00Z
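As a point of reference for how noise-centric decoding works, here is a minimal hard-detection GRAND sketch over GF(2). The exhaustive Hamming-weight guess ordering, the abandonment threshold, and the dense parity-check test are illustrative simplifications; the fabricated circuits described above use far more refined orderings and architectures.

```python
import numpy as np
from itertools import combinations

def grand_hard(y, H, max_weight=3):
    """Hard-detection GRAND over GF(2).

    y: received hard-decision bits (length n); H: parity-check matrix (n-k, n).
    Noise patterns are queried in order of increasing Hamming weight, the
    maximum-likelihood order for a binary symmetric channel with p < 0.5.
    Returns the first codeword found, or None if none is found up to the
    abandonment weight `max_weight`.
    """
    y = np.asarray(y, dtype=int)
    H = np.asarray(H, dtype=int)
    n = y.size
    for w in range(max_weight + 1):                  # w = 0 guesses "no noise"
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            c = y ^ e                                # remove the guessed noise
            if not np.any((H @ c) % 2):              # zero syndrome -> codeword
                return c
    return None
```

Because decoding reduces to a syndrome check against whatever parity-check matrix is supplied, the same loop serves any linear code, which is the sense in which GRAND-based hardware is code-agnostic.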

280 | Wideband Digital Filter-and-Sum Beamforming with Simultaneous Correction of Dispersive Cable and Antenna Effects. Liu, Qian. 30 May 2012.
Optimum filter-and-sum beamforming is useful for array systems that suffer from spatially correlated noise and interference over a large bandwidth. The set of finite impulse response (FIR) filter coefficients used to implement the optimum filter-and-sum beamformer is selected to optimize signal-to-noise ratio (SNR) and reduce interference from certain directions. However, these array systems may also be vulnerable to dispersion caused by physical components such as antennas and cables, especially when the dispersion differs between sensors. The unequal responses can be equalized using FIR filters. Although optimum-SNR beamforming, interference mitigation, and per-sensor dispersion have previously been investigated individually, their combined effects, and strategies for mitigating them, do not seem to have been considered.
In this dissertation, combination strategies for optimum filter-and-sum beamforming and sensor dispersion correction are investigated. Our objective is to implement optimum filter-and-sum beamforming and per-sensor dispersion correction simultaneously using a single FIR filter per sensor. A key contribution is a reduction in overall filter length, which may also yield a significant reduction in implementation complexity, power consumption, and cost.
Expressions for the optimum filter-and-sum beamforming weights and the per-sensor dedispersion filter coefficients are derived. One solution is found via minimax optimization. To assess feasibility, the cost is analyzed in terms of filter length. These designs are considered in the context of LWA1, the first "station" of the Long Wavelength Array (LWA) radio telescope, consisting of 512 bowtie-type antennas and operating at frequencies between 10 MHz and 88 MHz. However, this work is applicable to a variety of systems that suffer from non-white spatial noise and directional interference and are vulnerable to sensor dispersion, e.g., sonar arrays, HF/VHF-band riometers, radar arrays, and other radio telescopes. / Ph. D.
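To make the "single FIR filter per sensor" idea concrete, the sketch below designs per-sensor taps by combining MVDR-style optimum weights with the known dispersive sensor responses in the frequency domain and inverse-transforming once. The MVDR weighting and the single inverse-DFT design are stand-ins for the minimax-optimized filters of the dissertation, and all array shapes and variable names are assumptions.

```python
import numpy as np

def filter_and_sum_firs(steering, sensor_resp, R, n_taps):
    """Design one FIR filter per sensor that combines MVDR-style optimum
    beamforming with equalization of known dispersive antenna/cable responses.

    All frequency-domain inputs are sampled on the one-sided rfft grid of an
    n_taps-point DFT (F = n_taps // 2 + 1 bins):
      steering:    ideal array response toward the look direction, shape (F, N)
      sensor_resp: per-sensor antenna + cable transfer functions,  shape (F, N)
      R:           noise-plus-interference spatial covariance,     shape (F, N, N)
    """
    F, N = steering.shape
    W = np.zeros((F, N), dtype=complex)
    for i in range(F):
        a = steering[i] * sensor_resp[i]          # response including dispersion
        Rinv_a = np.linalg.solve(R[i], a)
        W[i] = Rinv_a / (a.conj() @ Rinv_a)       # unit gain toward the signal
    # Apply the conjugate weights as each sensor's frequency response; no
    # windowing or delay centering is applied in this simplified design.
    return np.fft.irfft(W.conj(), n=n_taps, axis=0)   # shape (n_taps, N)

def filter_and_sum(x, taps):
    """Beamform: filter each sensor's time series with its FIR, then sum.
    x: samples, shape (n_samples, N)."""
    return sum(np.convolve(x[:, j], taps[:, j], mode='same')
               for j in range(taps.shape[1]))
```

The point of such a combined design is that equalization of the antenna and cable responses shares the same taps as the spatial filtering, so the total filter length is set once rather than being the sum of a separate dedispersion filter and beamforming filter per sensor.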