1 |
Software Defined Radio for Maritime Collision Avoidance Applications. Humphris, Les (January 2015)
The design and development of a software defined radio (SDR) receiver prototype has been completed. The goal is to replace the existing automatic identification system (AIS) manufactured by Vesper Marine with a software-driven system that reduces costs and provides a high degree of reconfigurability. A key concept of the SDR is direct digitization of the radio frequency (RF) signal using subsampling. This idea arises from the ambition to implement the analog-to-digital converter (ADC) as close to the antenna interface as practically possible, so that the majority of the RF processing is encapsulated within the digital domain. Evaluation of a frequency planning strategy that utilizes a combination of subsampling and oversampling illustrates how the maritime bandwidth is aliased to a lower frequency. An analog front-end (AFE) board was constructed to implement the frequency planning strategy so that the digitized bandwidth can be streamed into a field programmable gate array (FPGA) for real-time processing. Research is presented on digital front-end (DFE) techniques that condition the digitized maritime signal for baseband processing. Digital down conversion (DDC) is performed on the FPGA, which acquires the in-phase and quadrature signals. Demodulation of an AIS test signal is then evaluated using a digital signal processor (DSP) for baseband processing. The SDR prototype achieved a receiver sensitivity of -113 dBm, outperforming the required sensitivity of -107 dBm specified in the International Electrotechnical Commission (IEC) 62287-1 standard for AIS applications [1].
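As a minimal sketch of the subsampling idea described above, the snippet below computes where a carrier folds to after bandpass sampling. The 26 MS/s ADC rate is an assumed value for illustration, not the rate chosen in the thesis; the 161.975/162.025 MHz inputs are the two standard AIS channels.

```python
# Sketch: where a bandpass (sub)sampled carrier aliases to.
# The 26 MS/s rate is an illustrative assumption, not the thesis's rate.

def aliased_frequency(f_carrier_hz, fs_hz):
    """Return the apparent frequency of f_carrier after sampling at fs."""
    f = f_carrier_hz % fs_hz           # fold into [0, fs)
    return min(f, fs_hz - f)           # fold into [0, fs/2]

fs = 26e6                              # assumed ADC sampling rate
for f in (161.975e6, 162.025e6):       # the two AIS channels
    print(f"{f/1e6:.3f} MHz aliases to {aliased_frequency(f, fs)/1e6:.3f} MHz")
```

With this assumed rate both AIS channels land near 6 MHz, i.e. well inside the first Nyquist zone, which is the effect the frequency planning strategy exploits.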
|
2 |
Undersampling to accelerate time-resolved MRI velocity measurement of carotid blood flow. Tao, Yuehui (January 2009)
Time-resolved velocity information of carotid blood flow can be used to estimate haemodynamic conditions associated with carotid artery disease leading to stroke. MRI provides high-resolution measurement of such information, but long scan time limits its clinical application in this area. In order to reduce scan time the MRI signal is often undersampled by skipping part of the signal during data acquisition. The aim of this work is to implement and evaluate different undersampling techniques for carotid velocity measurement on a 1.5 T clinical scanner. Most recent undersampling techniques assume spatial and temporal redundancy in the true time-resolved MRI signal. These techniques propose different undersampling strategies and use prior information, or different assumptions about the nature of the true signal, in signal reconstruction. A brief review of these techniques and details of a representative technique, known as k-t BLAST, are presented. Another undersampling scheme, termed ktVD, is proposed to use predesigned undersampling patterns with variable sampling densities in both temporal and spatial dimensions. It aims to collect enough signal content at the acquisition stage and simplify signal reconstruction. Fidelity of the results from undersampled data is affected by many factors, such as signal dynamic content, degree of signal redundancy, noise level, degree of undersampling, undersampling patterns, and parameters of post-processing algorithms. Simulations and in vivo scans were conducted to investigate the effects of these factors in time-resolved 2D and 3D scans. The results suggested that velocity measurements became less reliable when they were obtained from less than 25% of the full signal. In time-resolved 3D scans the signal can be undersampled in either one or two spatial dimensions in addition to the temporal dimension, which allows more options in the design of undersampling patterns; these were tested in vivo. To test undersampling in three dimensions in high-resolution 3D scans with velocity measured in three dimensions, a flow phantom was also scanned at high degrees of undersampling.
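A minimal sketch of a variable-density k-t undersampling pattern in the spirit of ktVD follows. The quadratic density law, the 128-line grid, and the reduction factor of 4 are illustrative assumptions, not the design evaluated in the thesis.

```python
# Sketch: a variable-density k-t undersampling mask -- denser sampling
# near the k-space centre, with a different random pattern per time
# frame for temporal incoherence. Density law and budget are assumed.
import numpy as np

def kt_mask(n_pe=128, n_frames=20, reduction=4, power=2.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n_pe) - n_pe // 2) / (n_pe // 2)   # 0 at centre
    density = (1.0 - k) ** power                            # peaks at centre
    density *= (n_pe / reduction) / density.sum()           # budget: n_pe/R lines
    density = np.clip(density, 0.0, 1.0)
    # independent Bernoulli draw per frame varies the pattern over time
    return rng.random((n_frames, n_pe)) < density

mask = kt_mask()
print(f"sampled fraction: {mask.mean():.2f}")               # ~1/reduction
```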
|
3 |
Algorithms and methodology for incoherent undersampling based acquisition of high speed signal waveforms using low cost test instrumentation. Bhatta, Debesh (7 January 2016)
The objective of this research is to develop and demonstrate low-complexity, robust, frequency-scalable, wide-band waveform acquisition techniques for testing high speed communication systems.
High resolution waveform capture is a versatile testing tool that enables flexible test strategies. However, waveform capture at high data rates requires costly hardware because the increased bandwidth of the signal waveform leads to an increase in the sampling rate requirement, cost of front-end components, and sensitivity to phase errors in traditional (source) synchronous Nyquist-rate tester architectures. The hardware cost and complexity of wide-band waveform acquisition systems can, however, be significantly reduced by using (trigger-free) incoherent undersampling to achieve reduced sampling rates and robustness to phase errors in signal paths. Reducing the hardware cost of such a system using incoherent undersampling requires increased signal processing at the back end.
This research proposes computationally-efficient, time-domain waveform reconstruction algorithms to improve both the performance and scope of existing incoherent undersampling-based test instrumentation. Supporting hardware architectures are developed to extend the application of incoherent undersampling-based waveform acquisition techniques to linearity testing of high-speed radio-frequency components without any synchronization between the signals involved, and to the acquisition of wide-band signals beyond the track-and-hold bandwidth barrier of traditional incoherent undersampling architectures, using multi-channel bandwidth interleaving. The bandwidth is extended in a source-incoherent framework by using mixers to down-convert high-frequency signal components to baseband, followed by digitization using undersampling and back-end signal processing to reconstruct the original wide-band signal from multiple band-pass components.
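As a toy illustration of the incoherent undersampling principle, assuming the signal is repetitive with a known (or separately estimated) period: samples taken far below the Nyquist rate can be reordered onto a single period by folding their time stamps modulo that period. The 1 GHz tone and 33.3 MS/s rate are illustrative values; the reconstruction algorithms in the thesis are considerably more involved.

```python
# Sketch: trigger-free incoherent undersampling of a repetitive signal.
# Samples taken at a rate incommensurate with the signal period are
# reordered onto one period by folding their time stamps. A real system
# must first estimate the period from the captured data.
import numpy as np

f_sig = 1.0e9                     # 1 GHz repetitive test signal (assumed)
fs = 33.3e6                       # incoherent sub-Nyquist sample rate (assumed)
n = 2000
t = np.arange(n) / fs             # actual sample instants
x = np.sin(2 * np.pi * f_sig * t)

phase = (t * f_sig) % 1.0         # position of each sample within one period
order = np.argsort(phase)         # reorder -> densely sampled single period
single_period = x[order]          # ~2000 points across one 1 ns period
print(phase[order][:5])           # monotonically increasing fold positions
```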
|
4 |
TOF-PET Imaging within the Framework of Sparse Reconstruction. Lao, Dapeng (May 2012)
Recently, the limited-angle TOF-PET system has become an active topic, mainly due to the considerable reduction in hardware cost and its potential applicability for performing needle biopsy on patients while in the scanner. However, this kind of measurement configuration often suffers from deteriorated reconstructed images because insufficient data are observed. The established theory of compressed sensing (CS) provides a potential framework for attacking this problem. CS claims that the imaged object can be faithfully recovered from highly underdetermined observations, provided that it is sparse in some transform domain.
Here, a first attempt was made at applying the CS framework to TOF-PET imaging for two undersampling configurations. First, to deal with undersampled TOF-PET imaging, an efficient sparsity-promoted algorithm was developed for combined regularizations of p-TV and the l1-norm, where it was found that (a) it is capable of providing better reconstruction than the traditional EM algorithm, and (b) the 0.5-TV regularization was significantly superior to the 0-TV and 1-TV regularizations that are widely investigated in the open literature. Second, a general framework was proposed for sparsity-promoted ART, in which multi-step and ordered-subset acceleration techniques were used simultaneously. From the results, it was observed that the accelerated sparsity-promoted ART method was capable of providing better reconstruction than traditional ART. Finally, a relationship was established between the number of detectors (or the range of angles) and TOF time resolution, which provides empirical guidance for designing novel low-cost TOF-PET systems while ensuring good reconstruction quality.
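The sketch below shows the generic sparsity-promoted ART pattern referred to above: a Kaczmarz sweep over the system rows followed by an l1 soft-threshold. It is a simplified stand-in, assuming an identity sparsifying transform and a toy random system; the thesis's actual algorithm adds p-TV regularization and multi-step/ordered-subset acceleration.

```python
# Sketch: generic sparsity-promoted ART -- one Kaczmarz sweep over all
# rows of the system, then an l1 soft-threshold to promote sparsity.
# The toy system and lambda are illustrative assumptions.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_art(A, y, n_iter=50, relax=1.0, lam=0.01):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):                      # one ART sweep
            ai = A[i]
            x += relax * (y[i] - ai @ x) / (ai @ ai) * ai
        x = soft_threshold(x, lam)                       # sparsity step
    return x

rng = np.random.default_rng(1)
x_true = np.zeros(100); x_true[[7, 42, 80]] = [1.0, -0.5, 2.0]
A = rng.standard_normal((40, 100))                       # underdetermined
x_hat = sparse_art(A, A @ x_true)
print(np.round(x_hat[[7, 42, 80]], 2))                   # ~ planted values
```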
|
5 |
The Application of FROID in MR Image Reconstruction. Vu, Linda (January 2010)
In magnetic resonance imaging (MRI), sampling methods that lead to incomplete data coverage of k-space are used to accelerate imaging and reduce overall scan time. Non-Cartesian sampling trajectories such as radial, spiral, and random trajectories are employed to facilitate advanced imaging techniques, such as compressed sensing, or to provide more efficient coverage of k-space for a shorter scan period. When k-space is undersampled or unevenly sampled, traditional methods of transforming Fourier data to obtain the desired image, such as the FFT, may no longer be applicable.
The Fourier reconstruction of optical interferometer data (FROID) algorithm is a novel reconstruction method developed by A. R. Hajian that has been successful in the field of optical interferometry in reconstructing images from sparsely and unevenly sampled data. It is applicable to cases where the collected data are a Fourier representation of the desired image or spectrum. The framework allows a priori information, such as the positions of the sampled points, to be incorporated into the reconstruction of images. Initially, FROID assumes a guess of the real-valued spectrum or image in the form of an interpolated function and calculates the corresponding integral Fourier transform. Amplitudes are then sampled in the Fourier space at locations corresponding to the acquired measurements to form a model dataset. The guess spectrum or image is then adjusted such that the model dataset in the Fourier space is least-squares fitted to the measured values.
In this thesis, FROID has been adapted and implemented for use in MRI, where k-space is the Fourier transform of the desired image. By forming a continuous mapping of the image and modelling the data in Fourier space, a comparison and optimization with respect to data acquired in k-space that is either undersampled or irregularly sampled can be performed as long as the sampling positions are known. To apply FROID to the reconstruction of magnetic resonance images, an appropriate objective function that expresses the desired least-squares fit criterion was defined, and the model for interpolating Fourier data was extended to include complex values of an image. When an image with two Gaussian functions was tested, FROID was able to reconstruct images from data randomly sampled in k-space and was not restricted to data sampled evenly on a Cartesian grid. An MR image of a bone with complex values was also reconstructed using FROID and the magnitude image was compared to that reconstructed by the FFT. It was found that FROID outperformed the FFT in certain cases even when data were rectilinearly sampled.
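As a rough illustration of the least-squares criterion described above, the following sketch models a 1-D image on a grid, evaluates its Fourier transform at arbitrary known k-space locations with an explicit DFT, and measures the squared misfit. The pixel-basis model and the two-Gaussian toy image are simplifying assumptions; FROID itself fits a continuous interpolated function.

```python
# Sketch: a FROID-style least-squares objective -- model the image,
# evaluate its Fourier transform at arbitrary known k locations via an
# explicit DFT, and compare to the measured samples. Pixel basis and
# toy data are assumptions; FROID uses a continuous interpolation model.
import numpy as np

def model_kspace(image_1d, x_grid, k_samples):
    """Explicit DFT of a gridded 1-D image at arbitrary k locations."""
    E = np.exp(-2j * np.pi * np.outer(k_samples, x_grid))
    return E @ image_1d

def froid_objective(image_1d, x_grid, k_samples, measured):
    resid = model_kspace(image_1d, x_grid, k_samples) - measured
    return np.sum(np.abs(resid) ** 2)        # least-squares criterion

# toy usage: random (non-Cartesian) k samples of a two-Gaussian image
x = np.linspace(-0.5, 0.5, 64)
img = np.exp(-((x - 0.1) / 0.05) ** 2) + np.exp(-((x + 0.2) / 0.08) ** 2)
k = np.random.default_rng(0).uniform(-32, 32, 48)        # undersampled
data = model_kspace(img, x, k)
print(froid_objective(img, x, k, data))                  # ~0 at the truth
```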
|
6 |
Use of machine learning in bankruptcy prediction with highly imbalanced datasets: The impact of sampling methods. Mahembe, Wonder (January 2024)
Since Altman's 1968 discriminant analysis model for corporate bankruptcy prediction, there have been numerous studies applying statistical and machine learning (ML) models to predicting bankruptcy in various contexts. ML models have proven to be highly accurate in bankruptcy prediction up to three years before the event, more so than statistical models. A major limitation of ML models is their inability to handle highly imbalanced datasets, which has resulted in the development of a plethora of oversampling and undersampling methods for addressing class imbalances. However, current research on the impact of different sampling methods on the predictive performance of ML models is fragmented, inconsistent, and limited. This thesis investigated whether the choice of sampling method led to significant differences in the performance of five predictive algorithms: logistic regression, multiple discriminant analysis (MDA), random forests, Extreme Gradient Boosting (XGBoost), and support vector machines (SVM). Four oversampling methods (random oversampling (ROWR), the synthetic minority oversampling technique (SMOTE), oversampling based on propensity scores (OBPS), and oversampling based on weighted nearest neighbour (WNN)) and three undersampling methods (random undersampling (RU), undersampling based on clustering from nearest neighbour (CFNN), and undersampling based on clustering from Gaussian mixture methods (GMM)) were tested. The dataset was made up of non-listed Swedish restaurant businesses (1998–2021) obtained from the business registry of Sweden, comprising 10,696 companies with 335 bankrupt instances. Results, assessed through 10-fold cross-validated AUC scores, reveal that oversampling methods generally outperformed undersampling methods. SMOTE performed highest with four of the five algorithms, while WNN performed highest with the random forest model. Results of Wilcoxon's signed-rank test showed that some differences between oversampling and undersampling were statistically significant, but differences within each group were not. Further, results showed that while XGBoost had the highest AUC score of all predictive algorithms, it was also the most sensitive to different sampling methods, while MDA was the least sensitive. Overall, it was concluded that the choice of sampling method can significantly impact the performance of different algorithms, and thus users should consider both the algorithm's sensitivity and the comparative performance of the sampling methods. The thesis's results challenge some prior findings and suggest avenues for further exploration, highlighting the importance of selecting appropriate sampling methods when working with highly imbalanced datasets.
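A minimal sketch of the evaluation pattern described above, assuming scikit-learn and imbalanced-learn: each sampler is wrapped in a pipeline so resampling happens only on the training folds, and 10-fold cross-validated AUC is compared. The synthetic dataset and the two samplers shown stand in for the thesis's seven methods and its Swedish restaurant data.

```python
# Sketch: compare sampling methods by 10-fold cross-validated AUC.
# Resampling sits inside the pipeline so test folds stay untouched.
# Synthetic data stands in for the thesis's bankruptcy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.97], random_state=0)  # ~3% positives

samplers = {"SMOTE": SMOTE(random_state=0),
            "random undersampling": RandomUnderSampler(random_state=0)}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, sampler in samplers.items():
    pipe = Pipeline([("sample", sampler),
                     ("clf", RandomForestClassifier(random_state=0))])
    auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```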
|
7 |
Implementation and Evaluation of a RF Receiver Architecture Using an Undersampling Track-and-Hold Circuit. Dahlbäck, Magnus (January 2003)
Today's radio frequency receivers for digital wireless communication are getting more and more complex. A single receiver unit should support multiple bands, have a wide bandwidth, be flexible, and show good performance. To fulfil these requirements, new receiver architectures have to be developed and used. One possible alternative is the RF undersampling architecture. This thesis evaluates the RF undersampling architecture, which makes use of an undersampling track-and-hold circuit with very wide bandwidth to perform direct sampling of the RF carrier before the analogue-to-digital converter. The architecture's main advantages and drawbacks are identified and analyzed. In addition, techniques and improvements to solve or reduce the main problems of the RF undersampling receiver are proposed.
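The central constraint on such a receiver is the bandpass sampling theorem: the sample rate must place the RF band entirely within one Nyquist zone. A small sketch follows; the 2.11-2.17 GHz band is an illustrative example, not necessarily the band studied in the thesis.

```python
# Sketch: valid sample-rate ranges from the bandpass sampling theorem,
# i.e. rates that alias the band [f_lo, f_hi] into one Nyquist zone
# without self-overlap: 2*f_hi/n <= fs <= 2*f_lo/(n-1).
# The 2.11-2.17 GHz band is an illustrative assumption.

def valid_undersampling_rates(f_lo, f_hi):
    """Yield (zone n, fs_min, fs_max) for every usable Nyquist zone."""
    n_max = int(f_hi // (f_hi - f_lo))       # highest usable zone
    for n in range(2, n_max + 1):            # n = 1 is ordinary Nyquist sampling
        fs_min, fs_max = 2 * f_hi / n, 2 * f_lo / (n - 1)
        if fs_min <= fs_max:
            yield n, fs_min, fs_max

ranges = list(valid_undersampling_rates(2.11e9, 2.17e9))
n, lo, hi = ranges[-1]                       # highest zone gives the lowest rates
print(f"lowest valid range: zone {n}: {lo/1e6:.2f}-{hi/1e6:.2f} MS/s")
```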
|
8 |
Predicting SNI Codes from Company Descriptions: A Machine Learning Solution. Lindholm, Erik; Nilsson, Jonas (January 2023)
This study aims to develop an automated solution for assigning industry classification codes to businesses based on the contents of their business descriptions. The Swedish standard industrial classification (SNI) is a system used by Statistics Sweden (SCB) to categorize businesses for its statistics reports. Assignment of SNI codes has so far been done manually by the person registering a new company, but this is a far from optimal solution. Some of the 88 main-group areas of industry are hard to tell apart from one another, which often leads to incorrect assignments. Our approach to this problem was to train machine learning models using the Naive Bayes and SVM classifier algorithms and conduct an experiment. In 2019, Dahlqvist and Strandlund attempted this and reached an accuracy score of 52 percent using a gradient boosting classifier, but this was considered too low for real-world implementation. Our main goal was to achieve a higher accuracy than that of Dahlqvist and Strandlund, which we eventually did: our best-performing SVM model reached a score of 60.11 percent. Like Dahlqvist and Strandlund, we concluded that the low quality of the dataset was the main obstacle to achieving higher scores. The dataset we used was severely imbalanced, and much time was spent investigating and applying oversampling and undersampling as strategies for mitigating this problem. However, we found during the testing phase that none of these strategies had any positive effect on the accuracy scores.
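A minimal sketch of the kind of description-to-SNI classifier described above, assuming scikit-learn: TF-IDF features feeding a linear SVM. The example descriptions are in English, and the two labels (56 for food service, 62 for computer programming) cover only a fraction of the 88 SNI main groups; both are illustrative placeholders.

```python
# Sketch: a TF-IDF + linear SVM text classifier for SNI main groups.
# The training strings and two-class label set are toy placeholders;
# the real task uses Swedish descriptions and 88 main groups.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

descriptions = ["restaurant serving lunch and dinner",
                "software consulting and IT services",
                "cafe with bakery and catering",
                "development of mobile applications"]
sni_codes = ["56", "62", "56", "62"]          # illustrative SNI labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(descriptions, sni_codes)
print(model.predict(["bakery serving lunch and dinner"]))   # -> ['56']
```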
|