31.
Jämförelse av bluetooth codecs med fokus på batteriladdning, CPU användning och räckvidd / Comparison of bluetooth codecs with focus on battery drainage, CPU usage and range
Larsson, Daniel; Ly Khuu, Kevin (January 2022)
With the constant advances in technology, people are using more wireless products, such as earphones and speakers, many of which use Bluetooth. With the current pace of advances in Bluetooth technology, consumers and manufacturers have a hard time keeping up, so knowledge is lacking on factors such as battery drainage, CPU usage, and range. This study investigates the effect different codecs have on these factors by comparing the two most commonly used codecs, SBC and AAC. Using a codec that drains less battery while still providing good enough audio quality can have a positive impact on society and the environment: needing less electricity lessens overall energy consumption and directly lowers energy production. Our results indicate that there is a significant difference in CPU usage but not in battery drainage or range.

32.
Terahertz-Band Ultra-Massive MIMO Data Detection and Decoding
Jemaa, Hakim (04 April 2022)
As the quest for higher data rates continues, future generations of wireless communications are expected to conquer even higher frequency bands, particularly terahertz (THz) frequencies. Even though the vast bandwidths at the THz band promise terabit-per-second (Tbps) data rates, current baseband technologies do not support such high rates. In particular, the complexities of Tbps channel-code decoding and ultra-massive multiple-input multiple-output data detection are prohibitive.
This work addresses the problem of efficient data detection and channel-code decoding under THz-band channel conditions and Tbps baseband processing limitations. We propose ultra-massive multiple-input multiple-output THz channel models, then investigate the corresponding performance of several candidate data detection and coding schemes. We further investigate the complexity of different detectors and decoders, motivating parallelizability at both levels. Finally, we recommend which detector is best combined with which channel-code decoder under specific THz channel characteristics.

33.
FPGA Realization of Low Register Systolic Multipliers over GF(2^m)
Shao, Qiliang (January 2016)
No description available.

34.
Player Onboarding in a Low-Complexity Game Favouring Implicit Instructions: A Case Study of the Game The Social Grip
Hatzl, Anna; Hedberg, Ottilia; Keramidas, Ilias; Mardunovich, Daniel; Jankovic, Bozidar (January 2024)
This paper sought to understand how onboarding should be designed for a low-complexity game that favours implicit instructions. Low-complexity games are defined as having a small number of mechanics and predictable gameplay. This may make explicit instructions less suitable for onboarding players in such games, as players may have a more enjoyable experience learning the game with greater agency. There is currently a lack of studies on onboarding in low-complexity games, and this paper contributes findings that may prove relevant for game designers seeking viable onboarding methods for their low-complexity games. For this study, we conducted research through design, iterating our own low-complexity game, The Social Grip, over three playtests. The changes were motivated by the results of each playtest and framed within the cognitive load and feedback systems framework. We found that in low-complexity games it is important to keep the environment low in complexity as well, to ensure players notice intentionally designed landmarks or breadcrumbs. Finally, we concluded that explicit instructions may be useful in areas that implicit instructions cannot cover, such as teaching players keyboard shortcuts.

35.
Reconnaissance de langages en temps réel par des automates cellulaires avec contraintes (Real-time language recognition by cellular automata with constraints)
Borello, Alex (12 December 2011)
This thesis considers cellular automata as a model of computation used to recognise languages. In this domain it is always difficult to establish negative results, that is, typically, to prove that a given language is not recognised within some time bound by some class of automata. The focus is on low-complexity classes such as real time, about which many questions have remained open for decades. In a first part, several ways of weakening these language classes further still are investigated, thereby yielding examples of negative results. A second part compares cellular automata with another model of language recognition, multi-head finite automata. This leads to a speed-up theorem for cellular automata simulating oblivious multi-head finite automata, a model that is a priori weaker than general multi-head finite automata but retains nontrivial power.
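To fix the model: a one-dimensional cellular automaton recognizes a word by writing it on the initial configuration, updating all cells synchronously from their immediate neighbourhood, and letting a designated cell decide after n steps (real time). The toy rule in the sketch below, which recognizes {a^n : n >= 1}, illustrates the model only and is not a construction from the thesis.

```python
# A minimal sketch of a 1-D cellular automaton as a real-time language
# recognizer. The input word is the initial configuration; all cells
# update synchronously from (left, self, right); the leftmost cell must
# decide after n steps. The rule is a toy example, not from the thesis.

def step(config, rule, boundary="#"):
    """One synchronous update: each cell sees (left, self, right)."""
    padded = [boundary] + list(config) + [boundary]
    return [rule(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule(left, self_, right):
    # A cell becomes 'x' (reject) if it holds a 'b' or sees a reject mark
    # to its right; reject marks thus travel leftward one cell per step.
    if self_ in ("b", "x") or right in ("b", "x"):
        return "x"
    return self_

def recognizes(word):
    config = list(word)
    for _ in range(len(word)):        # exactly n steps: real time
        config = step(config, rule)
    return config[0] != "x"           # the leftmost cell decides

print(recognizes("aaaa"))  # True: the word is in a*
print(recognizes("aaba"))  # False: a 'b' reaches the leftmost cell
```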

36.
Low-complexity block dividing coding method for image compression using wavelets: a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand
Zhu, Jihai (January 2007)
Image coding plays a key role in multimedia signal processing and communications. JPEG2000 is the latest image coding standard; it uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance but high complexity. The need to reduce this complexity while maintaining performance similar to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves performance similar to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and implementation issues for wavelet transforms are discussed. The four main coding methods mentioned above are studied in detail and, more importantly, the factors that affect coding efficiency are identified. The main contribution of this research is a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm uses block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though a narrow gap remains in lossy coding situations.
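To make the subband structure these coders operate on concrete, here is a sketch of one level of a 2-D wavelet decomposition. It uses the Haar filters purely for brevity; this is an assumption for illustration, not the filter bank of the thesis or of JPEG2000, which uses the 5/3 and 9/7 wavelets.

```python
import numpy as np

# One level of a 2-D Haar wavelet decomposition: the LL/HL/LH/HH subband
# split that EZW/SPIHT/EBCOT-style coders quantize and entropy-code.
# Subband naming below follows one common convention.

def haar_1d(x):
    x = x.reshape(-1, 2)
    avg = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)   # lowpass (approximation)
    dif = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)   # highpass (detail)
    return np.concatenate([avg, dif])

def haar_2d_one_level(img):
    rows = np.apply_along_axis(haar_1d, 1, img)   # transform each row
    out = np.apply_along_axis(haar_1d, 0, rows)   # then each column
    h, w = img.shape
    return {"LL": out[:h // 2, :w // 2], "HL": out[:h // 2, w // 2:],
            "LH": out[h // 2:, :w // 2], "HH": out[h // 2:, w // 2:]}

img = np.arange(64, dtype=float).reshape(8, 8)
bands = haar_2d_one_level(img)
print({k: v.shape for k, v in bands.items()})  # four 4x4 subbands
```

Repeating the split on the LL band yields the multi-level pyramid on which zerotree and block-based coders operate.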

37.
Studies on Design and Implementation of Low-Complexity Digital Filters
Ohlsson, Henrik (January 2005)
In this thesis we discuss the design and implementation of low-complexity digital filters. Digital filters are key components in most digital signal processing (DSP) systems and are, for example, used for interpolation and decimation. A typical application for the filters considered in this work is mobile communication systems, where high throughput and low power consumption are required.

In the first part of the thesis we discuss the implementation of high-throughput lattice wave digital filters (LWDFs). Arithmetic transformations of first- and second-order Richards' allpass sections are proposed. The transformations reduce the iteration period bound of the filter realization, which can be used to increase the throughput or to reduce the power consumption through power supply voltage scaling. Implementation of LWDFs using redundant carry-save arithmetic is considered, and the proposed arithmetic transformations are evaluated with respect to throughput and area requirements.

In the second part of the thesis we discuss three case studies of digital filter implementations for typical applications with requirements on high throughput and low power consumption. The first involves the design and implementation of a digital down converter (DDC) for a multiple-antenna-element radar receiver. The DDC converts a real IF input signal into a complex baseband signal composed of an in-phase and a quadrature component. It includes bandpass sampling, digital I/Q demodulation, decimation, and filtering, and three different DDC realizations are proposed and evaluated. The second case study is a combined interpolator and decimator filter for use in an OFDM system. The analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) work at a sample rate twice the Nyquist rate, so interpolation and decimation by a factor of two are required. Some channel shaping is also performed, which complicates the filter design as well as the implementation. Frequency masking techniques and novel filter structures were used for the implementation. The combined interpolator and decimator was successfully implemented as an LWDF in a 0.35 µm CMOS process using carry-save arithmetic. The third case study is the implementation of a high-speed decimation filter for a ΣΔ ADC. The decimator has an input data rate of 16 Gsample/s and a decimation factor of 128. The decimation is performed using two cascaded digital filters: a comb filter followed by a linear-phase FIR filter. A novel hardware structure for digital filters with single-bit input is proposed; it was found to be competitive and was used for the implementation. The decimator filter was successfully implemented in a 0.18 µm CMOS process using standard cells.

In the third part of the thesis we discuss efficient realization of sums of products and multiple-constant multiplications, which are used in, for example, FIR filters. We propose several new difference methods that result in realizations with a low number of adders. The proposed design methods have low complexity, i.e., they can be included in the search for quantized filter coefficients.
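As an illustration of the comb-plus-FIR decimation chain mentioned in the third case study, below is a minimal software sketch of a cascaded integrator-comb (CIC) decimator followed by a small FIR stage. The stage counts, rates, and FIR taps are placeholder values, not the parameters of the thesis's 16 Gsample/s design.

```python
import numpy as np

# A CIC decimator: N cascaded integrators at the input rate, a
# downsampler by R, and N cascaded combs (differentiators) at the
# output rate, followed by a compensating linear-phase FIR.

def cic_decimate(x, R=8, N=3, M=1):
    y = x.astype(np.int64)            # CIC arithmetic is exact in integers
    for _ in range(N):                # integrator section (high rate)
        y = np.cumsum(y)
    y = y[::R]                        # downsample by R
    for _ in range(N):                # comb section (low rate)
        y = y - np.concatenate(([0] * M, y[:-M]))
    return y / float((R * M) ** N)    # normalize the (R*M)^N DC gain

# Toy single-bit input, as from a sigma-delta modulator front end.
bits = np.random.randint(0, 2, 1024) * 2 - 1
stage1 = cic_decimate(bits, R=8, N=3)
fir = np.ones(16) / 16.0              # placeholder linear-phase lowpass
stage2 = np.convolve(stage1, fir)[::2]  # final FIR plus decimate-by-2
print(len(bits), "->", len(stage2))
```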

38.
Signal Detection Strategies and Algorithms for Multiple-Input Multiple-Output Channels
Waters, Deric Wayne (16 November 2005)
In today's society, a growing number of users demand more sophisticated services from wireless communication devices. To meet these rising demands, it has been proposed to increase the capacity of the wireless channel by using more than one antenna at the transmitter and receiver, thereby creating multiple-input multiple-output (MIMO) channels. MIMO communication techniques are a promising way to improve wireless communication technology because, in a rich-scattering environment, the capacity increases linearly with the number of antennas. However, increasing the number of transmit antennas also increases the complexity of detection at an exponential rate. So while MIMO channels have the potential to greatly increase the capacity of wireless communication systems, they also place a greater computational burden on the receiver.
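For reference, the linear capacity growth invoked here is the classical textbook result (not a result of the thesis): with N_t transmit antennas, N_r receive antennas, channel matrix H, and signal-to-noise ratio rho, the channel capacity is

```latex
% Textbook MIMO capacity under an i.i.d. input covariance assumption:
C \;=\; \log_2 \det\!\left( \mathbf{I}_{N_r} \;+\; \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{\mathsf{H}} \right) \ \text{bits/s/Hz}
```

and for i.i.d. Rayleigh H its expected value grows linearly in min(N_t, N_r), which is the scaling referred to above.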
Even suboptimal MIMO detectors with relatively low complexity have been shown to achieve unprecedentedly high spectral efficiency. However, their performance is far inferior to that of the optimal MIMO detector, meaning they require more transmit power. The fact that the optimal MIMO detector is impractical due to its prohibitive complexity leaves a performance gap between detectors of reasonable complexity and the optimal detector. The objective of this research is to bridge this gap and provide new solutions for managing the inherent performance-complexity trade-off in MIMO detection.
The BLAST-ordered decision-feedback (BODF) detector is a standard low-complexity detector. The contributions of this thesis can be regarded as ways to either improve its performance or reduce its complexity, or both.
We propose a novel algorithm to implement the BODF detector based on noise prediction. This algorithm is more computationally efficient than previously reported implementations of the BODF detector. Another benefit is that it can be used to easily upgrade an existing linear detector into a BODF detector.
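For readers unfamiliar with decision-feedback detection, the sketch below shows the generic QR-based zero-forcing DFE structure that the BODF detector builds on. The optimal detection ordering and the noise-prediction formulation proposed in the thesis are deliberately not reproduced; this is a textbook baseline only, and the names and test values are illustrative.

```python
import numpy as np

# Generic ZF decision-feedback MIMO detection via QR decomposition:
# detect symbols from last to first, cancelling already-decided symbols.

def qr_dfe_detect(H, y, constellation):
    Q, R = np.linalg.qr(H)            # H = QR, R upper triangular
    z = Q.conj().T @ y
    n = H.shape[1]
    s_hat = np.zeros(n, dtype=complex)
    for k in range(n - 1, -1, -1):    # back-substitution with slicing
        residual = z[k] - R[k, k + 1:] @ s_hat[k + 1:]
        est = residual / R[k, k]
        # slice to the nearest constellation point (the "decision")
        s_hat[k] = constellation[np.argmin(np.abs(constellation - est))]
    return s_hat

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
s = qpsk[np.random.randint(0, 4, 4)]
y = H @ s + 0.01 * (np.random.randn(4) + 1j * np.random.randn(4))
print(np.allclose(qr_dfe_detect(H, y, qpsk), s))  # typically True here
```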
We propose the partial decision-feedback detector as a strategy to achieve nearly the same performance as the BODF detector, while requiring nearly the same complexity as the linear detector.
We propose the family of Chase detectors, which allows the receiver to trade performance for reduced complexity. By adapting a few simple parameters, a Chase detector may achieve near-ML performance or near-minimal complexity. We also propose two new detection strategies belonging to the Chase family, the B-Chase and S-Chase detectors. Both can achieve near-optimal performance with less complexity than existing detectors.
Finally, we propose the double-sorted lattice-reduction algorithm that achieves near-optimal performance with near-BODF complexity when combined with the decision-feedback detector.

39.
A Bidirectional LMS Algorithm for Estimation of Fast Time-Varying Channels
Yapici, Yavuz (01 May 2011)
The effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels.

In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases, and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity increases with the bidirectional employment of the LMS algorithm but is nevertheless significantly lower than that of the optimal Wiener filter.

The tracking behavior of the bidirectional LMS algorithm is also analyzed, and a steady-state, step-size-dependent mean square error (MSE) expression is derived for single-antenna flat-fading channels with various correlation properties. This analysis is then generalized to single-antenna frequency-selective channels, where the so-called independence assumption no longer applies due to the channel memory, and to multi-antenna flat-fading channels. The optimal selection of step-size values is also presented using the results of the MSE analysis. Numerical evaluations show a very good match between theoretical and experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that, although there are several works in the literature on bidirectional estimation, none of them provides a theoretical analysis of the underlying estimators.

An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and channel models under consideration. The bidirectional LMS algorithm is observed to be very successful in this real-life application, with its increased but still practical complexity, near-optimal tracking performance, and robustness to imperfect initialization.
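As background, the first-order LMS update mentioned above is simple enough to state in a few lines. The sketch below runs the textbook LMS forward and backward over a block of pilots and averages the two weight trajectories; the plain averaging is an assumed combining rule for illustration only and is not claimed to be the thesis's exact bidirectional scheme.

```python
import numpy as np

def lms(x, d, mu, order):
    """Textbook LMS: x[n] is the regressor, d[n] the desired sample."""
    w = np.zeros(order, dtype=complex)
    w_track = np.zeros((len(d), order), dtype=complex)
    for n in range(len(d)):
        e = d[n] - w @ x[n]               # a priori estimation error
        w = w + mu * e * np.conj(x[n])    # first-order weight update
        w_track[n] = w
    return w_track

def bidirectional_lms(x, d, mu, order):
    fwd = lms(x, d, mu, order)
    bwd = lms(x[::-1], d[::-1], mu, order)[::-1]
    return 0.5 * (fwd + bwd)              # assumed combining: average

# Toy flat-fading usage: estimate a single tap h from known pilots.
N, h = 200, 0.8 + 0.3j
pilots = np.sign(np.random.randn(N)) + 0j
rx = h * pilots + 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))
w = bidirectional_lms(pilots.reshape(N, 1), rx, mu=0.1, order=1)
print(np.round(w[N // 2], 2))  # approximately [0.8+0.3j] mid-block
```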

40.
Wireless receiver designs: from information theory to VLSI implementation
Zhang, Wei (06 October 2009)
Receiver design, and equalizer design in particular, is a major concern in both academia and industry. It is a problem with both theoretical challenges and severe implementation hurdles. While much research has focused on reducing the complexity of optimal or near-optimal schemes, it is still common practice in industry to use simple techniques (such as linear equalization) that are generally significantly inferior. Although digital signal processing (DSP) technologies have been applied to wireless communications to enhance throughput, users' demands for more data at higher rates have revealed new challenges. For example, to collect diversity and combat fading channels, transmitter designs that enable the diversity must be matched by receivers that are able to collect it.
Most wireless transmissions can be modeled as linear block transmission systems. Under this model, maximum-likelihood equalizers (MLEs) or near-ML decoders are adopted at the receiver to collect diversity, an important performance metric, but these decoders exhibit high complexity. To reduce the decoding complexity, low-complexity equalizers such as linear equalizers (LEs) and decision-feedback equalizers (DFEs) are often adopted. These methods, however, may not utilize the diversity enabled by the transmitter and as a result perform worse than MLEs.
In this dissertation, we present efficient receiver designs that achieve low bit-error rate (BER), high mutual information, and low decoding complexity. Our approach is to first investigate the error performance and mutual information of existing low-complexity equalizers to reveal the fundamental condition under which LEs achieve full diversity. We show that the fundamental condition for LEs to collect the same (outage) diversity as the MLE is that the channels be constrained within a certain distance from orthogonality. The orthogonality deficiency (od) is adopted to quantify the distance of channels from orthogonality, and other existing metrics are introduced and compared. To meet the fundamental condition and achieve full diversity, a hybrid equalizer framework is proposed. The performance-complexity trade-off of hybrid equalizers is quantified by deriving the distribution of od.
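For concreteness, the orthogonality deficiency metric can be computed as in the sketch below. The definition used here, od(H) = 1 - det(H^H H) / prod_k ||h_k||^2, is the form common in the lattice-reduction-aided detection literature; it is assumed, not quoted, from the dissertation.

```python
import numpy as np

# Orthogonality deficiency: 0 for a matrix with orthogonal columns,
# approaching 1 as the columns become linearly dependent.

def orthogonality_deficiency(H):
    gram = H.conj().T @ H
    col_energy = np.prod(np.sum(np.abs(H) ** 2, axis=0))
    return 1.0 - np.real(np.linalg.det(gram)) / col_energy

print(orthogonality_deficiency(np.eye(3)))               # 0.0
H_bad = np.array([[1.0, 0.999], [0.0, 0.01]])            # near-parallel
print(orthogonality_deficiency(H_bad))                   # close to 1
```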
Another approach is to apply lattice reduction (LR) techniques to improve the "quality" of channel matrices. We present two LR methods widely adopted in wireless communications, the Lenstra-Lenstra-Lovasz (LLL) algorithm [51] and Seysen's algorithm (SA), providing detailed descriptions and pseudocode. The properties of the output matrices of the LLL algorithm and SA are also quantified. Other LR algorithms are briefly introduced as well.
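Since the dissertation presents the LLL algorithm with pseudocode, a compact real-valued textbook version is sketched below; MIMO receivers usually run a complex variant on the channel matrix, and delta = 0.75 is the customary parameter. The repeated Gram-Schmidt recomputation keeps the sketch short at the cost of efficiency.

```python
import numpy as np

# Textbook LLL lattice reduction on the columns of B: size-reduce each
# basis vector against its predecessors and swap whenever the Lovasz
# condition fails.

def gram_schmidt(B):
    n = B.shape[1]
    Bs, mu = np.zeros_like(B), np.zeros((n, n))
    for i in range(n):
        Bs[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = (B[:, i] @ Bs[:, j]) / (Bs[:, j] @ Bs[:, j])
            Bs[:, i] -= mu[i, j] * Bs[:, j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    B = B.astype(float).copy()
    n, k = B.shape[1], 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            _, mu = gram_schmidt(B)
            B[:, k] -= np.rint(mu[k, j]) * B[:, j]
        Bs, mu = gram_schmidt(B)
        # Lovasz condition on the Gram-Schmidt norms
        if Bs[:, k] @ Bs[:, k] >= (delta - mu[k, k - 1] ** 2) * (Bs[:, k - 1] @ Bs[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]  # swap and step back
            k = max(k - 1, 1)
    return B

B = np.array([[1.0, -1.0, 3.0],
              [1.0,  0.0, 5.0],
              [1.0,  2.0, 6.0]])
print(lll_reduce(B))   # a basis of short, nearly orthogonal columns
```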
After introducing LR algorithms, we show how to adopt them in the wireless communication decoding process by presenting LR-aided hard-output detectors and LR-aided soft-output detectors for coded systems. We also analyze the performance of the proposed efficient receivers from the perspectives of diversity, mutual information, and complexity. We prove that LR techniques help restore the diversity of low-complexity equalizers without increasing the complexity significantly.
When it comes to practical systems and simulation tools such as MATLAB, only finitely many bits are used to represent numbers. We therefore revisit the diversity analysis for systems with finite-bit representation. We illustrate that the diversity of the MLE in such systems is determined by the number of non-vanishing eigenvalues. It is also shown that although LR-aided detectors theoretically collect the same diversity as the MLE over the real/complex field, they may show different diversity orders under finite-bit representation. Finally, a VLSI implementation of the complex LLL algorithm is provided to verify the practicality of our proposed designs.