191

Blind adaptive array techniques for mobile satellite communications

Terry, John D. 05 1900 (has links)
No description available.
192

Numerical transformations for area, power, and testability optimization in the synthesis of digital signal processing ASICs

Nguyen, Huy Tam 05 1900 (has links)
No description available.
193

Exploration of alternatives to general-purpose computers in neural simulation

Graas, Estelle Laure 08 1900 (has links)
No description available.
194

Application of time frequency representations to characterize ultrasonic signals

Niethammer, Marc 08 1900 (has links)
No description available.
195

Reducing measurement uncertainty in a DSP-based mixed-signal test environment

Taillefer, Chris January 2003 (has links)
FFT-based tests (e.g. gain, distortion, SNR) on a device-under-test (DUT) exhibit normal distributions when the measurement is repeated many times; hence, a statistical approach is traditionally applied to evaluate the accuracy of these measurements. The noise in a DSP-based mixed-signal test system severely limits its measurement accuracy. Moreover, in high-speed sampled-channel applications the jitter-induced noise from the DUT and test equipment can severely impede accurate measurements.

A new digitizer architecture and post-processing methodology is proposed to increase the measurement accuracy of the DUT and the test equipment. An optimal digitizer design is presented which removes any measurement bias due to noise and greatly improves measurement repeatability. Most importantly, the presented system improves accuracy in the same test time as any conventional test.

An integrated mixed-signal test core was implemented in TSMC's 0.18 µm mixed-signal process. Experimental results obtained from the mixed-signal integrated test core validate the proposed digitizer architecture and post-processing technique. Bias errors were successfully removed and measurement variance was improved by a factor of 5.
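As a hedged illustration of the statistical behavior this abstract describes (not the thesis's digitizer): a minimal sketch showing that repeated FFT-based SNR measurements of a noisy tone scatter roughly normally, and that averaging K repeats shrinks the spread by about sqrt(K). All parameters (sample count, tone frequency, noise level) are assumed for the example.

```python
# Sketch: repeated FFT-based SNR measurements and the effect of averaging.
import numpy as np

rng = np.random.default_rng(0)
N, fs, f0 = 1024, 1.0e6, 50.0e3   # samples, sample rate, tone frequency (assumptions)
n = np.arange(N)

def measure_snr_db(noise_std=0.05):
    """One FFT-based SNR measurement of a unit-amplitude tone in white noise."""
    x = np.sin(2 * np.pi * f0 / fs * n) + rng.normal(0.0, noise_std, N)
    X = np.abs(np.fft.rfft(x * np.hanning(N))) ** 2
    k0 = int(round(f0 / fs * N))           # FFT bin nearest the tone
    signal = X[k0 - 2 : k0 + 3].sum()      # tone power (a few bins for window leakage)
    noise = X.sum() - signal
    return 10 * np.log10(signal / noise)

single = np.array([measure_snr_db() for _ in range(200)])
avg16 = np.array([np.mean([measure_snr_db() for _ in range(16)]) for _ in range(200)])
print(f"std of single measurement:  {single.std():.3f} dB")
print(f"std of 16-measurement mean: {avg16.std():.3f} dB")  # roughly 4x smaller
```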
196

A micro data flow (MDF) : a data flow approach to self-timed VLSI system design for DSP

Merani, Lalit T. 24 August 1993 (has links)
Synchronization is one of the important issues in digital system design. While other approaches have been intriguing, a globally clocked timing discipline has so far been the dominant design philosophy. However, with advances in technology, we have reached the point where other options should be given serious consideration. VLSI promises great processing power at low cost. This increase in computational power has been obtained by scaling the digital IC process. But as this scaling continues, it is doubtful that the advantages of faster devices can be fully exploited, because clock periods are becoming much smaller in relation to interconnect propagation delays, even within a single chip and certainly at the board and backplane level.

In this thesis, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital computational system design and digital communication system design; the latter field is relevant because large propagation delays have always been a dominant consideration in its design methods. Asynchronous design gives better performance than comparable synchronous design in situations where global synchronization with a high-speed clock becomes a constraint on system throughput. Asynchronous circuits with unbounded gate delays, or self-timed digital circuits, can be designed by employing either of two request-acknowledge protocols: 4-cycle and 2-cycle.

We also present an alternative approach to the problem of mapping computation algorithms directly into asynchronous circuits. A data flow graph or language is used to describe the computation algorithms. The data flow primitives have been designed using both the 2-cycle and 4-cycle signaling schemes, which are compared in terms of performance and transistor count; the 2-cycle implementations prove to be better than their 4-cycle counterparts. A promising application of self-timed design is in high-performance DSP systems: since there is no global constraint of clock distribution, localized forward-only connections allow computation to be extended and sped up using pipelining. A decimation filter was designed and simulated to check the system-level performance of the two protocols. Simulations were carried out using VHDL for high-level definition of the design. The simulation results demonstrate not only the efficacy of the synthesis procedure but also the improved efficiency of the 2-cycle scheme over the 4-cycle scheme.

Graduation date: 1994
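A toy model of the two handshake disciplines the abstract compares (my own abstraction, not the thesis's VHDL design): 4-cycle (return-to-zero) signaling spends four wire transitions per transferred data item while 2-cycle (transition) signaling spends two, so with an assumed fixed delay charged per transition the 2-cycle link has roughly half the handshake overhead.

```python
# Sketch: handshake overhead of 4-cycle vs 2-cycle request-acknowledge signaling.
T_TRANSITION_NS = 0.5      # assumed delay per req/ack wire transition
ITEMS = 1_000              # data items pushed through the self-timed link

def handshake_time_ns(transitions_per_item: int, items: int) -> float:
    """Total handshake time, ignoring the computation delay of the stages."""
    return transitions_per_item * items * T_TRANSITION_NS

# 4-cycle: req rises, ack rises, req falls, ack falls  -> 4 transitions per item
# 2-cycle: req toggles once, ack toggles once          -> 2 transitions per item
print("4-cycle handshake time:", handshake_time_ns(4, ITEMS), "ns")
print("2-cycle handshake time:", handshake_time_ns(2, ITEMS), "ns")
```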
197

Precoder design and adaptive modulation for MIMO broadcast channels

Huang, Kuan Lun, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
Multiple-input multiple-output (MIMO) technology, which originated in the 1990s, is an emerging and fast-growing area of communication research due to its ability to provide diversity as well as transmission degrees of freedom. The focus of recent MIMO research has shifted from the point-to-point link to one-to-many multiuser links, driven by the ever-increasing demand for multimedia-intensive services. The downlink of a multiuser transmission is called the broadcast channel (BC), and the reverse many-to-one uplink is termed the multiple access channel (MAC). Early studies of the MIMO BC and the MIMO MAC were mostly information-theoretic in nature; in particular, the characterization of the capacity regions of the two systems was of primary concern. The information-theoretic results suggest that the optimal uplink detection scheme involves successive interference cancellation, while successive application of dirty paper coding at the transmitter is optimal in the downlink.

Over the past few years, following the full characterization of the capacity regions, several practical precoders have been suggested to realize the benefits of MIMO multiuser transmission. However, linear precoders such as the zero-forcing (ZF) and MMSE precoders fall short of the achievable capacity despite their simple structure. Nonlinear precoders such as the ZF dirty paper (ZF-DP) and the MMSE generalized decision feedback equalizer-type (MMSE-GDFE) precoders demonstrate promising performance but suffer either from restrictions on the number of antennas at the users (ZF-DP) or from a high computational load for the transmit filter (MMSE-GDFE). A novel MMSE feedback precoder (MMSE-FBP) with low computational requirements was proposed, and its performance was shown to come very close to the bound suggested by information theory. In this thesis, we investigate the causes of this capacity inferiority and conclude that power control is necessary in a multiuser environment. New schemes that address the power control issue are proposed, and their performance is evaluated and compared.

Adaptive modulation is an effective and powerful technique that can remarkably increase spectral efficiency in a fading environment. It works by observing the channel variations and adapting the transmission power and/or rate to counteract the instabilities of the channel. This thesis extends the pioneering study of adaptive modulation on the single-input single-output (SISO) Gaussian channel to the MIMO BC. We explore various combinations of power and rate adaptation and observe their impact on system performance. In particular, we present analytical and simulation results on the success of adaptive modulation in maximizing multiuser spectral efficiency. Furthermore, empirical research is conducted to validate its effectiveness in optimizing overall system reliability.
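A hedged sketch of one of the linear schemes named in the abstract: a zero-forcing (ZF) precoder for a MIMO broadcast channel, built from the channel pseudo-inverse so that each single-antenna user sees no inter-user interference. The channel dimensions, fading model, and power normalization are assumptions for the example, not the thesis's setup.

```python
# Sketch: zero-forcing precoding for a MIMO broadcast channel.
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 4, 4                      # transmit antennas, single-antenna users
H = (rng.normal(size=(n_users, n_tx)) +   # i.i.d. Rayleigh-fading channel rows
     1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

W = np.linalg.pinv(H)                     # ZF precoder: H @ W = I (full row rank)
W /= np.linalg.norm(W, "fro")             # total transmit-power normalization

s = (rng.choice([-1, 1], n_users) +       # one QPSK symbol per user
     1j * rng.choice([-1, 1], n_users)) / np.sqrt(2)
x = W @ s                                 # precoded transmit vector
y = H @ x                                 # noiseless received sample at each user

# Each user receives only a scaled copy of its own symbol, with no interference:
print(np.round(y / s, 3))                 # the same real scaling for every user
```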
198

Digital filters and cascade control compensators

Bolton, Alan Graham January 1990 (has links)
Bibliography: leaves 176-188
xvii, 188 leaves : ill. ; 30 cm.
Title page, contents and abstract only. The complete thesis in print form is available from the University Library.
Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1992?
199

Digital compensation techniques for in-phase quadrature (IQ) modulator

Lim, Anthony Galvin K. C. January 2004 (has links)
In an In-phase/Quadrature (IQ) modulator generating Continuous-Phase Frequency-Shift-Keying (CPFSK) signals, shortcomings in the implementation of the analogue reconstruction filters result in the loss of the constant-envelope property of the output signal. Ripples in the envelope function cause undesirable spreading of the transmitted signal spectrum into adjacent channels when the signal passes through non-linear elements in the transmission path, causing the transmitted signal to fail transmission standards requirements. Digital techniques that compensate for these shortcomings therefore play an important role in enhancing the performance of the IQ modulator.

In this thesis, several techniques to compensate for the irregularities in the I and Q channels are presented. The main emphasis is on preserving constant-magnitude and linear-phase characteristics in the pass-band of the analogue filters, as well as compensating for imbalances between the I and Q channels. A generic digital pre-compensation model is used, and based on this model, the digital compensation schemes are formulated using control and signal processing techniques. Four digital compensation techniques are proposed and analysed. The first method is based on H2 norm minimization, while the second solves for the pre-compensation filters by posing the problem as one of H∞ optimisation. The third method stems from the well-known principle of Wiener filtering. The digital compensation filters found using these three methods are computed off-line. We then design adaptive compensation filters that run on-line and use the "live" modulator input data to make the necessary measurements and compensations. These adaptive filters are computed using the well-known Least-Mean-Square (LMS) algorithm. The advantage of this approach is that the modulator does not need to be taken off-line to calculate the pre-compensation filters, so its normal operation is not disrupted.

The compensation performance of all methods is studied analytically as well as through computer simulations and practical experiments. The results indicate that the proposed methods are effective and provide substantial compensation for the shortcomings of the analogue reconstruction filters in the I and Q channels. In addition, the adaptive compensation scheme, implemented on a DSP platform, shows a significant reduction in side-lobe levels for the compensated signal spectrum.
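A minimal sketch of the LMS algorithm the abstract's on-line scheme builds on (a generic LMS equalizer under assumed parameters, not the thesis's IQ compensation filters): an adaptive FIR filter learns to undo distortion modeled here as a short unknown FIR channel. The channel taps, filter length, and step size are all assumptions.

```python
# Sketch: LMS adaptive filter converging to the inverse of an assumed channel.
import numpy as np

rng = np.random.default_rng(2)
channel = np.array([1.0, 0.25, -0.1])     # assumed distortion to be compensated
L, mu, n_samples = 8, 0.02, 5000          # taps, LMS step size, training length

d = rng.normal(size=n_samples)            # desired (ideal) signal
x = np.convolve(d, channel)[:n_samples]   # observed, distorted signal

w = np.zeros(L)                           # adaptive compensation filter taps
for k in range(L, n_samples):
    xk = x[k - L + 1 : k + 1][::-1]       # most recent L samples, newest first
    e = d[k] - w @ xk                     # error vs the ideal sample
    w += mu * e * xk                      # LMS update: w <- w + mu * e * x

tail = range(n_samples - 200, n_samples)  # squared error after convergence
print("final mean squared error:",
      np.mean([(d[k] - w @ x[k - L + 1 : k + 1][::-1]) ** 2 for k in tail]))
```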
200

Memory Study and Dataflow Representations for Rapid Prototyping of Signal Processing Applications on MPSoCs

Desnos, Karol 26 September 2014 (has links)
The development of embedded Digital Signal Processing (DSP) applications for Multiprocessor Systems-on-Chips (MPSoCs) is a complex task requiring the consideration of many constraints, including real-time requirements, power consumption restrictions, and limited hardware resources. To satisfy these constraints, it is critical to understand the general characteristics of a given application: its behavior and its requirements in terms of MPSoC resources. In particular, the memory requirements of an application strongly impact the quality and performance of an embedded system, as the silicon area occupied by the memory can be as large as 80% of a chip and may be responsible for a major part of its power consumption. Despite this large overhead, limited memory resources remain an important constraint that considerably increases the development time of embedded systems.

Dataflow Models of Computation (MoCs) are widely used for the specification, analysis, and optimization of DSP applications. The popularity of dataflow MoCs is due to their great analyzability and the natural way they express the parallelism of a DSP application. The abstraction of time in dataflow MoCs is particularly suitable for exploiting the parallelism offered by heterogeneous MPSoCs. In this thesis, we propose a complete method to study the memory characteristics of a DSP application modeled with a dataflow graph. The proposed method spans from the theoretical, architecture-independent memory characterization to the quasi-optimal static memory allocation of an application on a real shared-memory MPSoC. The method, implemented as part of a rapid prototyping framework, is extensively tested on a set of state-of-the-art applications from the computer-vision, telecommunication, and multimedia domains. Then, because the dataflow MoC used in our method cannot model applications with a dynamic behavior, we introduce a new dataflow meta-model to address the important challenge of managing dynamics in DSP-oriented representations. The new reconfigurable and composable dataflow meta-model strengthens the predictability, conciseness, and readability of application descriptions.
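A tiny illustration of the kind of dataflow memory analysis the abstract refers to (a generic synchronous dataflow (SDF) example, not the thesis's tool): for an edge where actor A produces p tokens per firing and actor B consumes c, the repetition vector balances the edge and bounds the buffer memory it needs. The rates below are assumptions.

```python
# Sketch: repetition vector and buffer bounds for one SDF edge A -> B.
from math import gcd

p, c = 3, 2                               # assumed production/consumption rates
g = gcd(p, c)
qA, qB = c // g, p // g                   # balance equation: qA * p == qB * c
print(f"repetitions per graph iteration: A x {qA}, B x {qB}")
print(f"tokens exchanged per iteration: {qA * p}")

# Worst case if all firings of A complete before B starts consuming:
print(f"upper bound on A->B buffer: {qA * p} tokens")
# An interleaved schedule can run the edge in p + c - gcd(p, c) tokens,
# a classical minimal single-edge bound for SDF graphs:
print(f"minimal single-edge bound: {p + c - g} tokens")
```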
