191

Exploration of alternatives to general-purpose computers in neural simulation

Graas, Estelle Laure 08 1900 (has links)
No description available.
192

Application of time frequency representations to characterize ultrasonic signals

Niethammer, Marc 08 1900 (has links)
No description available.
193

Reducing measurement uncertainty in a DSP-based mixed-signal test environment

Taillefer, Chris January 2003 (has links)
FFT-based tests (e.g. gain, distortion, SNR) of a device-under-test (DUT) exhibit normal distributions when the measurement is repeated many times; hence, a statistical approach is traditionally applied to evaluate the accuracy of these measurements. The noise in a DSP-based mixed-signal test system severely limits its measurement accuracy. Moreover, in high-speed sampled-channel applications, jitter-induced noise from the DUT and the test equipment can severely impede accurate measurements.

A new digitizer architecture and post-processing methodology are proposed to increase the measurement accuracy for both the DUT and the test equipment. An optimal digitizer design is presented which removes any measurement bias due to noise and greatly improves measurement repeatability. Most importantly, the presented system improves accuracy in the same test time as a conventional test.

An integrated mixed-signal test core was implemented in TSMC's 0.18 µm mixed-signal process. Experimental results obtained from this core validate the proposed digitizer architecture and post-processing technique: bias errors were successfully removed, and measurement variance was improved by a factor of 5.
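The statistical effect this abstract relies on — that averaging repeated noisy measurements shrinks their spread by roughly the square root of the number of repeats — can be sketched in a few lines. This is a toy Python illustration with made-up gain and noise figures, not the thesis's digitizer:

```python
import numpy as np

rng = np.random.default_rng(0)

true_gain_db = 6.0   # hypothetical DUT gain (dB)
noise_std = 0.5      # assumed per-measurement noise (dB)

def measure(n):
    """Return the mean of n repeated noisy gain measurements."""
    return np.mean(true_gain_db + noise_std * rng.normal(size=n))

# Repeat the whole experiment many times to estimate the spread of the result.
single = np.std([measure(1) for _ in range(2000)])
avg25 = np.std([measure(25) for _ in range(2000)])

print(f"std of a single measurement: {single:.3f} dB")
print(f"std of a 25-sample average:  {avg25:.3f} dB")   # roughly 5x smaller
```

Averaging 25 repeats cuts the spread by about sqrt(25) = 5, which is the same order of repeatability gain the abstract reports for its digitizer.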
194

A micro data flow (MDF) : a data flow approach to self-timed VLSI system design for DSP

Merani, Lalit T. 24 August 1993 (has links)
Synchronization is one of the most important issues in digital system design. While other approaches have been intriguing, a globally clocked timing discipline has until now been the dominant design philosophy. However, with advances in technology, we have reached the point where other options deserve serious consideration. VLSI promises great processing power at low cost, an increase in computational power obtained by scaling the digital IC process. But as this scaling continues, it is doubtful that the advantages of faster devices can be fully exploited, because clock periods are becoming much smaller in relation to interconnect propagation delays, even within a single chip and certainly at the board and backplane level. In this thesis, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital computational system design and digital communication system design; the latter field is relevant because large propagation delays have always been a dominant consideration in its design methods. Asynchronous design outperforms comparable synchronous design in situations where global synchronization with a high-speed clock becomes the constraint limiting system throughput. Asynchronous circuits with unbounded gate delays, or self-timed digital circuits, can be designed using either of two request-acknowledge protocols: 4-cycle or 2-cycle. We also present an alternative approach to the problem of mapping computational algorithms directly into asynchronous circuits. A data flow graph or language is used to describe the computational algorithms. The data flow primitives have been designed using both the 2-cycle and 4-cycle signaling schemes, which are compared in terms of performance and transistor count; the 2-cycle implementations prove better than their 4-cycle counterparts.

A promising application of self-timed design is in high-performance DSP systems. Since there is no global clock-distribution constraint, localized forward-only connections allow computation to be extended and sped up using pipelining. A decimation filter was designed and simulated to check the system-level performance of the two protocols. Simulations were carried out using VHDL for the high-level definition of the design. The simulation results demonstrate not only the efficacy of our synthesis procedure but also the improved efficiency of the 2-cycle scheme over the 4-cycle scheme. / Graduation date: 1994
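The two request-acknowledge protocols compared above differ in the number of wire transitions spent per data transfer, which is one reason the 2-cycle scheme comes out ahead. A minimal Python sketch of that accounting (an illustration of the generic protocols, not the thesis's VHDL primitives):

```python
def four_phase(n_transfers):
    """4-cycle (return-to-zero) handshake: req up, ack up, req down,
    ack down — four transitions per transfer."""
    transitions = 0
    for _ in range(n_transfers):
        transitions += 4   # two edges on req, two on ack
    return transitions

def two_phase(n_transfers):
    """2-cycle (transition-signalling) handshake: every edge of req and
    ack carries meaning — two transitions per transfer."""
    transitions = 0
    req = ack = 0
    for _ in range(n_transfers):
        req ^= 1; transitions += 1   # requester toggles req
        ack ^= 1; transitions += 1   # responder toggles ack
    return transitions

print(four_phase(100), two_phase(100))   # 400 vs 200
```

Halving the transition count per transfer is what gives 2-cycle signalling its edge in switching activity and cycle time, at the cost of more complex transition-sensitive logic.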
195

Precoder design and adaptive modulation for MIMO broadcast channels

Huang, Kuan Lun, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2007 (has links)
Multiple-input multiple-output (MIMO) technology, which originated in the 1990s, is an emerging and fast-growing area of communication research due to its ability to provide diversity as well as transmission degrees of freedom. Recent research on MIMO systems has shifted focus from point-to-point links to one-to-many multiuser links, driven by the ever-increasing demand for multimedia-intensive services. The downlink of a multiuser transmission is called the broadcast channel (BC), and the reverse many-to-one uplink is termed the multiple access channel (MAC). Early studies of the MIMO BC and the MIMO MAC were mostly information-theoretic in nature; in particular, characterizing the capacity regions of the two systems was the primary concern. The information-theoretic results suggest that the optimal uplink detection scheme involves successive interference cancellation, while successive application of dirty paper coding at the transmitter is optimal in the downlink. Over the past few years, after the full characterization of the capacity regions, several practical precoders have been suggested to realize the benefits of MIMO multiuser transmission. However, linear precoders such as the zero-forcing (ZF) and MMSE precoders fall short of the achievable capacity despite their simple structure. Nonlinear precoders such as the ZF dirty paper (ZF-DP) and the MMSE generalized decision feedback equalizer-type (MMSE-GDFE) precoders have demonstrated promising performance, but suffer either from restrictions on the number of antennas at the users (ZF-DP) or from a high computational load for the transmit filter (MMSE-GDFE). A novel MMSE feedback precoder (MMSE-FBP) with a low computational requirement was proposed, and its performance was shown to come very close to the bound suggested by information theory.

In this thesis, we investigate the causes of this capacity inferiority and conclude that power control is necessary in a multiuser environment. New schemes that address the power control issue are proposed, and their performance is evaluated and compared. Adaptive modulation is an effective and powerful technique that can remarkably increase spectral efficiency in a fading environment. It works by observing the channel variations and adapting the transmission power and/or rate to counteract the instabilities of the channel. This thesis extends the pioneering study of adaptive modulation on the single-input single-output (SISO) Gaussian channel to the MIMO BC. We explore various combinations of power and rate adaptation and observe their impact on system performance. In particular, we present analytical and simulation results on the success of adaptive modulation in maximizing multiuser spectral efficiency. Furthermore, empirical research is conducted to validate its effectiveness in optimizing overall system reliability.
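The core idea of rate-adaptive modulation — picking the largest constellation the instantaneous SNR supports at a target BER — can be sketched as follows. This uses the well-known approximation BER ≈ 0.2·exp(−1.5·SNR/(M−1)) for M-QAM; the constellation set and target BER are illustrative, not the thesis's scheme:

```python
import math

def max_qam_order(snr_linear, target_ber=1e-3):
    """Largest square M-QAM meeting the target BER, based on the
    approximation BER ~ 0.2*exp(-1.5*SNR/(M-1))."""
    # Invert the BER approximation for the continuous-rate bound on M.
    m_cont = 1 + 1.5 * snr_linear / math.log(0.2 / target_ber)
    # Restrict to practical square constellations.
    for m in (256, 64, 16, 4):
        if m <= m_cont:
            return m
    return 0   # channel too poor: suspend transmission

for snr_db in (5, 10, 15, 20, 25):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db} dB -> {max_qam_order(snr)}-QAM")
```

As the SNR improves, the scheme steps up through the constellation sizes, which is exactly the mechanism by which adaptive modulation converts good channel states into higher spectral efficiency.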
196

Digital filters and cascade control compensators / Alan Graham Bolton

Bolton, Alan Graham January 1990 (has links)
Bibliography: leaves 176-188 / xvii, 188 leaves : ill ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1992?
197

Digital compensation techniques for in-phase quadrature (IQ) modulator

Lim, Anthony Galvin K. C. January 2004 (has links)
In an In-phase/Quadrature (IQ) modulator generating Continuous-Phase Frequency-Shift-Keying (CPFSK) signals, shortcomings in the implementation of the analogue reconstruction filters result in the loss of the constant-envelope property of the output signal. Ripples in the envelope function cause undesirable spreading of the transmitted signal spectrum into adjacent channels when the signal passes through non-linear elements in the transmission path, causing the transmitted signal to fail transmission-standard requirements. Digital techniques compensating for these shortcomings therefore play an important role in enhancing the performance of the IQ modulator. In this thesis, several techniques to compensate for the irregularities in the I and Q channels are presented. The main emphasis is on preserving constant-magnitude and linear-phase characteristics in the pass-band of the analogue filters, as well as compensating for imbalances between the I and Q channels. A generic digital pre-compensation model is used, and based on this model, the digital compensation schemes are formulated using control and signal processing techniques. Four digital compensation techniques are proposed and analysed. The first method is based on H2-norm minimization, while the second solves for the pre-compensation filters by posing the problem as one of H∞ optimisation. The third method stems from the well-known principle of Wiener filtering. The digital compensation filters found by these methods are computed off-line. We then design adaptive compensation filters that run on-line and use the "live" modulator input data to make the necessary measurements and compensations. These adaptive filters are computed using the well-known Least-Mean-Square (LMS) algorithm.

The advantage of this approach is that the modulator does not need to be taken off-line to calculate the pre-compensation filters, so its normal operation is not disrupted. The compensation performance of all methods is studied analytically and via computer simulations and practical experiments. The results indicate that the proposed methods are effective and provide substantial compensation for the shortcomings of the analogue reconstruction filters in the I and Q channels. In addition, the adaptive compensation scheme, implemented on a DSP platform, shows a significant reduction in side-lobe levels for the compensated signal spectrum.
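The LMS update at the heart of such an adaptive compensation scheme is standard. Below is a self-contained Python sketch that identifies a made-up 3-tap response from input/output data — a stand-in for the reconstruction-filter mismatch, not the thesis's actual filters or DSP code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response to be identified (stands in for the analogue
# reconstruction-filter mismatch discussed in the abstract).
h_true = np.array([0.9, 0.4, -0.2])

n, taps, mu = 5000, 3, 0.01
x = rng.normal(size=n)            # "live" input data
d = np.convolve(x, h_true)[:n]    # observed (desired) output

w = np.zeros(taps)
for k in range(taps, n):
    u = x[k - taps + 1:k + 1][::-1]   # most recent samples first
    e = d[k] - w @ u                  # a-priori error
    w += mu * e * u                   # LMS weight update

print(np.round(w, 3))   # converges close to h_true
```

Because the update needs only the running input and error signal, the filter adapts while the system stays in service — the on-line property the abstract highlights.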
198

Memory Study and Dataflow Representations for Rapid Prototyping of Signal Processing Applications on MPSoCs

Desnos, Karol 26 September 2014 (has links)
The development of embedded Digital Signal Processing (DSP) applications for Multiprocessor Systems-on-Chips (MPSoCs) is a complex task requiring the consideration of many constraints, including real-time requirements, power consumption restrictions, and limited hardware resources. To satisfy these constraints, it is critical to understand the general characteristics of a given application: its behavior and its requirements in terms of MPSoC resources. In particular, the memory requirements of an application strongly impact the quality and performance of an embedded system, as the silicon area occupied by memory can be as large as 80% of a chip and may be responsible for a major part of its power consumption. Despite this large overhead, limited memory resources remain an important constraint that considerably increases the development time of embedded systems. Dataflow Models of Computation (MoCs) are widely used for the specification, analysis, and optimization of DSP applications. The popularity of dataflow MoCs is due to their great analyzability and their natural expression of the parallelism of a DSP application. The abstraction of time in dataflow MoCs is particularly suitable for exploiting the parallelism offered by heterogeneous MPSoCs.

In this thesis, we propose a complete method to study the memory characteristics of a DSP application modeled with a dataflow graph. The proposed method spans from theoretical, architecture-independent memory characterization to quasi-optimal static memory allocation of an application on a real shared-memory MPSoC. The method, implemented as part of a rapid prototyping framework, is extensively tested on a set of state-of-the-art applications from the computer-vision, telecommunication, and multimedia domains. Then, because the dataflow MoC used in our method is unable to model applications with dynamic behavior, we introduce a new dataflow meta-model to address the important challenge of managing dynamics in DSP-oriented representations. The new reconfigurable and composable dataflow meta-model strengthens the predictability, conciseness, and readability of application descriptions.
199

Demodulation of interferometric output signals of a high-voltage electro-optic sensor using a digital signal processor

Pereira, Fernando da Cruz [UNESP] 31 October 2013 (has links) (PDF)
The Optoelectronic Laboratory (OEL) research group at FEIS-UNESP has been working for many years in the field of optical interferometry. The general expression for the transmission (the ratio of phase shift to drive voltage) of an electro-optic amplitude modulator is identical to that of the photo-detected signal at the output of a two-beam interferometer. In 2012, a new interferometric optical-phase detection method was developed at the OEL, named the Sampled Piece-Wise Signal (SPWS) method. This method is immune to fading and can measure the quasi-static optical phase difference between the arms of the interferometer. It is only slightly affected by electronic noise, provides excellent resolution, has a wide dynamic range, and allows the characterization of non-linear devices. Furthermore, the SPWS method can measure the time delay between stimulus and response and can operate with a wide variety of non-sinusoidal periodic signals.

In this work the SPWS method is adapted for high-voltage measurement using an optical voltage sensor (OVT) based on the linear electro-optic effect in lithium niobate crystals. Unlike previous studies at the OEL, where the photo-detected signal was acquired by a digital oscilloscope and processed on a microcomputer, a digital signal processor (DSP) is now employed for both signal acquisition and processing. Measurements of high-voltage signal waveforms at 60 Hz with high harmonic content were performed using the floating-point eZdspF28335 board. OVT linearity curves (induced phase shift versus drive voltage) and frequency-response curves were thus obtained. The spectrum of the high-voltage signal was calculated, from which parameters such as THD (Total Harmonic Distortion) and IHD (Individual Harmonic Distortion) could be determined. Two different OVT configurations were tested...
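The THD figure mentioned above is a simple function of the FFT of the measured waveform. A Python sketch with a made-up 60 Hz signal (illustrative of the computation only, not the thesis's DSP implementation):

```python
import numpy as np

fs, f0, n = 7680, 60, 7680          # sample rate (Hz), fundamental (Hz), 1 s of samples
t = np.arange(n) / fs
# Hypothetical 60 Hz waveform with 5% third and 2% fifth harmonics.
v = (np.sin(2 * np.pi * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.02 * np.sin(2 * np.pi * 5 * f0 * t))

spec = np.abs(np.fft.rfft(v)) / (n / 2)   # single-sided amplitude spectrum
fund = spec[f0]                            # bin spacing is fs/n = 1 Hz here
harm = spec[[k * f0 for k in range(2, 11)]]
thd = np.sqrt(np.sum(harm ** 2)) / fund    # ratio of harmonic RMS to fundamental

print(f"THD = {100 * thd:.2f}%")           # sqrt(0.05^2 + 0.02^2) ~ 5.39%
```

Sampling an exact integer number of fundamental periods, as above, keeps each harmonic in a single FFT bin (no spectral leakage), so the individual harmonic distortions (IHD) can be read off `spec` directly.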
200

M-ary frequency mapping techniques for power-line communications

Lukusa, Tedy Mpoyi 27 May 2013 (has links)
M.Ing. (Electrical and Electronic Engineering) / Power line communications have been in use since the early 1900‟s. The early use of this technology was mostly found within utility companies where it was used for intra telephonic service over the electrical distribution network. This technology has evolved remarkably to include not only low voltage medium and high voltage electric network but it has also extended to home automation and network. Literature on power line communications has pointed out major hindrances such as cable characteristics, impedance variations and noise signals from various sources. Most importantly, noisy characteristics of power line channels make it difficult to transmit information data in an effective and reliable way. More often data transmitted through power line channels is corrupted by three main types of noise, the background noise, the impulse noise and the permanent frequency disturbances. Consequently, researchers have focused on the optimum use of power line channel through combining channel coding and modulation schemes. In this study, we have, through simulations and practical experimentations, investigated the performance of a new mapping technique called “frequency mapping” over power line channel. The study material began with reviews of channel coding, modulation and permutation codes schemes. Further we presented through computer simulation, the inherent benefit of using permutation codes obtained through construction technique. Secondly, we detailed the use of Hadamard transform to produce frequency sequences. In reality, sign changes, drawn from observing Hadamard matrix and Walsh functions, were conceptualised as frequencies from which frequency sequences were produced. This technique termed “frequency mapping” showed effectiveness against narrow band noise in simulation environment. 
The study closed with an experimental verification of this new technique through custom designed communication system on a real power line channel where we observed a net BER performance gain when frequency sequences are ordered through Hadamard transform.
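The construction described above rests on counting sign changes ("sequency") along the rows of a Hadamard matrix, each row giving a distinct frequency-like sequence for an M-ary symbol. A minimal Python sketch using the Sylvester construction (illustrative; the thesis's mapping details may differ):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n must be a
    power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
# Count sign changes per row: each of the 8 rows has a distinct
# sequency, so each M-ary symbol maps to its own "frequency" sequence.
sign_changes = [int(np.sum(H[i, :-1] != H[i, 1:])) for i in range(8)]
print(sign_changes)
```

Because every sequency value 0..7 occurs exactly once, reordering the rows by sign-change count yields the Walsh (sequency) ordering that the experiment uses to assign frequency sequences to symbols.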
