281

Efficient digital baseband predistortion for modern wireless handsets

Ba, Seydou Nourou 10 November 2009 (has links)
This dissertation studies the design of an efficient adaptive digital baseband predistorter for modern cellular handsets that combines low power consumption, low implementation complexity, and high performance. The proposed enhancements are optimized for hardware implementation. We first present a thorough study of the optimal spacing of linearly-interpolated lookup table predistorters supported by theoretical calculations and extensive simulations. A constant-SNR compander that increases the predistorter's supported input dynamic range is derived. A corresponding low-complexity approximation that lends itself to efficient hardware design is also implemented in VHDL and synthesized with the Synopsys Design Compiler. This dissertation also proposes an LMS-based predistorter adaptation that is optimized for hardware implementation and compares the effectiveness of the direct and indirect learning architectures. A novel predistorter design with quadrature imbalance correction capability is developed and a corresponding adaptation scheme is proposed. This robust predistorter configuration is designed by combining linearization and I/Q imbalance correction into a single function with the same computational complexity as the widespread complex-gain predistorter.
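As a rough illustration of the kind of lookup-table structure and LMS adaptation discussed above, the Python sketch below models a linearly interpolated complex-gain LUT predistorter with a simple direct-learning, nearest-entry LMS update; the uniform table spacing, step size, and memoryless PA model are assumptions made for the sketch and are not the optimized designs of the dissertation.

```python
import numpy as np

def pa_model(x):
    """Toy memoryless PA nonlinearity (assumed here only for illustration)."""
    return x * (1.0 - 0.15 * np.abs(x) ** 2)

class LutPredistorter:
    """Complex-gain LUT predistorter, linearly interpolated over |u|."""

    def __init__(self, n_entries=64, max_amp=1.0, mu=0.1):
        self.grid = np.linspace(0.0, max_amp, n_entries)  # uniform spacing (simplest case)
        self.gain = np.ones(n_entries, dtype=complex)     # complex gains, start at unity
        self.mu = mu

    def _interp_gain(self, r):
        # interpolate real and imaginary parts separately over the amplitude grid
        return np.interp(r, self.grid, self.gain.real) + 1j * np.interp(r, self.grid, self.gain.imag)

    def predistort(self, u):
        return u * self._interp_gain(np.abs(u))

    def lms_update(self, u, y, k=1.0):
        """Normalized LMS step on the LUT entry nearest to |u|.

        u: predistorter input sample, y: measured PA output; the update drives
        the predistorter-PA cascade toward the linear response y = k * u.
        """
        idx = np.argmin(np.abs(self.grid - np.abs(u)))
        e = k * u - y
        self.gain[idx] += self.mu * np.conj(u) * e / max(np.abs(u) ** 2, 1e-12)

# usage sketch: adapt on random complex baseband samples
pd = LutPredistorter()
rng = np.random.default_rng(0)
for _ in range(5000):
    u = 0.8 * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    y = pa_model(pd.predistort(u))
    pd.lms_update(u, y)
```

A companded, nonuniform spacing of the amplitude grid, of the kind studied in the dissertation, would slot in where `np.linspace` is used here.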
282

Wireless receiver designs: from information theory to VLSI implementation

Zhang, Wei 06 October 2009 (has links)
Receiver design, especially equalizer design, in communications is a major concern in both academia and industry. It is a problem with both theoretical challenges and severe implementation hurdles. While much research has focused on reducing the complexity of optimal or near-optimal schemes, it is still common practice in industry to use simple techniques (such as linear equalization) that are generally significantly inferior. Although digital signal processing (DSP) technologies have been applied to wireless communications to enhance throughput, users' demands for more data and higher rates have revealed new challenges. For example, to collect diversity and combat fading channels, in addition to transmitter designs that enable the diversity, we also require the receiver to be able to collect the prepared diversity. Most wireless transmissions can be modeled as a linear block transmission system. Under a linear block transmission model, maximum likelihood equalizers (MLEs) or near-ML decoders have been adopted at the receiver to collect diversity, which is an important performance metric, but these decoders exhibit high complexity. To reduce the decoding complexity, low-complexity equalizers, such as linear equalizers (LEs) and decision feedback equalizers (DFEs), are often adopted. These methods, however, may not utilize the diversity enabled by the transmitter and as a result have degraded performance compared to MLEs. In this dissertation, we present efficient receiver designs that achieve low bit-error rate (BER), high mutual information, and low decoding complexity. Our approach is to first investigate the error performance and mutual information of existing low-complexity equalizers to reveal the fundamental condition for achieving full diversity with LEs. We show that the fundamental condition for LEs to collect the same (outage) diversity as the MLE is that the channels be constrained within a certain distance from orthogonality. The orthogonality deficiency (od) is adopted to quantify the distance of channels from orthogonality, while other existing metrics are also introduced and compared. To meet the fundamental condition and achieve full diversity, a hybrid equalizer framework is proposed. The performance-complexity trade-off of hybrid equalizers is quantified by deriving the distribution of od. Another approach is to apply lattice reduction (LR) techniques to improve the "quality" of channel matrices. We present two LR methods widely adopted in wireless communications, the Lenstra-Lenstra-Lovász (LLL) algorithm [51] and Seysen's algorithm (SA), with detailed descriptions and pseudocode. The properties of the output matrices of the LLL algorithm and SA are also quantified, and other LR algorithms are briefly introduced. We then show how to incorporate LR into the wireless decoding process by presenting LR-aided hard-output detectors and, for coded systems, LR-aided soft-output detectors. We also analyze the performance of the proposed receivers from the perspectives of diversity, mutual information, and complexity, and prove that LR techniques help to restore the diversity of low-complexity equalizers without increasing the complexity significantly. In practical systems and simulation tools such as MATLAB, only a finite number of bits are used to represent numbers; we therefore revisit the diversity analysis for finite-bit-represented systems. We illustrate that the diversity of the MLE for systems with finite-bit representation is determined by the number of non-vanishing eigenvalues. It is also shown that although LR-aided detectors theoretically collect the same diversity as the MLE in the real/complex field, they may exhibit different diversity orders under finite-bit representation. Finally, a VLSI implementation of the complex LLL algorithm is provided to verify the practicality of our proposed designs.
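For intuition, the short Python sketch below computes one common form of the orthogonality deficiency of a channel matrix, od(H) = 1 - det(H^H H) / prod_i ||h_i||^2, which is 0 for orthogonal columns and approaches 1 as the columns become nearly dependent; the exact definition and normalization used in the dissertation may differ, so treat this as an assumed illustrative form.

```python
import numpy as np

def orthogonality_deficiency(H):
    """od(H) = 1 - det(H^H H) / prod_i ||h_i||^2.

    Equals 0 when the columns of H are orthogonal (Hadamard's inequality is
    tight) and approaches 1 as the columns become nearly linearly dependent.
    """
    G = H.conj().T @ H                              # Gram matrix of the columns
    col_norms_sq = np.sum(np.abs(H) ** 2, axis=0)   # squared column norms
    return 1.0 - np.real(np.linalg.det(G)) / np.prod(col_norms_sq)

# usage: a well-conditioned vs. a nearly singular 2x2 channel
H_good = np.array([[1.0, 0.1], [0.0, 1.0]])
H_bad = np.array([[1.0, 0.99], [1.0, 1.01]])
print(orthogonality_deficiency(H_good))  # close to 0
print(orthogonality_deficiency(H_bad))   # close to 1
```

Under a definition like this, the requirement that channels stay within a certain distance from orthogonality becomes a threshold test on od(H), and the effect of lattice reduction can be read off as a drop in od after the basis change.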
283

Current-Mode Techniques In The Synthesis And Applications Of Analog And Multi-Valued Logic In Mixed Signal Design

Bhat, Shankaranarayana M 11 1900 (has links)
The development of modern integration technologies is normally driven by the needs of digital CMOS circuit design. Rapid progress in silicon VLSI technologies has made it possible to implement multi-function, high-performance electronic circuits on a single die. Coupled with this, the need to interface digital blocks to the external world has resulted in the integration of analog blocks such as A/D and D/A converters, filters, and oscillators with digital logic on the same die. Thus, mixed-signal system-on-chip (SOC) solutions are becoming common practice in present-day integrated circuit (IC) technologies. In the digital domain, aggressive technology scaling redefines, in many ways, the role of interconnects vis-à-vis the logic in determining overall performance. Apart from signal integrity, power dissipation, and reliability issues, delays over long interconnects that far exceed the logic delay become a bottleneck for high-speed operation. Moreover, with the increasing density of chips, the number of interchip connections grows greatly as more and more functions are put on the same chip; thus, the size and performance of the chip are dominated by wiring rather than devices. One of the most promising approaches to these interconnection problems is the use of multiple-valued logic (MVL) inside the chip [Han93, Smi88]. The number of interconnections can be directly reduced with a multiple-valued signal representation. The reduced interconnection complexity makes the chip area and delay much smaller, leading to reduced crosstalk noise and improved reliability. Thus, including multiple-valued logic in an otherwise mixed design, consisting of analog and binary logic, can make the transition from the analog to the digital world much smoother while also improving overall system performance. As the sizes of integrated devices decrease, maximum voltage ratings also decrease rapidly. Although reduced supply voltages do not restrict the design of digital circuits, it is harder to design high-performance analog and multiple-valued integrated circuits in new processes. As an alternative to voltage-mode signal processing, current-mode circuit techniques, which use current as the signal carrier, are drawing strong attention today due to their potential application in the design of high-speed mixed-signal processing circuits in low-voltage standard VLSI CMOS technologies. Industrial interest in this field has been propelled by innovative proposals for filters, data converters, and IC prototypes in the high-frequency range [Tou90, Kol00]. Further, in MVL design using conventional CMOS processing, different current levels can easily be used to represent different logic values. Thus an integrated approach to the design of analog, multi-valued, and binary logic circuits using current-mode techniques is worth considering. The work presented in this thesis is an effort to reaffirm the utility of current-mode circuit techniques in some existing as well as some new areas of circuit design. We present new algorithms for the synthesis of a class of analog and multiple-valued logic circuits assuming an underlying set of CMOS current-mode building blocks. Next, we present a quaternary current-mode signaling scheme employing a simple encoder and decoder architecture for improving the signal delay characteristics of long interconnects in digital logic blocks. As an interface between the analog and digital domains, we present an architecture for a current-mode flash A/D converter. Finally, low power being a dominant design constraint in today's IC technology, we present a scheme for static power minimization in a class of current-mode circuits.
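To make the interconnect-reduction argument concrete, the toy Python sketch below packs pairs of binary bits into quaternary symbols, halving the number of lines a bus needs; the four nominal current levels and the noise figure are assumptions chosen purely for illustration, not circuit values from the thesis.

```python
import numpy as np

LEVELS_UA = np.array([0.0, 10.0, 20.0, 30.0])  # assumed nominal currents (uA) for logic 0..3

def bits_to_quaternary(bits):
    """Pack bit pairs (MSB first) into quaternary symbols: two wires become one."""
    pairs = np.asarray(bits).reshape(-1, 2)
    return pairs[:, 0] * 2 + pairs[:, 1]

def quaternary_to_bits(symbols):
    """Unpack quaternary symbols back into the original bit pairs."""
    symbols = np.asarray(symbols)
    return np.column_stack((symbols // 2, symbols % 2)).reshape(-1)

def encode_currents(bits):
    """Encoder: bit stream -> per-symbol current level driven onto the interconnect."""
    return LEVELS_UA[bits_to_quaternary(bits)]

def decode_currents(currents):
    """Decoder: slice received currents to the nearest level, then unpack to bits."""
    currents = np.asarray(currents)
    symbols = np.argmin(np.abs(currents[:, None] - LEVELS_UA[None, :]), axis=1)
    return quaternary_to_bits(symbols)

bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
tx = encode_currents(bits)                                  # 4 symbols carry 8 bits
rx = decode_currents(tx + np.random.default_rng(0).normal(0.0, 1.0, tx.shape))
assert np.array_equal(rx, bits)                             # survives small current noise
```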
284

Algorithms for image segmentation in fermentation.

Mkolesia, Andrew Chikondi. January 2011 (has links)
M. Tech. Mathematical Technology. / The aim of this research project is to mathematically analyse froth patterns and to build a database of the images at different stages of the fermentation process, so that a decision-making procedure can be developed which enables a computer to react according to what has been observed. This would allow around-the-clock observation, which is not possible with humans. In addition, mechanised decision-making would minimise errors usually associated with human actions. Different mathematical algorithms for image processing will be considered and compared. These algorithms have been designed for different image processing situations. In this dissertation the algorithms will be applied to froth images in particular and will be used to simulate the human eye for decision-making in the fermentation process. The preamble of the study will be to consider algorithms for the detection of edges and then to analyse these edges. MATLAB will be used to do the pre-processing of the images and to write and test any new algorithms designed for this project.
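As a minimal counterpart to the edge-detection step described above (the dissertation itself works in MATLAB), the Python sketch below computes a gradient-magnitude edge map with Sobel filters; the filter choice and threshold are illustrative assumptions rather than the algorithms developed in the project.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image, threshold=0.2):
    """Boolean edge mask of a grayscale froth image.

    image: 2-D float array scaled to [0, 1]; edges are pixels whose normalized
    gradient magnitude exceeds the threshold.
    """
    gx = ndimage.sobel(image, axis=1, mode="reflect")
    gy = ndimage.sobel(image, axis=0, mode="reflect")
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12
    return magnitude > threshold

# usage on a synthetic bubble-like test image (a filled disc)
yy, xx = np.mgrid[0:128, 0:128]
test = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
print(sobel_edges(test).sum(), "edge pixels found")
```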
285

Dynamic compressive sensing: sparse recovery algorithms for streaming signals and video

Asif, Muhammad Salman 20 September 2013 (has links)
This thesis presents compressive sensing algorithms that utilize system dynamics in the sparse signal recovery process. These dynamics may arise due to a time-varying signal, streaming measurements, or an adaptive signal transform. Compressive sensing theory has shown that under certain conditions, a sparse signal can be recovered from a small number of linear, incoherent measurements. The recovery algorithms, however, are for the most part static: they focus on finding the solution for a fixed set of measurements, assuming a fixed (sparse) structure of the signal. In this thesis, we present a suite of sparse recovery algorithms that cater to various dynamical settings. The main contributions of this research can be classified into the following two categories: 1) efficient algorithms for fast updating of L1-norm minimization problems in dynamical settings; 2) efficient modeling of the signal dynamics to improve the reconstruction quality; in particular, we use inter-frame motion in videos to improve their reconstruction from compressed measurements. Dynamic L1 updating: We present homotopy-based algorithms for quickly updating the solution of various L1 problems whenever the system changes slightly. Our objective is to avoid solving an L1-norm minimization program from scratch; instead, we use information from an already solved L1 problem to quickly update the solution for a modified system. Our proposed updating schemes can incorporate time-varying signals, streaming measurements, iterative reweighting, and data-adaptive transforms. Classical signal processing methods, such as recursive least squares and the Kalman filter, provide solutions for similar problems in the least squares framework, where each solution update requires a simple low-rank update. We use homotopy continuation for updating L1 problems, which requires a series of rank-one updates along the so-called homotopy path. Dynamic models in video: We present a compressive-sensing-based framework for the recovery of a video sequence from incomplete, non-adaptive measurements. We use a linear dynamical system to describe the measurements and the temporal variations of the video sequence, where adjacent images are related to each other via inter-frame motion. Our goal is to recover a quality video sequence from the available set of compressed measurements, for which we exploit the spatial structure using sparse representations of individual images in a spatial transform, and the temporal structure, exhibited by dependencies among neighboring images, using inter-frame motion. We discuss two problems in this work: low-complexity video compression and accelerated dynamic MRI. Even though the processes for recording compressed measurements are quite different in these two problems, the procedure for reconstructing the videos is very similar.
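For reference, the sketch below solves a single static instance of the usual L1-regularized least-squares program with plain iterative soft-thresholding (ISTA); it is a generic baseline solver meant only to fix notation, not the homotopy-based updating schemes developed in the thesis.

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iters=500):
    """Minimize 0.5*||A x - y||_2^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L the squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# usage: recover a 10-sparse signal of length 256 from 80 random measurements
rng = np.random.default_rng(1)
n, m, k = 256, 80, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

The homotopy approach described above instead tracks how the solution of such a program moves as the measurements, matrix, or weights change slightly, at the cost of a few rank-one updates rather than a full re-solve.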
286

Mathematical analysis of a dynamical system for sparse recovery

Balavoine, Aurele 22 May 2014 (has links)
This thesis presents the mathematical analysis of a continuous-time system for sparse signal recovery. Sparse recovery arises in Compressed Sensing (CS), where signals of large dimension must be recovered from a small number of linear measurements, and can be accomplished by solving a complex optimization program. While many solvers have been proposed and analyzed for solving such programs digitally, their high complexity currently prevents their use in real-time applications. By contrast, a continuous-time neural network implemented in analog VLSI could lead to significant gains in both time and power consumption. The contributions of this thesis are threefold. First, convergence results for neural networks that solve a large class of nonsmooth optimization programs are presented. These results extend previous analyses by allowing the interconnection matrix to be singular and the activation function to have many constant regions and to grow without bound. The exponential convergence rate of the networks is demonstrated and an analytic expression for the convergence speed is given. Second, these results are specialized to the L1-minimization problem, the best-known approach to the sparse recovery problem. The analysis relies on standard techniques in CS and proves that the network takes an efficient path toward the solution for parameters that match results obtained for digital solvers. Third, the convergence rate and accuracy of both the continuous-time system and its discrete-time equivalent are derived in the case where the underlying sparse signal is time-varying and the measurements are streaming. Such a study is of great interest for practical applications that need to operate in real time, when the data are streaming at high rates or the computational resources are limited. In conclusion, while existing analyses concentrated on discrete-time algorithms for the recovery of static signals, this thesis provides convergence rate and accuracy results for the recovery of static signals using a continuous-time solver, and for the recovery of time-varying signals with both a discrete-time and a continuous-time solver.
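As one assumed concrete instance of the class of continuous-time networks analyzed, the sketch below integrates the standard locally competitive architecture (LCA) dynamics for L1 minimization with a forward-Euler step; the step size, threshold, and problem size are arbitrary illustrative choices, and the thesis's analysis covers a broader family of activation functions.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-threshold activation, the nonlinearity associated with the L1 norm."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_recover(Phi, y, lam=0.05, tau=1.0, dt=0.05, n_steps=2000):
    """Forward-Euler simulation of LCA dynamics:

        tau * du/dt = Phi^T y - u - (Phi^T Phi - I) a,   a = soft_threshold(u, lam)

    whose fixed point a* minimizes 0.5*||y - Phi a||^2 + lam*||a||_1.
    """
    n = Phi.shape[1]
    u = np.zeros(n)
    drive = Phi.T @ y
    inhibition = Phi.T @ Phi - np.eye(n)
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (drive - u - inhibition @ a)
    return soft_threshold(u, lam)

# usage: recover a 6-sparse vector of length 128 from 50 random measurements
rng = np.random.default_rng(2)
n, m = 128, 50
a_true = np.zeros(n)
a_true[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
a_hat = lca_recover(Phi, Phi @ a_true)
print("recovery error:", np.linalg.norm(a_hat - a_true))
```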
287

Real time extraction of ECG fiducial points using shape based detection

Darrington, John Mark January 2009 (has links)
The electrocardiograph (ECG) is a common clinical and biomedical research tool used for both diagnostic and prognostic purposes. In recent years, computer-aided analysis of the ECG has enabled cardiographic patterns to be found which were hitherto not apparent. Many of these analyses rely upon the segmentation of the ECG into separate time-delimited waveforms. The instants delimiting these segments are called the fiducial points.
288

Desenvolvimento de um demodulador digital e de um ambiente de simulação para sistema de telemedidas / Development of a digital demodulator and a simulation environment for a telemetry system

Okajima, Henri Shinichi de Souza 16 August 2018 (has links)
Advisor: Luís Geraldo Pedroso Meloni / Master's thesis (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This dissertation presents the results of the research and implementation of a demodulation system for the tracking receiver of a telemetry radar. A telemetry radar is responsible for identifying a set of measurements taken on a space object and sent to the antenna through a transponder. The telemetry antenna must track the object, remaining pointed in its direction; to perform this function, the single-channel monopulse technique is used. In single-channel monopulse, the receiver's digital demodulator carries out envelope identification and time demultiplexing in order to recover the angular error values. The implementation resulted in a digital demodulator board, built around a Field Programmable Gate Array (FPGA) of the Altera Cyclone II family and a Freescale controller, mounted on a four-layer printed circuit board (PCB) designed to interface the digital signals that control the telemetry system and to condition the analog signals for further processing. Besides interfacing with dedicated boards (for example, automatic frequency control, automatic gain control, and a test generator), it also provides a Controller Area Network (CAN) interface for communication with the antenna servomechanism control modules and the user interface, and is housed in a shielded drawer to avoid electromagnetic interference. A simulation environment for the digital demodulator was also developed in Matlab, allowing the results to be checked against expectations and further test scenarios to be defined. / Master's degree / Telecomunicações e Telemática / Mestre em Engenharia Elétrica
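The two generic demodulator steps named above, envelope identification followed by time demultiplexing, are sketched below in Python on a toy amplitude-modulated signal; the sample rate, slot length, and slot interpretation are assumptions for the sketch and not the FPGA design described in the dissertation.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(x):
    """Envelope of a real IF signal via the analytic signal (Hilbert transform)."""
    return np.abs(hilbert(x))

def demux_time_slots(env, slot_len):
    """Average the envelope over consecutive time slots of slot_len samples.

    In a single-channel monopulse receiver, the multiplexed slot amplitudes
    carry the angular-error information recovered by the demodulator.
    """
    n_slots = len(env) // slot_len
    return env[: n_slots * slot_len].reshape(n_slots, slot_len).mean(axis=1)

# usage: a 5 kHz carrier sampled at 100 kHz whose amplitude alternates per 2 ms slot
fs, f0, slot_len = 100_000.0, 5_000.0, 200
amp = np.repeat([1.0, 0.6, 1.0, 0.6, 1.0], slot_len)      # five 2 ms slots
t = np.arange(amp.size) / fs
x = amp * np.cos(2 * np.pi * f0 * t)
print(demux_time_slots(envelope(x), slot_len))            # roughly [1.0, 0.6, 1.0, 0.6, 1.0]
```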
289

Processamento largamente linear em arranjo de antenas = proposta, avaliação e implementação prática de algoritmos / Widely linear processing in antenna arrays : proposal, evaluation and practical implementation of algorithms

Chinatto Júnior, Adilson Walter 02 November 2011 (has links)
Advisors: João Marcos Travassos Romano, Cynthia Cristina Martins Junqueira / Master's thesis (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Widely linear processing, developed during the 1990s, has led to improved performance of adaptive algorithms in certain situations involving improper signals. When applied to antenna arrays, this type of processing has the potential to be more robust and efficient than classical filtering techniques. This work therefore extends several classical adaptive beamforming algorithms to the widely linear form, verifying through simulations the performance gains obtained in the task of interference mitigation with antenna arrays. Trained, constrained, and blind algorithms are evaluated, covering a relatively broad range of usage scenarios. Addressing the use of antenna arrays in scenarios where the incident signals have real-valued modulation, optimizations of the widely linear algorithms are proposed that reduce the computational complexity while maintaining the performance of the original algorithms. These optimizations are applied to trained, constrained, and blind algorithms, and their performance is compared through simulations with that of the original widely linear algorithms and of the strictly linear algorithms. Finally, an antenna array test platform is implemented in hardware based on a programmable logic device (FPGA), allowing practical experiments to be carried out. A set of measurements taken with the platform is presented, including antenna characterization, non-adaptive beamforming, and interference mitigation using adaptive algorithms. / Master's degree / Telecomunicações e Telemática / Mestre em Engenharia Elétrica
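To make the strictly linear vs. widely linear distinction concrete, the sketch below implements a trained (reference-signal) LMS beamformer in widely linear form, where the output uses both the snapshot and its complex conjugate; the array size, signal model, and step size are illustrative assumptions, and setting the conjugate weights to zero recovers the classical strictly linear LMS.

```python
import numpy as np

def wl_lms_beamformer(X, d, mu=0.01):
    """Widely linear LMS: y[k] = w^H x[k] + g^H conj(x[k]).

    X: (n_antennas, n_snapshots) complex snapshots, d: training (desired) signal.
    Skipping the g update (keeping g = 0) gives the strictly linear LMS.
    """
    n_ant, n_snap = X.shape
    w = np.zeros(n_ant, dtype=complex)
    g = np.zeros(n_ant, dtype=complex)
    y = np.zeros(n_snap, dtype=complex)
    for k in range(n_snap):
        x = X[:, k]
        y[k] = w.conj() @ x + g.conj() @ x.conj()
        e = d[k] - y[k]
        w += mu * x * np.conj(e)          # gradient step for the linear branch
        g += mu * x.conj() * np.conj(e)   # gradient step for the conjugate branch
    return w, g, y

# usage: 4-element array, BPSK (real-valued, hence improper) desired user at broadside
rng = np.random.default_rng(3)
n_ant, n_snap = 4, 2000
steer = np.ones(n_ant, dtype=complex)               # broadside steering vector
d = 2.0 * rng.integers(0, 2, n_snap) - 1.0          # BPSK training symbols
interf_dir = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(np.deg2rad(40)))
interf = (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap)))
X = np.outer(steer, d) + np.outer(interf_dir, interf) + noise
w, g, y = wl_lms_beamformer(X, d)
print("residual MSE:", np.mean(np.abs(d - y) ** 2))
```

Because the BPSK training signal is improper, the conjugate branch carries useful information here; for proper (circularly symmetric) signals it converges toward zero and the two forms coincide.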
290

Sistema de monitoramento de falhas em tubulações por meio de processamento digital de sinais / Monitoring pipeline failures by means of digital signal processing

Berto Junior, Carlos Antonio 16 May 2008 (has links)
Advisors: Elias Basile Tambourgi, Sergio Ricardo Lourenço / Master's thesis (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Quimica / Abstract: Natural gas carries contaminants that, besides being corrosive, compromise its quality for consumption. Condensation of residual water in the gas can start a localized corrosive process, which can damage the structure of gas pipelines. Because of the great length of the ducts, corrosive components degrade the gas quality and cause major operational problems. To evaluate the reduction in the metallic wall thickness of the duct caused by corrosion, and to identify cracks and other non-conformities, continuous monitoring and predictive maintenance techniques and methods are essential. The techniques currently adopted for such evaluation consist of inserting an inspection device, known as a pipeline inspection gauge (PIG), that scans the pipe using ultrasound, thermography, optical sensors, Hall-effect sensors, and electrical resistance measurements, in addition to special field surveys carried out on the ground surface. The guiding goal of the present work was the optimization of the detection process, aiming at cost reduction and precise identification of faults. To this end, an autonomous PIG for continuous monitoring of the internal region of the ducts was implemented, equipped with infrared cameras, which distinguishes this equipment from current devices built for the same purpose. The cameras supply images that are digitally processed and recorded in a non-volatile memory on board the equipment. Software is used to inspect the images and, at the same time, identify the non-conformities present. This information is used to guide the decision about the maintenance process to be employed to solve the problems found. / Master's degree / Sistemas de Processos Quimicos e Informatica / Mestre em Engenharia Química
