1 |
An 8-bit 20-MS/s Pipeline ADC and a Low-Power 5-bit 2.4-MS/s Successive Approximation ADC for ZigBee Receivers
Cheng, Kuang-Ting, 07 July 2006 (has links)
The first topic of this thesis proposes an 8-bit, 20 MSample/s pipeline analog-to-digital converter (ADC). An amplifier-sharing technique is employed to reduce the overall number of amplifiers, and dynamic comparators are adopted to reduce power consumption. The proposed design is implemented in a 0.35 μm CMOS technology. Simulation results show a maximum power consumption of 45 mW from a 3.3 V supply and an SFDR of 45 dB for a 5 MHz sinusoidal input.
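For readers unfamiliar with the architecture, the following minimal Python sketch (not taken from the thesis) models the generic behaviour of a pipeline ADC: each stage coarsely quantizes its input, subtracts the corresponding DAC level, and amplifies the residue for the next stage. The per-stage resolution and reference voltage are assumed values, and the model is ideal, without the inter-stage redundancy and digital correction used in real designs.

```python
# Hypothetical behavioural model of an N-stage pipeline ADC (illustration only).
def pipeline_adc(vin, n_stages=8, bits_per_stage=1, vref=1.0):
    """Return the digital code for vin in [0, vref)."""
    code = 0
    residue = vin
    for _ in range(n_stages):
        levels = 2 ** bits_per_stage
        # Coarse sub-quantization of the residue in this stage.
        d = min(int(residue / vref * levels), levels - 1)
        # Combine the stage decision into the overall code (MSB first).
        code = (code << bits_per_stage) | d
        # Subtract the DAC level and amplify the residue for the next stage.
        residue = (residue - d * vref / levels) * levels
    return code

print(pipeline_adc(0.37))  # -> 94, i.e. floor(0.37 * 2**8) for 8 one-bit stages
```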
The second topic describes a 5-bit, 2.4 MSample/s, low-power analog-to-digital converter for ZigBee receivers using the 868/915 MHz band. The converter uses the successive approximation architecture. Implemented in a 0.18 μm CMOS technology, simulation results show a worst-case power consumption of only 449.6 μW. The converter achieves a maximum differential nonlinearity of 0.3 LSB and a maximum integral nonlinearity of 0.5 LSB.
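Likewise, a minimal sketch (again not from the thesis) of the successive-approximation search that such a converter realizes with a comparator and a capacitive DAC; the 5-bit resolution follows the abstract, while the reference voltage is an assumed value and the DAC is modeled arithmetically.

```python
# Hypothetical model of the successive-approximation search (illustration only).
def sar_adc(vin, n_bits=5, vref=1.0):
    """Binary-search the code whose DAC voltage best matches vin in [0, vref)."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)              # tentatively set the next bit
        vdac = trial * vref / (1 << n_bits)    # DAC output for the trial code
        if vin >= vdac:                        # comparator decision
            code = trial                       # keep the bit, else leave it cleared
    return code

print(sar_adc(0.63))  # -> 20 (binary 10100) for vref = 1.0 V
```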
|
2 |
The Discrimination of Successive Sensory Impulses
Abel, Sharon Mildred, 10 1900 (has links)
Preliminary experiments were conducted to evaluate a theoretical model for predicting temporal numerosity data. The model was based on a hypothetical central unit of duration and described the gating of sequentially presented auditory pulses. Experiment 1 offered partial support for the prediction that events occurring within one unit 50 milliseconds in duration would be perceived as simultaneous. Results of Experiment 2 suggested that empty duration units occurring between sequential events would not affect the number reported; the estimate of the unit was 60 milliseconds. Experiment 3, an attempt to improve methodology, suggested values of the unit of approximately 75 and 106 milliseconds.

Inadequacies of the model were discussed, and control experiments were considered to eliminate such cues for discrimination as duration and intensity differences. / Thesis / Master of Arts (MA)
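To make the gating idea concrete, here is a small illustrative sketch (not from the thesis) that counts how many events would be reported if pulses falling within one open unit of fixed duration are perceived as a single event; the 50 ms unit and the pulse times are assumed values, and the thesis model may gate differently.

```python
# Hypothetical gating rule: pulses that fall inside one open unit of fixed
# duration are reported as a single event (illustration only).
def perceived_count(pulse_times_ms, unit_ms=50):
    count = 0
    unit_end = float("-inf")
    for t in sorted(pulse_times_ms):
        if t >= unit_end:          # this pulse opens a new unit -> a new event
            count += 1
            unit_end = t + unit_ms
        # otherwise the pulse falls inside the current unit and is absorbed
    return count

print(perceived_count([0, 30, 70, 200]))  # -> 3: the 0 ms and 30 ms pulses merge
```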
|
3 |
Décodage de codes polaires sur des architectures programmables / Polar decoding on programmable architectures
Léonardon, Mathieu, 13 December 2018 (has links)
Les codes polaires constituent une classe de codes correcteurs d’erreurs inventés récemment qui suscite l’intérêt des chercheurs et des industriels, comme en atteste leur sélection pour le codage des canaux de contrôle dans la prochaine génération de téléphonie mobile (5G). Un des enjeux des futurs réseaux mobiles est la virtualisation des traitements numériques du signal, et en particulier les algorithmes de codage et de décodage. Afin d’améliorer la flexibilité du réseau, ces algorithmes doivent être décrits de manière logicielle et être déployés sur des architectures programmables. Une telle infrastructure de réseau permet de mieux répartir l’effort de calcul sur l’ensemble des noeuds et d’améliorer la coopération entre cellules. Ces techniques ont pour but de réduire la consommation d’énergie, d’augmenter le débit et de diminuer la latence des communications. Les travaux présentés dans ce manuscrit portent sur l’implémentation logicielle des algorithmes de décodage de codes polaires et la conception d’architectures programmables spécialisées pour leur exécution.Une des caractéristiques principales d’une chaîne de communication mobile est l’instabilité du canal de communication. Afin de remédier à cette instabilité, des techniques de modulations et de codages adaptatifs sont utilisées dans les normes de communication.Ces techniques impliquent que les décodeurs supportent une vaste gamme de codes : ils doivent être génériques. La première contribution de ces travaux est l’implémentation logicielle de décodeurs génériques des algorithmes de décodage "à Liste" sur des processeurs à usage général. En plus d’être génériques, les décodeurs proposés sont également flexibles.Ils permettent en effet des compromis entre pouvoir de correction, débit et latence de décodage par la paramétrisation fine des algorithmes. En outre, les débits des décodeurs proposés atteignent les performances de l’état de l’art et, dans certains cas, les dépassent.La deuxième contribution de ces travaux est la proposition d’une nouvelle architecture programmable performante spécialisée dans le décodage de codes polaires. Elle fait partie de la famille des processeurs à jeu d’instructions dédiés à l’application. Un processeur de type RISC à faible consommation en constitue la base. Cette base est ensuite configurée,son jeu d’instructions est étendu et des unités matérielles dédiées lui sont ajoutées. Les simulations montrent que cette architecture atteint des débits et des latences proches des implémentations logicielles de l’état de l’art sur des processeurs à usage général. La consommation énergétique est réduite d’un ordre de grandeur. En effet, lorsque l’on considère le décodage par annulation successive d’un code polaire (1024,512), l’énergie nécessaire par bit décodé est de l’ordre de 10 nJ sur des processeurs à usage général contre 1 nJ sur les processeurs proposés.La troisième contribution de ces travaux est également une architecture de processeur à jeu d’instructions dédié à l’application. Elle se différencie de la précédente par l’utilisation d’une méthodologie de conception alternative. Au lieu d’être basée sur une architecture de type RISC, l’architecture du processeur proposé fait partie de la classe des architectures déclenchées par le transport. Elle est caractérisée par une plus grande modularité qui permet d’améliorer très significativement l’efficacité du processeur. Les débits mesurés sont alors supérieurs à ceux obtenus sur les processeurs à usage général. 
La consommation énergétique est réduite à environ 0.1 nJ par bit décodé pour un code polaire (1024,512) avec l’algorithme de décodage par annulation successive. Cela correspond à une réduction de deux ordres de grandeur en comparaison de la consommation mesurée sur des processeurs à usage général. / Polar codes are a recently invented class of error-correcting codes that are of interest to both researchers and industry, as evidenced by their selection for the coding of control channels in the next generation of cellular mobile communications (5G). One of the challenges of future mobile networks is the virtualization of digital signal processing, including channel encoding and decoding algorithms. In order to improve network flexibility, these algorithms must be written in software and deployed on programmable architectures. Such a network infrastructure allows dynamic balancing of the computational effort across the network, as well as inter-cell cooperation. These techniques are designed to reduce energy consumption, increase throughput and reduce communication latency. The work presented in this manuscript focuses on the software implementation of polar code decoding algorithms and the design of programmable architectures specialized in their execution. One of the main characteristics of a mobile communication chain is that the state of the communication channel changes over time. In order to address this issue, adaptive modulation and coding techniques are used in communication standards. These techniques require the decoders to support a wide range of codes: they must be generic. The first contribution of this work is the software implementation of generic decoders for "List" polar decoding algorithms on general-purpose processors. In addition to their genericity, the proposed decoders are also flexible: trade-offs between correction power, throughput and decoding latency are enabled by fine-tuning the algorithms. In addition, the throughputs of the proposed decoders achieve state-of-the-art performance and, in some cases, exceed it. The second contribution of this work is the proposal of a new high-performance programmable architecture specialized in polar code decoding. It belongs to the family of Application-Specific Instruction-set Processors (ASIP). The base architecture is a low-power RISC processor; this base is then configured, its instruction set is extended and dedicated hardware units are added. Simulations show that this architecture achieves throughputs and latencies close to state-of-the-art software implementations on general-purpose processors, while energy consumption is reduced by an order of magnitude: the energy required per decoded bit is about 10 nJ on general-purpose processors compared to 1 nJ on the proposed processor, when considering Successive Cancellation (SC) decoding of a (1024,512) polar code. The third contribution of this work is also the design of an ASIP architecture. It differs from the previous one by the use of an alternative design methodology: instead of being based on a RISC architecture, the proposed processor belongs to the class of Transport Triggered Architectures (TTA). It is characterized by a greater modularity that significantly improves the efficiency of the processor. The measured throughputs are then higher than those obtained on general-purpose processors, and the energy consumption is reduced to about 0.1 nJ per decoded bit for a (1024,512) polar code with the SC decoding algorithm. This corresponds to a reduction of two orders of magnitude compared to the consumption measured on general-purpose processors.
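For readers unfamiliar with successive-cancellation decoding, the following didactic Python sketch implements the standard SC recursion with the min-sum approximation. It is not code from the thesis, and the toy frozen set and LLRs in the example are chosen purely for illustration.

```python
import numpy as np

# Didactic sketch of successive-cancellation (SC) decoding, not the thesis code.
def f(a, b):                 # check-node update (min-sum approximation)
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, x_left):         # variable-node update, using re-encoded left-half bits
    return b + (1 - 2 * x_left) * a

def sc_decode(llr, frozen):
    """Return (decoded bits u_hat, re-encoded bits x_hat) for channel LLRs `llr`
    (length a power of two) and boolean mask `frozen` (frozen bits assumed 0)."""
    N = len(llr)
    if N == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return np.array([u]), np.array([u])
    a, b = llr[:N // 2], llr[N // 2:]
    u_l, x_l = sc_decode(f(a, b), frozen[:N // 2])         # decode the left half first
    u_r, x_r = sc_decode(g(a, b, x_l), frozen[N // 2:])    # then the right half
    return np.concatenate([u_l, u_r]), np.concatenate([(x_l + x_r) % 2, x_r])

# Toy check: a (4, 2) code with the first two positions frozen; the LLRs below
# correspond to a noiseless observation of the codeword for u = [0, 0, 1, 1].
llr = np.array([4.0, -4.0, 4.0, -4.0])
frozen = np.array([True, True, False, False])
print(sc_decode(llr, frozen)[0])    # -> [0 0 1 1]
```

The list decoders studied in the thesis extend this recursion by keeping several candidate paths alive at each bit decision; that extension is not shown here.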
|
4 |
Competition in successive oligopolies
Zanaj, Skerdilajda, 18 April 2008 (has links)
Successive markets constitute a natural framework for studying the value chain. This chain is built through the technological linkage between markets where inputs and the corresponding outputs are produced. If goods pass through a chain of imperfectly competitive markets, markups are added to costs at each step, over and above the value added. This thesis first proposes a unified framework to analyze competition in successive oligopolies. Analyzing and developing such a general framework forms a basis for the analysis of the entry of new firms and of collusive agreements within the same market, such as horizontal mergers, or across different markets, such as vertical integration. The results bring new insights on the equilibrium outcomes of both collusive agreements and the entry of new firms.
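As a purely illustrative aside (not taken from the thesis), the sketch below works out the textbook double-marginalization effect in the simplest two-tier chain with linear demand, showing how a markup added at each step raises the final price and lowers the quantity relative to an integrated firm; all parameter values are arbitrary.

```python
# Hypothetical two-tier chain with linear demand p = a - b*q and marginal cost c.
a, b, c = 10.0, 1.0, 2.0

# Integrated monopolist: maximize (p - c) * q  ->  q = (a - c) / (2b).
q_int = (a - c) / (2 * b)
p_int = a - b * q_int

# Successive monopolies: the upstream firm sets a wholesale price w, anticipating
# that the downstream firm will respond with q = (a - w) / (2b).
w = (a + c) / 2                    # upstream optimum of (w - c) * (a - w) / (2b)
q_suc = (a - w) / (2 * b)
p_suc = a - b * q_suc

print(f"integrated: q = {q_int:.2f}, p = {p_int:.2f}")   # q = 4.00, p = 6.00
print(f"successive: q = {q_suc:.2f}, p = {p_suc:.2f}")   # q = 2.00, p = 8.00
```

The thesis studies oligopolies rather than a single firm at each tier, but the stacking of markups along the chain is the same basic force.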
|
5 |
Growth and characterization of zinc oxide (ZnO) nanostructures for photovoltaic applications / Croissance et caractérisation des nanostructures de l'oxyde de zinc (ZnO) pour des applications photovoltaïques
El Zein, Basma, 07 November 2012 (has links)
Le développement des nanotechnologies offre de nouvelles perspectives pour la conception des cellules solaires à fort rendement de conversion. Jusqu’à présent les efforts se sont portés principalement sur des structures à base de semi-conducteurs, de métaux et de polymères. Dans nos travaux, nous avons considéré des nanoparticules de Sulfure de Plomb (PbS) pour lesquelles l'énergie de bande interdite et les propriétés optiques sont fonction de la taille de la particule afin de tirer parti de l'ensemble du spectre optique couvert par l'énergie solaire. Nous avons également considéré des nanofils d'oxyde de zinc (ZnO) pour la séparation et le transport des charges photo-créées. Nous pensons que l'association des nanoparticules de PbS avec des nanofils de ZnO devrait pouvoir augmenter considérablement le rendement des cellules solaires. Dans ce but, nous avons démontré la croissance auto-ordonnée des nanofils de ZnO sur substrats silicium et verre par dépôt laser pulsé (pulsed laser deposition) utilisant le réseau de nanoparois de ZnO en forme de nid d'abeille comme couche germe. Nous avons démontré que les conditions de croissance sont essentielles pour contrôler la cristallinité, la morphologie des nanofils de ZnO, ainsi que la densité de défauts de croissance. Les analyses MEB, DRX, TEM et HR-TEM montrent que nous avons obtenu des nanostructures très cristallines et orientées verticalement. Nous avons également démontré la croissance in-situ de nanoparticules de PbS sans ligand sur la surface des nanofils de ZnO verticaux à l'aide de la technique SILAR (Successive Ionic Layer Adsorption and Reaction). Nous avons constaté que les nanoparticules de PbS sont fortement accrochées à la surface des nanofils de ZnO avec différentes dimensions et des densités variables. Ces résultats ont été obtenus sans introduire de matière organique (ligand) qui pourrait perturber à la fois la structure électronique à l'interface ZnO/PbS et le transfert des électrons du PbS au ZnO. Les analyses MEB, TEM et HR-TEM confirment le bon accrochage des nanoparticules de PbS sur les nanofils de ZnO. Leur forme est sphérique et elles sont poly-cristallines. A la fin de ce travail de thèse nous proposons une hétérojonction p-PbS/n-ZnO constituée de nanoparticules de PbS dopées p et de nanofils de ZnO dopés n pour de futures applications en photovoltaïque. / To date, the development of nanotechnology has launched new ways to design efficient solar cells. Strategies have been employed to develop nanostructured architectures of semiconductors, metals, and polymers for solar cells. In this research we have considered lead sulfide (PbS) nanoparticles, with their tunable band gap and optical properties, to harvest the entire solar spectrum, which can improve optical absorption and charge generation. Zinc oxide (ZnO) nanowires, on the other hand, provide charge separation and transport. ZnO nanowires sensitized with PbS nanoparticles might therefore significantly impact the power conversion efficiency of solar cells. Driven by these unique properties, we demonstrate the successful growth of self-catalyzed ZnO nanowires on silicon and glass substrates by pulsed laser deposition (PLD), using a ZnO nanowall network with a honeycomb structure as the seed layer. We identified that the growth parameters are vital to control the crystallinity, morphology, and defect levels of the synthesized ZnO nanowires. SEM, XRD, TEM, and HRTEM analyses show that the nanostructures are highly crystalline and vertically oriented.
We also report the in-situ growth of PbS nanoparticles without a linker on the surface of well-oriented ZnO nanowires by the successive ionic layer adsorption and reaction (SILAR) technique. The PbS nanoparticles are packed tightly on the surface of the ZnO nanowires with different sizes and densities, without insulating organic ligands, which might otherwise affect both the electronic structure at the interface and the electron-transfer rate. SEM, TEM, HRTEM, PL, and XRD analyses confirm the attachment of spherical, polycrystalline PbS nanoparticles. At the end of the thesis we propose the p-PbS/n-ZnO hetero-junction with its future applications in solar cells.
|
6 |
A 12-bit, 10 Msps two-stage SAR-based pipeline ADC
Gandara, Miguel Francisco, 23 April 2013 (has links)
The market for battery-powered communications devices has grown significantly in recent years. These devices require a large number of analog-to-digital converters (ADCs) to transform wireless and other physical data into the digital signals required by digital signal processing elements and microprocessors. For these applications, power efficiency and accuracy are of the utmost importance. Successive approximation register (SAR) ADCs are frequently used in power-constrained applications, but their main limitation is their low sampling rate. In this work, a two-stage pipelined ADC is presented that attempts to mitigate some of the sampling-rate limitations of a SAR while maintaining its power and resolution advantages. Special techniques are used to reduce the overall sampling capacitance required in both SAR stages and to increase the linearity of the multiplying digital-to-analog converter (MDAC) output. The SAR sampling network, control logic, and MDAC blocks are completely implemented; ideal components were used for the clocking, comparators, and switches. At the end of this design, a figure of merit of 51 fJ/conversion-step was achieved.
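For context only (and not from the thesis), figures of merit of this kind are usually computed with the Walden formula, FoM = P / (2^ENOB · fs). The sketch below shows the arithmetic with assumed power and ENOB values, chosen only so that the result lands near the quoted 51 fJ/conversion-step.

```python
# Hypothetical Walden figure-of-merit calculation. The power and ENOB below are
# assumptions (not taken from the thesis), picked so the result is near 51 fJ.
P    = 1.6e-3        # assumed power consumption in watts
ENOB = 11.6          # assumed effective number of bits
fs   = 10e6          # sampling rate from the abstract, 10 MS/s

fom = P / (2 ** ENOB * fs)                     # joules per conversion-step
print(f"{fom * 1e15:.1f} fJ/conversion-step")  # -> 51.5 fJ/conversion-step
```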
|
7 |
Successive Estimation Method of Locating Dipoles based on QR Decomposition using EEG Arrays
Wang, Yiming, 07 1900 (has links)
EEG is a noninvasive technique useful for human brain mapping and for the estimation of neural electrical activity in the human brain. A goal of processing the EEG signals of a subject is the localization of neural current sources in the brain, known as dipoles. Although this location estimation problem can be modeled as a particular kind of parameter estimation problem, as in array signal processing, the nonlinear structure of an EEG electrode array, which is much more complicated than that of a traditional sensor array, makes the problem more difficult.

In this thesis, we formulate the inverse problem of the forward model that computes the scalp EEG at a finite set of sensors from multiple dipole sources. It is observed that the geometric structure of the EEG array plays a crucial role in ensuring a unique solution to this problem. We first present a necessary and sufficient condition, in the model of a single rotating dipole, that guarantees its location to be uniquely determined when the second-order statistics of the EEG observations are available. In addition, for a single rotating dipole, a closed-form solution that uniquely determines its position is obtained by exploiting the geometrical structure of the EEG array.

In the case of multiple dipoles, we suggest the use of the Maximum Likelihood (ML) estimator, which is often considered optimum in parameter estimation. We propose an efficient localization algorithm based on QR decomposition. Depending on whether or not the probability density functions of the dipole amplitude and the noise are available, we use either the non-coherent ML or the LS criterion to develop a unified successive localization algorithm, so that solving the original multi-dipole optimization problem can be approximated by successively solving a series of single-dipole optimization problems. Numerical simulations show that our methods have much smaller estimation errors than the existing RAP-MUSIC method under non-ideal conditions such as low SNR with a small number of EEG sensors. / Thesis / Master of Applied Science (MASc)
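As a generic illustration of the numerical tool named above, and not of the thesis algorithm itself, the sketch below solves a linear least-squares fit of source amplitudes via QR decomposition, the kind of sub-step a QR-based localization scheme relies on; the lead-field matrix and amplitudes here are placeholder data.

```python
import numpy as np

# Hypothetical illustration: a lead-field matrix A maps source amplitudes to
# sensor readings; recover the amplitudes in the least-squares sense via QR.
rng = np.random.default_rng(1)
A = rng.standard_normal((32, 3))      # 32 EEG sensors, 3 source components (placeholder)
s_true = np.array([1.0, -0.5, 2.0])   # assumed source amplitudes
y = A @ s_true + 0.01 * rng.standard_normal(32)   # noisy sensor measurements

Q, R = np.linalg.qr(A)                # A = Q R, Q orthonormal, R upper triangular
s_hat = np.linalg.solve(R, Q.T @ y)   # solve R s = Q^T y (the least-squares fit)

print(np.round(s_hat, 3))             # close to [1.0, -0.5, 2.0]
```

In the thesis, linear solves of this kind would be one building block inside a successive single-dipole search; the QR step is shown here only in its generic form.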
|
8 |
Implementation and evaluation of Polar Codes in 5G / Implementation och evaluering av Polar Codes för 5G
Rosenqvist, Tobias; Sloof, Joël, January 2019 (has links)
In today’s society the ability to communicate with one another has grown, and in the telecommunications industry much of the focus is aimed at speed. There are many ways to make transmissions even faster, of which error correction is one. Error correction codes pad messages so that they are protected from noise, while using as few bits as possible and still ensuring safe transmission. Short codes with low complexity are therefore one route to faster transmission. An error correction code that has gained a lot of attention since its first appearance in 2009 is the Polar Code, which was chosen as the 3GPP standard for the 5G control channel. The goal of the thesis is to develop and implement Polar Codes and rate matching according to the 3GPP standard 38.212. The Polar Codes are then evaluated with different block sizes and rate-matching settings. Finally, Polar Codes are compared with convolutional codes in an LTE simulation environment. The performance evaluations are presented using BLER/(Eb/N0) graphs. In this thesis a Polar encoder, rate matching, and a Polar decoder (using the Successive Cancellation algorithm) were successfully implemented. The simulation results show that Polar Codes perform better with longer block sizes and also have better BLER performance than convolutional codes for the same message lengths.
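Purely as an illustration of the polar transform at the heart of such an encoder (this is not the authors' code, and the 38.212 sub-channel allocation and rate matching are deliberately omitted), a minimal sketch:

```python
# Hypothetical minimal polar transform x = u * F^(kron n) over GF(2), computed
# with the usual butterfly structure. Frozen-bit selection and 38.212 rate
# matching are intentionally left out.
def polar_transform(u):
    """Encode a bit list u of length N = 2^n into the codeword x."""
    x = list(u)
    n = len(x)
    stage = 1
    while stage < n:
        for i in range(n):
            if i & stage == 0:          # upper branch of each butterfly
                x[i] ^= x[i + stage]    # XOR with its lower-branch partner
        stage *= 2
    return x

print(polar_transform([1, 0, 1, 1]))    # -> [1, 1, 0, 1]
```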
|
9 |
Applying the "Split-ADC" Architecture to a 16-bit, 1 MS/s differential Successive Approximation Analog-to-Digital Converter
Chan, Ka Yan, 30 April 2008 (links)
Successive approximation (SAR) analog-to-digital converters are used extensively in biomedical applications such as CAT scanners because of the high resolution they offer. Capacitor mismatch in the SAR converter is a limiting factor for its accuracy and resolution; without some form of calibration, a SAR converter can only achieve 10-bit accuracy. In industry, the CAL-DAC approach is popular for calibrating the SAR ADC, but it requires significant test time. This thesis applies the "Split-ADC" architecture, with a deterministic, digital, background self-calibration algorithm, to the SAR converter to minimize test time. In this approach, a single ADC is split into two independent halves. The two split ADCs convert the same input sample and produce two output codes; the ADC output is the average of these two codes. The difference between the two codes is used as a calibration signal to estimate the errors of the calibration parameters with a modified Jacobi method, and the estimates are used to update the calibration parameters in a negative-feedback LMS procedure. The ADC is fully calibrated when the difference signal goes to zero on average. This thesis focuses on the specific implementation of the "Split-ADC" self-calibrating algorithm on a 16-bit, 1 MS/s differential SAR ADC. The ADC can be calibrated within about 10^5 conversions, an improvement of three orders of magnitude over existing statistically based calibration algorithms. Simulation results show that the linearity of the calibrated ADC improves to within ±1 LSB.
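To illustrate the feedback principle only, here is a deliberately simplified sketch (not the thesis algorithm): two ADC halves digitize the same sample with different unknown gain errors, and the difference of their corrected outputs drives a background LMS update until the halves agree. A real Split-ADC calibration estimates many per-bit weight parameters, and the common-mode error left over in this toy version would need a separate reference.

```python
import numpy as np

# Grossly simplified illustration of the split-ADC idea (assumed toy model):
# two halves see the same sample, and the difference of their corrected codes
# is driven to zero by a background LMS loop.
rng = np.random.default_rng(0)
g_a, g_b = 1.02, 0.97          # assumed (unknown to the calibration) gain errors
c_a, c_b = 1.0, 1.0            # calibration parameters, initialized to unity
mu = 1e-3                      # LMS step size

for _ in range(20000):
    v = rng.uniform(-1, 1)     # the same input sample is seen by both halves
    y_a, y_b = g_a * v, g_b * v
    z_a, z_b = c_a * y_a, c_b * y_b
    d = z_a - z_b              # calibration signal: zero once the halves agree
    c_a -= mu * d * y_a        # negative-feedback LMS updates
    c_b += mu * d * y_b

print(round(c_a * g_a, 4), round(c_b * g_b, 4))
# -> roughly 0.9937 0.9937: the halves now agree, but the common-mode gain
#    is left uncorrected, which is why real designs also calibrate absolute errors.
```

Driving the difference to zero only aligns the two halves; as the abstract notes, the actual algorithm estimates the parameter errors themselves (with a modified Jacobi method) so that the averaged output is also accurate.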
|