271 |
Design methodologies for built-in testing of integrated RF transceivers with the on-chip loopback technique. Onabajo, Marvin Olufemi, 15 May 2009 (has links)
Advances toward increased integration and complexity of radio frequency (RF) and mixed-signal integrated circuits reduce the effectiveness of contemporary test methodologies and result in a rising cost of testing. The focus in this research is on the circuit-level implementation of alternative test strategies for integrated wireless transceivers with the aim to lower test cost by eliminating the need for expensive RF equipment during production testing. The first circuit proposed in this thesis closes the signal path between the transmitter and receiver sections of integrated transceivers in test mode for bit error rate analysis at low frequencies. Furthermore, the output power of this on-chip loopback block was made variable with the goal to allow gain and 1-dB compression point determination for the RF front-end circuits with on-chip power detectors. The loopback block is intended for transceivers operating in the 1.9-2.4 GHz range, and it can compensate for transmitter-receiver offset frequency differences from 40 MHz to 200 MHz. The measured attenuation range of the 0.052 mm² loopback circuit in 0.13 µm CMOS technology was 26-41 dB with continuous control, but post-layout simulation results indicate that the attenuation range can be reduced to 11-27 dB via optimizations. Another circuit presented in this thesis is a current generator for built-in testing of impedance-matched RF front-end circuits with current injection. Since this circuit has high output impedance (>1 kΩ up to 2.4 GHz), it does not influence the input matching network of the low-noise amplifier (LNA) under test. A major advantage of the current injection method over the typical voltage-mode approach is that the built-in test can expose fabrication defects in components of the matching network in addition to on-chip devices. The current generator was employed together with two power detectors in a realization of a built-in test for an LNA with 14% layout area overhead in 0.13 µm CMOS technology (<1.5% for the 0.002 mm² current generator). The post-layout simulation results showed that the LNA gain (S21) estimation with the external matching network was within 3.5% of the actual gain in the presence of process-voltage-temperature variations and power detector imprecision.
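As a rough illustration of the kind of measurement this enables, the sketch below estimates gain and the input-referred 1-dB compression point purely from swept input-power and output-power readings, the way a pair of on-chip power detectors would; the soft-limiting amplifier model, its parameter values, and the helper name amp_output_dbm are illustrative assumptions, not circuitry or calibration from the thesis.

```python
import numpy as np

def amp_output_dbm(pin_dbm, gain_db=15.0, psat_dbm=8.0):
    """Toy soft-limiting amplifier (assumed model, for illustration only)."""
    pin = 10 ** (pin_dbm / 10)                     # mW
    g = 10 ** (gain_db / 10)
    psat = 10 ** (psat_dbm / 10)
    return 10 * np.log10(g * pin / (1 + g * pin / psat))

pin = np.arange(-40.0, 10.0, 0.25)                 # swept input power, dBm
pout = amp_output_dbm(pin)                         # what an output power detector would report
gain = pout - pin

g_ss = gain[0]                                     # small-signal gain from the low-power region
i = np.argmax(gain <= g_ss - 1.0)                  # first sweep point with 1 dB of compression
print(f"gain ~ {g_ss:.1f} dB, input-referred P1dB ~ {pin[i]:.1f} dBm")
```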
|
272 |
Cooper pair box circuits: two-qubit gate, qubit single-shot readout, and current to frequency conversion. Nguyen, François, 15 December 2008 (has links) (PDF)
This thesis concerns the development of superconducting Josephson-junction circuits, derived from the Cooper pair box, for implementing quantum bits (qubits). The quantronium version of this circuit had already demonstrated quantum coherence good enough for single-qubit logic gates. To implement two-qubit logic gates, we developed a circuit, the quantroswap, made of two coupled quantroniums, each qubit being driven and read out separately. We demonstrated coherent state exchange between the two qubits, but also observed a prohibitive instability effect in these qubits. To avoid it, we built a new circuit made of a Cooper pair box that is stable and insensitive to electric charge noise, coupled to a nonlinear resonator for its readout. We obtained a long coherence time (~1 μs) and very good qubit readout fidelity (90%) using the bifurcation phenomenon. With a metrological aim, microwave reflectometry measurement of the quantronium also made it possible to relate a current I injected into the circuit to the frequency f = I/2e of the induced Bloch oscillations.
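For scale, the quoted current-to-frequency relation gives, for example (the numeric value is added here for illustration and is not a result from the thesis):

$$ f = \frac{I}{2e}, \qquad I = 1\ \mathrm{pA} \;\Rightarrow\; f = \frac{10^{-12}\ \mathrm{A}}{2 \times 1.602 \times 10^{-19}\ \mathrm{C}} \approx 3.1\ \mathrm{MHz}. $$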
|
273 |
Fairness Analysis of Wireless Beamforming Schedulers. Bartolomé Calvo, Diego, 12 January 2005 (has links)
This dissertation is devoted to the analysis of fairness at the physical layer in multi-antenna multi-user communications, which implies a new view on traditional techniques. The degree of equality or inequality of a resource distribution has been extensively studied in other fields such as Economics or Social Sciences. Indeed, engineers usually aim at optimizing the total performance, but when multiple users come into play, the overall optimization might not necessarily be the best thing to do. For instance, in wireless systems the user with a bad channel condition might suffer the consequences of a centralized scheduler that bases its decisions on the best instantaneous channel quality. In this sense, the problem is approached from four different perspectives: antenna processing, power allocation, bit allocation, and the combination of space-division multiple access (SDMA) with multiple subcarriers (OFDM). Before the technical content, the landscape in which this dissertation is contained is described in detail. The contribution of the author starts with the analysis of fairness conducted not only for transmit processing, but also for the upper bound represented by the cooperative strategy between the transmitter and the receiver. The SNR analysis for zero forcing, dirty paper, and the cooperative scheme is based on portfolio theory and basically consists of the computation of the mean and the variance of each scheme. Interestingly, a higher mean performance comes at the expense of a higher variance in the resource allocation. Whereas in these antenna array techniques the fairness is implicit, it is made explicit afterwards by the selection of a power allocation technique with zero-forcing beamforming. The traditional objective functions available in the literature are then compared in terms of fairness, i.e. not only the mean or sum value is analyzed, but also the minimum and the maximum. It can be stated that optimizing the global performance of a cell (e.g. minimum sum BER or maximum sum rate techniques) comes at the expense of an uneven distribution of the resources among the users. On the other hand, max-min techniques tend to distribute the resources more equally at the expense of losing in global performance. Moreover, the game-theoretic power allocation is compared to traditional techniques, and it is shown that the widespread utility function in this context yields an unacceptable BER. Therefore, the optimization criterion shall be carefully chosen to avoid undesirable operating consequences. Another interesting problem is admission control, that is, the selection of a subset of users that are scheduled for transmission. Usually, this selection shall be done because the QoS requirements of the communications, e.g. in terms of delay or error rate, prevent all the users from being served. A new algorithm is proposed that balances between the traditional techniques at the extremes of the fairness axis, namely the uniform power allocation and the equal-rate-and-BER scheme. After that, the fairness analysis is conducted for the integer bit allocation. First, the traditional approach of maximizing the sum rate is opposed to maximizing the minimum rate, which ultimately assigns an equal number of bits to all the users. Again, the centralized controller shall balance between the global performance and the individual needs. Nevertheless, an algorithm is proposed which yields an intermediate behavior among the traditional schemes. Then, an extension is developed in order to combine spatial diversity with frequency diversity, that is, SDMA/OFDM systems are analyzed and the initial algorithms for SDMA are extended to that case. Since the objective functions are NP-complete and very hard to solve even with a moderate number of users and antennas, several suboptimal solutions are motivated. Moreover, practical issues such as signaling or a reduction in complexity are addressed from a clear engineering point of view.
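The tension between global performance and fairness described above can be reproduced with a toy experiment. The sketch below is only an illustration under simple assumptions (single-antenna users, Rayleigh-fading power gains, and an opportunistic scheduler that gives all power to the instantaneously best user); it is not the portfolio-theoretic analysis or the beamforming schedulers studied in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, total_power, noise = 4, 1.0, 0.1
gains = rng.exponential(1.0, n_users)               # Rayleigh-fading power gains

def rates(power):
    return np.log2(1.0 + gains * power / noise)     # per-user spectral efficiency

uniform = np.full(n_users, total_power / n_users)   # even split across users
greedy = np.zeros(n_users)
greedy[np.argmax(gains)] = total_power              # all power to the best user

for name, p in [("uniform", uniform), ("greedy", greedy)]:
    r = rates(p)
    print(f"{name:8s} sum={r.sum():.2f} mean={r.mean():.2f} "
          f"var={r.var():.2f} min={r.min():.2f} max={r.max():.2f}")
```

The greedy allocation wins on the sum and the maximum but drives the minimum rate to zero, which is exactly the mean-versus-variance (and max-versus-min) trade-off quantified in the dissertation.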
|
274 |
Low Complexity and Low Power Bit-Serial Multipliers / Bitseriella multiplikatorer med låg komplexitet och låg effektförbrukning. Johansson, Kenny, January 2003 (has links)
Bit-serial multiplication with a fixed coefficient is commonly used in integrated circuits, such as digital filters and FFTs. These multiplications can be implemented using basic components such as adders, subtractors and D flip-flops. Multiplication with the same coefficient can be implemented in many ways, using different structures. Other studies in this area have focused on how to minimize the number of adders/subtractors, often assuming that the cost of D flip-flops is negligible. That simplification has proved to be far too great, and is furthermore not at all necessary. In digital devices low power consumption is always desirable. How to attain this in bit-serial multipliers is a complex problem. The aim of this thesis was to find a strategy for implementing bit-serial multipliers at as low a cost as possible. An important step was achieved by deriving formulas that can be used to calculate the carry switching probability in the adders/subtractors. It has also been established that it is possible to design a power model that can be applied to all possible structures of bit-serial multipliers.
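To see why both component types matter, the toy model below decomposes a fixed coefficient into canonic signed-digit (CSD) form and counts adders/subtractors together with flip-flops, assuming one serial adder or subtractor per extra nonzero digit, a shared delay line covering the largest shift, and one carry flip-flop per serial adder. This cost model and the function names are illustrative assumptions, not the formulas derived in the thesis.

```python
def csd(n):
    """Canonic signed-digit digits of a positive integer, LSB first (each in {-1, 0, +1})."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)        # +1 when n % 4 == 1, -1 when n % 4 == 3
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def serial_cost(coeff):
    """Toy cost estimate for a bit-serial multiply-by-coeff (assumed structure)."""
    digits = csd(coeff)
    taps = [(i, d) for i, d in enumerate(digits) if d]
    adders = len(taps) - 1                 # one serial adder/subtractor per extra tap
    delay_ffs = max(i for i, _ in taps)    # shared delay line covering the largest shift
    carry_ffs = adders                     # one carry flip-flop per serial adder
    return adders, delay_ffs + carry_ffs

print(csd(29))                             # [1, 0, -1, 0, 0, 1]  ->  29 = 1 - 4 + 32
print(serial_cost(29))                     # (2 adders/subtractors, 7 flip-flops)
```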
|
275 |
Transmitter Strategies for Closed-Loop MIMO-OFDM. Sung, Joon Hyun, 09 July 2004 (has links)
This thesis concerns communication across channels with multiple inputs and multiple outputs. Specifically, we consider the closed-loop scenario in which knowledge of the state of the multiple-input multiple-output (MIMO) channel is available at the transmitter. We show how this knowledge can be exploited to optimize performance, as measured by the zero-outage capacity, which is the capacity corresponding to zero outage probability. On flat-fading channels, a closed-loop transmitter allocates different powers and rates to the multiple channel inputs so as to maximize zero-outage capacity. Frequency-selective fading channels call for a combination of orthogonal-frequency-division multiplexing (OFDM) and MIMO known as MIMO-OFDM. This exacerbates the allocation problem because it multiplies the number of allocation dimensions by the number of OFDM tones. Fortunately, this thesis demonstrates that simple allocations are sufficient to approach the zero-outage capacity. These simple strategies exploit the tendency for random MIMO channels to behave deterministically as the number of inputs becomes large.
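A classical example of such a closed-loop allocation is waterfilling across the spatial eigenmodes of the channel. The sketch below is the textbook algorithm with unit noise power, shown for context only; the simplified allocations and the zero-outage formulation of the thesis differ in their details.

```python
import numpy as np

def waterfill(gains, total_power):
    """Classical waterfilling over parallel subchannels with power gains `gains`."""
    g = np.asarray(gains, dtype=float)
    gs = np.sort(g)[::-1]
    for k in range(len(g), 0, -1):
        mu = (total_power + np.sum(1.0 / gs[:k])) / k   # candidate water level
        if mu > 1.0 / gs[k - 1]:                        # weakest active mode still above water
            break
    return np.maximum(mu - 1.0 / g, 0.0)

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
eig = np.linalg.svd(H, compute_uv=False) ** 2           # eigenmode power gains of H
p = waterfill(eig, total_power=4.0)
print(p, np.sum(np.log2(1 + eig * p)))                  # per-mode powers and total rate
```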
|
276 |
New Capacity-Approaching Codes for Run-Length-Limited Channels. Sankarasubramaniam, Yogesh, 31 March 2006 (has links)
Run-Length-Limited (RLL) channels are found in digital recording systems like the Hard Disk Drive (HDD), Compact Disc (CD), and Digital Versatile Disc (DVD). This thesis presents novel encoding algorithms for RLL channels based on a simple technique called bit stuffing. First, two new capacity-achieving variable-rate code constructions are proposed for (d,k) constraints. The variable-rate encoding ideas are then extended to (0,G/I) and other RLL constraints. Since variable-rate codes are of limited practical value, the second half of this thesis focuses on fixed-rate codes. The fixed-rate bit stuff (FRB) algorithm is proposed for the design of simple, high-rate (0,k) codes. The key to achieving high encoding rates with the FRB algorithm lies in a novel, iterative pre-processing of the fixed-length input sequence prior to bit stuffing. Detailed rate analysis for the proposed FRB algorithm is presented, and upper and lower bounds on the asymptotic (in input block length) encoding rate are derived. Several system-level issues of the proposed FRB codes are addressed, and FRB code parameters required to design rate 100/101 and rate 200/201 (0,k) codes are tabulated. Finally, the proposed fixed-rate encoding is extended to (0,G/I) constraints.
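The bit-stuffing core that these constructions build on can be written in a few lines. The sketch below enforces a (d,k) constraint only; it omits the probability-biasing distribution transformer used by the capacity-achieving variable-rate codes and the iterative pre-processing of the FRB algorithm, and its details are illustrative rather than taken from the thesis.

```python
def bitstuff_encode(bits, d, k):
    """Bare-bones bit stuffing for a (d, k) runlength constraint:
    d zeros are stuffed after every 1, and a 1 (plus d zeros) is stuffed
    whenever the run of zeros since the last 1 reaches k."""
    out, run = [], 0
    for b in bits:
        if run == k:                  # forced transition keeps every zero run <= k
            out += [1] + [0] * d
            run = d
        out.append(b)
        if b == 1:
            out += [0] * d            # enforce at least d zeros after each 1
            run = d
        else:
            run += 1
    return out

coded = bitstuff_encode([1, 1, 0, 0, 0, 0, 0, 1], d=1, k=3)
zero_runs = [len(r) for r in "".join(map(str, coded)).split("1") if r]
print(coded, zero_runs)               # every zero run has length between d and k
```

The stuffed bits remain removable at the decoder, since a run of k zeros always flags the next 1 as stuffed, and the d zeros following any 1 are known to be stuffed as well.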
|
277 |
A Novel Precoding Scheme for Systems Using Data-Dependent Superimposed Training. Chen, Yu-chih, 31 July 2012 (has links)
In the data-dependent superimposed training (DDST) scheme, to enable channel estimation without data-induced interference, the data sequence is shifted by subtracting a data-dependent sequence before it is added to the training sequence at the transmitter. This distortion term causes a data identification problem (DIP) at the receiver. In this thesis, we propose two precoding schemes based on previous work. To maintain a low peak-to-average power ratio (PAPR), the precoding matrix is restricted to a diagonal matrix. The first scheme, termed the efficient diagonal scheme, is proposed to enlarge the minimum distance between the closest codewords; conditions that ensure the precoding matrix is efficient for M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) are listed in this thesis. The second scheme pursues the lowest complexity at the receiver, meaning that the size of the search set is reduced. It is a trade-off between better bit error rate (BER) performance and lower complexity at the receiver. The simulation results show that the PAPR is improved and the DIP is solved in both schemes.
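For context, the core DDST operation that creates this distortion term can be sketched as follows; this is the generic formulation with an arbitrary periodic training sequence and no power scaling, and the proposed diagonal precoders would act on the data before this step.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 8                          # block length and training period (N = K*P assumed)
d = (rng.integers(0, 2, N) * 2 - 1) + 1j * (rng.integers(0, 2, N) * 2 - 1)   # QPSK data
c = np.tile(np.exp(1j * 2 * np.pi * rng.random(P)), N // P)                  # periodic training

e = np.tile(d.reshape(-1, P).mean(axis=0), N // P)   # data-dependent (cyclic-mean) sequence
x = d - e + c                                        # DDST transmit block

# The data part of x now averages to zero over the training period, so the
# first-order statistics seen by the channel estimator contain only training.
print(np.allclose((x - c).reshape(-1, P).mean(axis=0), 0))    # True
```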
|
279 |
Implementation Of Database Security Features Using Bit Matrices. Gopal, K, 04 1900 (has links)
Information security is of utmost concern in a multiuser environment. The importance of security is felt much more with the widespread use of distributed databases. Information is by itself a critical resource of an enterprise, and thus the successful operation of an enterprise demands that data be made accessible only to authorized users and that the data be made to reflect the state of the enterprise.
Since many databases are online and accessed by multiple users concurrently, special mechanisms are needed to ensure the integrity and security of the relevant information. This thesis describes a model for computer database security that supports a wide variety of security policies.
The terms security policy and security mechanism are presented in Chapter 1. The interrelated topics of security and integrity are discussed in some detail. The importance and means of ensuring the security of information are also presented in this chapter.
In Chapter 2, the work done in the field of computer security and related topics is presented. In general, computer security models can be classified broadly under two categories:
(1) Models based on Access Control Matrix and
(2) Models based on Information Flow Control.
The development of models based on the above two schemes, as well as the policies supported by some of these schemes, is presented in this chapter.
A brief description of the work carried out in database security, along with definitions of related terms, is given in Chapter 3. The interrelationship between operating system security and database security is also presented in this chapter. In general, a database security mechanism depends on the existing operating system; database security mechanisms are thus only as strong as the underlying operating system on which they are developed. The various schemes used for implementing database security, such as access controllers and capability lists, are described in this chapter.
In Chapter 4, a model for database security has been described. The model provides for:
(a) Delegation of access rights by a user and
(b) Revocation of access rights previously granted by a user.
In addition, algorithms for enforcing context-dependent and content-dependent rules are provided in this chapter. The context-dependent rules are stored in the form of elements of a bit matrix. Context-dependent rules can then be enforced by suitably manipulating the bit matrix and interpreting the values of the elements of the matrix. The major advantage of representing the rules using bit matrices is that the matrix itself can be maintained in main memory. The time taken to examine whether a user is authorized to access an object is drastically reduced because of the reduced time required to inspect main memory. The method presented in this chapter, in addition to reducing the time required for enforcing security, also presents a method for enforcing decentralized authorization control, a facility that is useful in a distributed database environment.
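A minimal sketch of such a bit-matrix check is shown below; the row-per-user, column-per-object layout with one bit per access mode is an assumed encoding chosen for illustration, not the exact structure used in the thesis.

```python
READ, WRITE, DELETE = 1, 2, 4            # access-mode bits (assumed encoding)

class BitMatrix:
    """Access rules kept as small integers so the whole matrix fits in main memory."""
    def __init__(self, n_users, n_objects):
        self.m = [[0] * n_objects for _ in range(n_users)]

    def grant(self, user, obj, modes):
        self.m[user][obj] |= modes

    def revoke(self, user, obj, modes):
        self.m[user][obj] &= ~modes

    def check(self, user, obj, mode):
        return bool(self.m[user][obj] & mode)   # a single AND decides the request

    def users_of(self, obj, mode):
        """Accountability: all users holding `mode` on `obj`."""
        return [u for u, row in enumerate(self.m) if row[obj] & mode]

acl = BitMatrix(n_users=3, n_objects=2)
acl.grant(0, 1, READ | WRITE)
acl.revoke(0, 1, WRITE)
print(acl.check(0, 1, READ), acl.check(0, 1, WRITE), acl.users_of(1, READ))
# True False [0]
```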
Chapter 5 describes a simulation method that is useful for comparing the various security schemes. The tasks involved in the simulation are:
1. Creation of an arrival (job).
2. Placing the incoming job either in the wait queue or in the run state, depending on the type of access needed for the object.
3. Checking that the user on whose behalf the job is being executed is authorized to access the object in the mode requested.
4. Checking for the successful completion of the job and termination of the job.
5. Collection of important parameters such as the number of jobs processed and the average connect time.
Simulation was carried out to time both the access controller scheme and the bit matrix scheme. The results of the simulation runs bear out the fact that the bit matrix scheme provides a faster method. Six types of access were assumed to be permissible, three of the access types requiring shared locks and the rest requiring exclusive locks on the objects concerned. In addition, the only type of operation allowed was assumed to be accessing the objects.
It is to be noted that the time taken to check for security violations is only one of the factors for rating a security system. In general, various other factors, such as the cost of implementing the security system and the flexibility it offers in enforcing security policies, also have to be taken into account while comparing security systems.
Finally, in Chapter 6, a comparison of the security schemes is made. In conclusion, the bit matrix approach is seen to provide the following features.
(a) The time required to check if an access request should be honoured is very small.
(b) The time required to find all users accessing an object (i.e., accountability) is quite small.
(c) The time required to find all objects accessible by a user is also quite small.
(d) The scheme supports both decentralized and centralized authorization control.
(e) Mechanism for enforcing delegation of access rights and revocation of access rights could be built in easily.
(f) The scheme supports content-dependent and context-dependent controls and also provides a means for enforcing history-dependent control.
Finally, some recommendations for further study in the field of Computer Database Security are presented.
|
280 |
Performance Evaluation of Low Density Parity Check Forward Error Correction in an Aeronautical Flight Environment. Temple, Kip, 10 1900 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / In some flight test scenarios the telemetry link is noise-limited at long slant ranges or during signal fade events caused by antenna pattern nulls. In these situations, a mitigation technique such as forward error correction (FEC) can add several decibels to the link margin. The particular FEC code discussed in this paper is a variant of a low-density parity check (LDPC) code and is coupled with SOQPSK modulation in the hardware tested. This paper briefly covers lab testing of the flight-ready hardware and then presents flight test results comparing a baseline uncoded telemetry link with an LDPC-coded telemetry link. This is the first known test dedicated to this specific FEC code in a real-world environment, with a flight profile tailored to assess the viability of an LDPC-coded telemetry link.
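As a back-of-the-envelope reminder of what several decibels of margin buy (assuming free-space path loss, an assumption added here rather than a figure from the paper), an extra margin of G dB stretches the usable slant range by a factor of

$$ \frac{R_{\text{new}}}{R_{\text{old}}} = 10^{G/20}, \qquad G = 6\ \mathrm{dB} \;\Rightarrow\; \approx 2\times. $$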
|