61
Errors In Delay Differentiation In Statistical Multiplexing
Mallesh, K, 05 1900
Different applications of communication networks have different service requirements. We consider the problem of differentiating between delay-sensitive applications based on their average delay requirements, as may be of interest in signalling networks. Packets of different classes, with different average delay requirements, are to be transmitted on the same link and reside in separate queues whose arrival statistics are specified. The statistical multiplexer has to schedule packets from the different queues so that the average delays of the queues approach the specified target delays as quickly as possible.
For simplicity, we initially consider a discrete-time model with two queues and a single work-conserving server, with independent Bernoulli packet arrivals and unit packet service times. With the arrival rates specified, achieving mean queue lengths in a ratio corresponding to the ratio of target mean delays is a means of achieving the individual target mean delays. We formulate the problem in the framework of Markov decision theory. We study two scheduling policies, called Queue Length Balancing and Delay Balancing, and show through numerical computation that the expectation of the magnitude of relative error in weighted average queue lengths decays as Θ(1/m) and Θ(1/√m) respectively, and that the expectation of the magnitude of relative error in weighted average delays decays as Θ(1/√m) and Θ(1/m) respectively, where m is the length of the averaging interval.
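As a concrete (and heavily simplified) illustration of the two-queue discrete-time model described above, the following Python sketch simulates Bernoulli arrivals and a work-conserving server. The scheduling rule shown, serving the queue whose length is largest relative to its delay target, is only an illustrative stand-in for the queue-length-balancing idea, not the exact policies analysed in the thesis, and the parameter values are arbitrary.

```python
import random

# Toy two-queue, single-server, discrete-time model: independent Bernoulli
# arrivals, unit service times, work-conserving server. The rule below is an
# illustrative stand-in, not the thesis's exact Queue Length Balancing policy.
def simulate(p=(0.3, 0.4), target_delay=(2.0, 6.0), horizon=100000, seed=1):
    random.seed(seed)
    queues = [[], []]            # arrival times of packets waiting in each queue
    delay_sum = [0.0, 0.0]
    served = [0, 0]
    for t in range(horizon):
        for i in range(2):       # independent Bernoulli arrivals
            if random.random() < p[i]:
                queues[i].append(t)
        backlogged = [i for i in range(2) if queues[i]]
        if backlogged:           # work-conserving: serve whenever possible
            # serve the queue whose length, scaled by its target delay, is largest
            i = max(backlogged, key=lambda j: len(queues[j]) / target_delay[j])
            arrival = queues[i].pop(0)
            delay_sum[i] += t - arrival + 1     # unit service time
            served[i] += 1
    return [delay_sum[i] / max(served[i], 1) for i in range(2)]

print(simulate())   # empirical mean delays, to be compared against the targets
```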
We then consider the model for an arbitrary number of queues, each with i.i.d. batch arrivals, and analyse the errors in the average delays of the individual queues. We assume that the fifth moment of the busy period is finite for this model. We show that the expectation of the absolute value of the error in average queue length for at least one of the queues decays at least as slowly as Θ(1/√m), and that the mean squared error in queue length for at least one of the queues decays at least as slowly as Θ(1/m). We show that the expectation of the absolute value of the error in approximating Little's law over a finite horizon is O(1/m). Hence, the mean squared error in delay for at least one of the queues decays at least as slowly as Θ(1/m). We also show that if the variance of the error in delay decays for each queue, then the expectation of the absolute value of the error in delay for at least one of the queues decays at least as slowly as Θ(1/√m).
62
Interference Management For Vector Gaussian Multiple Access Channels
Padakandla, Arun, 03 1900
In this thesis, we consider a vector Gaussian multiple access channel (MAC) with users demanding reliable communication at specific (Shannon-theoretic) rates. The objective is to assign vectors and powers to these users such that their rate requirements are met and the sum of received powers is minimized.
We identify this power minimization problem as an instance of a separable convex optimization problem with linear ascending constraints. Under an ordering condition on the slopes of the functions at the origin, an algorithm that determines the optimum point in a finite number of steps is described. This provides a complete characterization of the minimum sum power for the vector Gaussian multiple access channel. Furthermore, we prove a strong duality between the above sum power minimization problem and the problem of sum rate maximization under power constraints.
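For context, a separable convex program with linear ascending constraints is commonly written in the following canonical form, with convex cost functions φ_l and nondecreasing thresholds 0 ≤ α_1 ≤ ... ≤ α_n; the notation here is illustrative and not necessarily that of the thesis.

```latex
\begin{aligned}
\min_{x_1,\dots,x_n \ge 0} \quad & \sum_{l=1}^{n} \varphi_l(x_l) \\
\text{subject to} \quad & \sum_{l=1}^{k} x_l \;\ge\; \alpha_k, \qquad k = 1,\dots,n-1, \\
& \sum_{l=1}^{n} x_l \;=\; \alpha_n .
\end{aligned}
```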
We then propose finite step algorithms to explicitly identify an assignment of vectors and powers that solve the above power minimization and sum rate maximization problems. The distinguishing feature of the proposed algorithms is the size of the output vector sets. In particular, we prove an upper bound on the size of the vector sets that is independent of the number of users.
Finally, we restrict the vectors to an orthonormal set. The goal is to identify an assignment of vectors (from an orthonormal set) to users such that the user rate requirements are met with minimum sum power. This is a combinatorial optimization problem. We study the complexity of the decision version of this problem. Our results indicate that when the dimensionality of the vector set is part of the input, the decision version is NP-complete.
63
A Novel Higher Order Noise Shaping Sigma-Delta Modulator
Behera, Khitish Chandra, 01 March 2008
The thesis focuses on a higher order noise-shaping ΣΔ ADC architecture which employs filtered quantization error as a dither signal. Furthermore, the work studies implementation challenges using Switched-Capacitor and Switched-Current techniques.
Digitization in an IF conversion receiver can be accomplished either with a wideband Nyquist-rate ADC or a bandpass ΣΔ ADC. The latter is the better choice, since the bandwidth of the IF signal is typically much smaller than the carrier frequency, and reducing the quantization noise over the entire Nyquist band becomes superfluous. Instead, bandpass ΣΔ ADCs reduce the quantization noise power only in a narrow band around the IF. We study state-of-the-art high-dynamic-range ΣΔ data converter topologies suited for wideband radio receivers, and propose a topology which achieves higher order noise shaping by employing filtered quantization error as a dither signal.
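For background, the basic noise-shaping mechanism that the proposed higher-order, dither-based architecture builds on can be illustrated with a textbook first-order ΣΔ loop. The Python sketch below is only that textbook illustration, with arbitrary parameter values, and is not the proposed topology.

```python
import numpy as np

def first_order_sigma_delta(x):
    """1-bit first-order sigma-delta loop: an integrator followed by a comparator,
    with the quantized output fed back. Quantization noise is pushed to high
    frequencies (first-order noise shaping)."""
    y = np.zeros_like(x)
    integ, prev_y = 0.0, 0.0
    for n, xn in enumerate(x):
        integ += xn - prev_y             # accumulate input minus fed-back bit
        y[n] = 1.0 if integ >= 0 else -1.0
        prev_y = y[n]
    return y

fs, f0 = 1.0e6, 1.0e3                    # sample rate and test-tone frequency (arbitrary)
t = np.arange(8192) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)     # keep the input well inside the stable range
bits = first_order_sigma_delta(x)
# A simple moving-average (decimation) filter recovers the input from the 1-bit stream.
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print(np.mean(np.abs(recovered - x)))    # small residual error after filtering
```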
We study implementation challenges for ΣΔ converters in digital technology. Traditionally, ΣΔ ADCs have used Switched-Capacitor (SC) circuits to realize their building blocks. This analog sampled-data technique is based on the idea that a periodically switched capacitor can emulate a resistor. The factor that limits the performance of SC circuits implemented in standard VLSI technologies is the continuous reduction of supply voltages driven by process scaling. While advantageous for digital circuitry, this makes the design of SC circuits difficult: clock-boosting strategies are needed for the switches, and power consumption must be increased to obtain high-speed, high-dynamic-range opamps at low supply voltages. In this scenario, the use of the current-domain sampled-data technique, known as Switched-Current (SI), is advantageous for several reasons. As the signal carriers are currents, the supply voltage does not limit the signal range as much as in SC circuits, so SI circuits are more suitable than SC circuits for low-voltage operation. This work studies the feasibility and bottlenecks of implementing ΣΔ modulator building blocks using the SI technique. A bandpass filter, a DAC, and a 1-bit quantizer have been designed in a 0.18 µm technology using the SI technique.
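The resistor-emulation idea mentioned above can be stated in one line: a capacitor C switched at clock rate f_s transfers a charge of C·V per cycle and therefore behaves, on average, like a resistor of value

```latex
R_{\mathrm{eq}} \;\approx\; \frac{1}{f_s\, C} .
```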
64
Recovery From DoS Attacks In MIPv6 : Modelling And Validation
Kumar, Manish C, 03 1900
Denial-of-Service (DoS) attacks form a very important category of security threats in MIPv6 (Mobile Internet Protocol version 6). This thesis proposes a threshold-based scheme for the participants in MIPv6 (Mobile Node, Home Agent, and Correspondent Node) to detect the presence of DoS attacks and to recover when any of them is subjected to such an attack. This is achieved using an infrastructure for MIPv6 that makes such a solution practical even in the absence of an IPsec infrastructure. We propose a protocol that uses Cryptographically Generated Addresses (CGA), short-term IP addresses derived through a Lamport-hash-like mechanism, and a hierarchy-based trust management infrastructure for key distribution.
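The Lamport-hash-like mechanism mentioned above relies on the standard hash-chain primitive. The sketch below shows only that generic primitive, using SHA-256 and made-up values; it is not the actual MIPv6 short-term-address protocol of the thesis.

```python
import hashlib

# Generic Lamport-style hash chain: commit to the end of the chain and later
# reveal preimages one at a time; each revealed value is checked with a few
# hash evaluations. Illustrative only; values and lengths are made up.
def hash_chain(seed: bytes, length: int):
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain                      # chain[-1] is the public commitment (anchor)

def verify(revealed: bytes, anchor: bytes, steps: int) -> bool:
    h = revealed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h == anchor

chain = hash_chain(b"secret-seed", 10)
anchor = chain[-1]
# Revealing chain[7] proves knowledge of a preimage three steps before the anchor.
print(verify(chain[7], anchor, 3))    # True
```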
However, reasoning about the correctness of such protocols is not trivial. In addition, new solutions to mitigate attacks may need to be deployed in the network frequently, as and when attacks are detected, since it is practically impossible to anticipate all attacks and provide solutions in advance. This makes it necessary to validate solutions in a timely manner before deployment in a real network. However, the threshold schemes needed in group protocols make analysis complex, and model checking of threshold-based group protocols that employ cryptography has not been successful so far. Testing in a real network or a test bed is also infeasible if fast and frequent deployment of DoS mitigation solutions is needed. Hence, there is a need for an approach that lies between automated/manual verification and an actual implementation.
It is evident from the existing literature that few simulation-based security analyses of MIP/MIPv6 have been carried out; this research is a step in that direction. We propose a simulation-based approach for validation using a tool called FRAMOGR [40], which supports executable specification of group protocols that use cryptography. FRAMOGR allows one to specify attackers and to track probability distributions of values or paths. This work deals with the simulation of DoS attacks and their mitigation solutions for MIP in FRAMOGR. This makes validation of solutions possible without mandating a complete deployment of the protocol to detect vulnerabilities in a solution, and avoids the need for a formal theoretical verification of a DoS mitigation solution. In the course of this work, some DoS attacks and recovery mechanisms are simulated and validated using FRAMOGR, and we obtain encouraging results for the performance of the detection scheme. We believe that infrastructure such as FRAMOGR will be required in the future for validating new group-based threshold protocols that are needed to make MIPv6 more robust.
65
Diversity-Multiplexing Gain Tradeoff Of Cooperative Multi-hop Networks
Birenjith, P S, 07 1900
We consider single-source single-sink (ss-ss) multi-hop relay networks, with slow-fading links and single-antenna half-duplex relay nodes. While two-hop cooperative relay networks have been studied in great detail in terms of the diversity-multiplexing tradeoff (DMT), few results are available for more general networks. In this paper, we identify two families of networks that are multi-hop generalizations of the two-hop network: K-Parallel-Path (KPP) networks and layered networks.
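For reference, the diversity-multiplexing tradeoff of a scheme operating at SNR-dependent rate R(SNR) with error probability P_e(SNR) is defined in the standard way (this is the usual textbook definition, included only for context):

```latex
r \;=\; \lim_{\mathrm{SNR}\to\infty} \frac{R(\mathrm{SNR})}{\log \mathrm{SNR}},
\qquad
d(r) \;=\; -\lim_{\mathrm{SNR}\to\infty} \frac{\log P_e(\mathrm{SNR})}{\log \mathrm{SNR}} .
```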
KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths, and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks comprise layers of relays, with edges existing only between adjacent layers and more than one relay in each layer. We prove that a linear DMT between the maximum diversity d_max and the maximum multiplexing gain of 1 is achievable for single-antenna fully-connected layered networks. This is shown to equal the optimal DMT if the number of relaying layers is less than 4. For multiple-antenna KPP and layered networks, we provide an achievable DMT, which is significantly better than known lower bounds for half-duplex networks.
For arbitrary multi-terminal wireless networks with multiple source-sink pairs, the maximum achievable diversity is shown to be equal to the min-cut between the corresponding source and the sink, irrespective of whether the network has half-duplex or full-duplex relays. For arbitrary ss-ss single-antenna directed acyclic networks with full-duplex relays, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable.
Along the way, we derive the optimal DMT of a generalized parallel channel and derive lower bounds for the DMT of triangular channel matrices, which are useful in DMT computation of various protocols. All protocols in this paper are explicit and use only amplify-and-forward (AF) relaying. We also construct codes with short block-lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes.
Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of cooperative networks and that simple AF protocols are often sufficient to attain the optimal DMT.
66
Equalization Algorithms And Performance Analysis In Cyclic-Prefixed Single Carrier And Multicarrier Wireless Systems
Itankar, Yogendra Umesh, 01 1900
The work reported in this thesis is divided into two parts.
In the first part, we report a closed-form bit error rate (BER) performance analysis of orthogonal frequency division multiple access (OFDMA) on the uplink in the presence of carrier frequency offsets (CFOs) and/or timing offsets (TOs) of other users with respect to a desired user. We derive BER expressions using probability density function (pdf) and characteristic function approaches, for a Rician faded multi-cluster multi-path channel model that is typical of indoor ultrawideband channels and underwater acoustic channels. Numerical and simulation results show that the BER expressions derived accurately quantify the performance degradation due to non-zero CFOs and TOs.
Ultrawideband channels in indoor/industrial environments and underwater acoustic channels are severely delay-spread channels, where the number of multipath components can be of the order of tens to hundreds. In the second part of the thesis, we report low-complexity equalization algorithms for cyclic-prefixed single carrier (CPSC) systems that operate on such inter-symbol interference (ISI) channels characterized by large delay spreads. Both single-input single-output and multiple-input multiple-output (MIMO) systems are considered. For these systems, we propose a low-complexity graph-based equalization carried out in the frequency domain. Because of the noise-whitening effect that occurs for large frame sizes and delay spreads in frequency-domain processing, improved performance compared to time-domain processing is shown to be achieved. Since the graph-based equalizer is a soft-input soft-output equalizer, iterative techniques (turbo equalization) between detection and decoding are shown to yield good coded BER performance at low complexity in convolutional and LDPC coded systems. We also study joint decoding of the LDPC code and equalization of MIMO-ISI channels using a joint factor graph.
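As background for the frequency-domain processing referred to above, a baseline linear MMSE frequency-domain equalizer for a cyclic-prefixed single-carrier block can be sketched as follows. This is the standard FDE, with made-up channel and noise parameters, and not the graph-based soft equalizer proposed in the thesis.

```python
import numpy as np

# Baseline MMSE frequency-domain equalizer for a cyclic-prefixed single-carrier
# (CPSC) block: the cyclic prefix makes the ISI channel circulant, so it is
# diagonalized by the FFT and equalized with one complex weight per bin.
def cpsc_mmse_fde(rx_block, channel_taps, noise_var):
    n = len(rx_block)
    H = np.fft.fft(channel_taps, n)                   # channel frequency response
    R = np.fft.fft(rx_block)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)     # per-bin MMSE weights (unit-energy symbols)
    return np.fft.ifft(W * R)                         # equalized time-domain block

rng = np.random.default_rng(0)
h = np.array([0.8, 0.5, 0.3])                         # toy 3-tap ISI channel
x = rng.choice([-1.0, 1.0], size=64)                  # one BPSK block
y = np.real(np.fft.ifft(np.fft.fft(h, 64) * np.fft.fft(x)))  # circular convolution
y += 0.05 * rng.standard_normal(64)                   # additive noise
x_hat = np.sign(np.real(cpsc_mmse_fde(y, h, 0.05 ** 2)))
print(np.mean(x_hat != x))                            # symbol error rate (close to 0 here)
```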
67
Design, Fabrication and Characterization of Low Voltage Capacitive RF MEMS Switches
Shekhar, Sudhanshu, January 2015
This dissertation presents the design, fabrication, and characterization of low-voltage capacitive RF MEMS switches. Although RF MEMS switches have shown superior performance compared to existing solid-state semiconductor switches and are a viable alternative for present and future communication systems, they have not been able to match commercial standards due to their poor reliability. Dielectric charging due to high actuation voltage is one of the major concerns that limit the reliability of these switches. Hence, the focus of this thesis is on the development of low-actuation-voltage RF MEMS switches without compromising much on their RF and dynamic performance, i.e., low insertion loss and high isolation. Four different switch topologies are studied and discussed. Electromechanical and electromagnetic modelling is presented to study the effect of the various components that comprise a MEMS switch on the transient and RF behaviour. Analytical expressions for the switching and release times are also established.
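For context, the electromechanical modelling mentioned above is usually anchored in the textbook pull-in voltage of a parallel-plate electrostatic actuator with spring constant k, initial gap g_0, and electrode area A. This standard expression, not a result of the thesis, shows why soft suspensions and small gaps yield low actuation voltages:

```latex
V_{\mathrm{PI}} \;=\; \sqrt{\frac{8\,k\,g_0^{3}}{27\,\varepsilon_0\,A}} .
```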
An in-house developed surface micromachining process is adapted for the microfabrication. This process eliminates the need for an extra mask for the anchors and restricts the overall process to four masks only. The switches are fabricated on a 500 µm thick glass substrate, with a 0.5 µm thick gold film used as the structural material. For the final release of the switch, a chemical wet etching technique is employed.
The fabricated MEMS switches are characterized mechanically and electrically by measuring the mechanical resonant frequency, quality factor, pull-in voltage, and pull-up voltage. Since low-actuation-voltage switches tend to have slow response times, one of the key objectives of this thesis is to realize switches with fast response at low actuation voltage, and measurements are performed to estimate the switching and release times. The measured Q-factors of the switches are found to be between 1.1 and 1.4, which lies in the range recommended for MEMS switches to suppress oscillations after release. Furthermore, the effect of hole size on the switching dynamics is addressed. RF measurements of the S-parameters are carried out to quantify the RF performance.
The measured results demonstrate that these switches require actuation voltages in the range of 4.5 V to 8.5 V. A measured insertion loss better than -0.8 dB and an isolation better than 30 dB up to 40 GHz are reported.
In addition, the robustness of the realized switches is tested using an in-house developed LabVIEW-based automated measurement test set-up. The reliability test analysis shows no degradation in RF performance even after 10 million switching cycles. An overall yield of 70-80% is estimated in the present work. Finally, the experimentally measured results presented in this work demonstrate the successful development of low-actuation-voltage capacitive RF MEMS switches and show that, even with a 0.5 µm thick gold film, good reliability can be achieved for MEMS switches.
68
Optimal Mechanisms for Selling Two Heterogeneous Items
Thirumulanathan, D, January 2017
We consider the problem of designing revenue-optimal mechanisms for selling two heterogeneous items to a single buyer. Designing a revenue-optimal mechanism for selling a single item is simple: set a threshold price based on the distribution, and sell the item only when the buyer's valuation exceeds the threshold. Designing a revenue-optimal mechanism to sell two heterogeneous items is a harder problem; even the simplest setting with two items and one buyer remains unsolved. The partial characterizations available in the literature have succeeded in solving the problem largely for distributions that are bordered by the coordinate axes. We consider distributions that do not contain (0, 0) in their support sets. Specifically, we consider the buyer's valuations to be distributed uniformly over arbitrary rectangles in the positive quadrant. We anticipate that the special cases we solve could be a guideline to understanding how to solve the general problem. We explore two different methods, the duality method and the virtual valuation method, and apply them to solve the problem for distributions that are not bordered by the coordinate axes. The thesis consists of two parts.
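To make the single-item remark above concrete: with valuation c.d.f. F, a posted price p earns expected revenue p(1 - F(p)); for a valuation uniform on [0, 1] this is p(1 - p). This is a standard textbook example, not specific to this thesis:

```latex
p^{*} \;=\; \arg\max_{p \in [0,1]} \; p\,(1-p) \;=\; \tfrac12,
\qquad
\text{optimal revenue} \;=\; \tfrac14 .
```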
In the first part, we consider the problem when the buyer has no demand constraints. We assume the buyer's valuations to be uniformly distributed over an arbitrary rectangle [c1, c1 + b1] × [c2, c2 + b2] in the positive quadrant. We first study the duality approach that solves the problem for the case (c1, c2) = (0, 0). We then nontrivially extend this approach to provide an explicit solution for arbitrary nonnegative values of (c1, c2, b1, b2). We prove that the optimal mechanism is to sell the two items according to one of eight simple menus. The menus indicate that the items must be sold individually for certain values of (c1, c2), the items must be bundled for certain other values, and the auction is an interplay of individual sale and bundled sale for the remaining values of (c1, c2). We conjecture that our method can be extended to a wider class of distributions, and we provide some preliminary results to support the conjecture.
In the second part, we consider the problem when the buyer has a unit-demand constraint. We assume the buyer's valuations (z1, z2) to be uniformly distributed over an arbitrary rectangle [c, c + b1] × [c, c + b2] in the positive quadrant, having its south-west corner on the line z1 = z2. We first show that the structure of the dual measure varies significantly for different values of (c, b1, b2), which makes it hard to discover the correct dual measure, and hence to compute the solution. We then nontrivially extend the virtual valuation method to provide a complete, explicit solution for the problem considered. In particular, we prove that the optimal mechanism is structured into five simple menus. We then conjecture, with promising preliminary results, that the optimal mechanism when the valuations are uniformly distributed in an arbitrary rectangle [c1, c1 + b1] × [c2, c2 + b2] is also structured according to similar menus.
69
Secret Key Generation in the Multiterminal Source Model : Communication and Other Aspects
Mukherjee, Manuj, January 2017
This dissertation is primarily concerned with the communication required to achieve secret key (SK) capacity in a multiterminal source model. The multiterminal source model introduced by Csiszár and Narayan consists of a group of remotely located terminals with access to correlated sources and a noiseless public channel. The terminals wish to secure their communication by agreeing upon a group secret key. The key agreement protocol involves communicating over the public channel, and agreeing upon an SK secured from eavesdroppers listening to the public communication. The SK capacity, i.e., the maximum rate of an SK that can be agreed upon by the terminals, has been characterized by Csiszár and Narayan. Their capacity-achieving key generation protocol involved terminals communicating to attain omniscience, i.e., every terminal gets to recover the sources of the other terminals. While this is a very general protocol, it often requires larger rates of public communication than is necessary to achieve SK capacity.
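The Csiszár–Narayan characterization referred to above can be stated compactly. In the basic model without wiretapper side information, for terminals observing correlated sources X_1, ..., X_m, the SK capacity equals the joint entropy minus the minimum rate of communication for omniscience:

```latex
C_{\mathrm{SK}} \;=\; H(X_1, \dots, X_m) \;-\; R_{\mathrm{CO}} .
```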
The primary focus of this dissertation is to characterize the communication complexity, i.e., the minimum rate of public discussion needed to achieve the SK capacity. A lower bound to the communication complexity is derived for a general multiterminal source, although it turns out to be loose in general. While the minimum rate of communication for omniscience is always an upper bound to the communication complexity, we derive tighter upper bounds for a special class of multiterminal sources, namely, hypergraphical sources. This upper bound yields a complete characterization of the hypergraphical sources for which communication for omniscience is a rate-optimal protocol for SK generation, i.e., for which the communication complexity equals the minimum rate of communication for omniscience.
Another aspect of the public communication touched upon by this dissertation is the necessity of omnivocality, i.e., all terminals communicating, to achieve the SK capacity. It is well known that for two-terminal sources, communication from only one terminal suffices to generate a maximum-rate secret key. However, we show that for three or more terminals, omnivocality is indeed required to achieve the SK capacity if a certain condition is met. For the specific case of three terminals, we show that this condition is also necessary for omnivocality to be essential in generating an SK of maximal rate. However, the condition is no longer necessary when there are four or more terminals.
A certain notion of common information, namely, the Wyner common information, plays a central role in the communication complexity problem. This dissertation thus includes a study of multiparty versions of the two widely used notions of common information, namely, Wyner common information and Gács-Körner (GK) common information. While evaluating these quantities is difficult in general, we are able to derive explicit expressions for both types of common information in the case of hypergraphical sources.
We also study fault-tolerant SK capacity in this dissertation. The maximum rate of SK that can be generated even if an arbitrary subset of terminals drops out is called the fault-tolerant SK capacity. Now, suppose we have a fixed number of pairwise SKs. How should one distribute them among pairs of terminals to ensure good fault-tolerance behavior in generating a group SK? We show that distributing the pairwise keys according to a Harary graph provides a certain degree of fault tolerance, and bounds are obtained on its fault-tolerant SK capacity.
70
Towards a Unified Framework for Design of MEMS based VLSI Systems
Sukumar, Jairam, January 2016
Present-day VLSI systems increasingly integrate components from multiple energy domains. Energy domains such as mechanical, optical, and fluidic have become pervasive in VLSI systems, and such systems are manufactured routinely. The framework required to design such an integrated system with diverse energy domains needs to evolve as part of the conventional VLSI design methodology, because the manufacturing and design of these integrated energy domains, although based on semiconductor processing, is still very ad hoc, with each device requiring its own dedicated design tools and process integration.
In this thesis, three different approaches, in different domains of the design flow, are proposed: modelling & simulation, synthesis & compilation, and formal verification. Three different scenarios are considered, and it is shown that these tasks can be co-performed along with conventional VLSI circuits and systems.
In the first approach, a micro-mechanical beam bending case is presented: a heat flow causes the beam to bend through thermal stress, and the resulting change in capacitance is analyzed within a single analysis and modelling framework. This involves a seamless analysis through the thermal, mechanical, and electrical energy domains. The second part of the thesis explores synthesis and compilation paradigms. The concept of a gyro compiler, analogous to a memory compiler, is proposed, which primarily generates soft IP models for various gyroscope topologies.
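A grossly simplified, lumped version of the thermal-mechanical-electrical chain described above uses only textbook relations; the coupled analysis in the thesis is far more detailed. A fully constrained beam heated by ΔT develops a thermal stress, and a deflection δ of one plate of a parallel-plate capacitor with nominal gap g_0 and area A changes the capacitance as

```latex
\sigma_{\mathrm{th}} \;=\; E\,\alpha\,\Delta T,
\qquad
\Delta C \;\approx\; \varepsilon_0 A \left( \frac{1}{g_0 - \delta} - \frac{1}{g_0} \right).
```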
The final part of this thesis showcases a working prototype of a formal verification framework for MEMS-based hybrid systems. MEMS verification today is largely limited to simulation-based verification. Many techniques have been proposed for the formal verification of hybrid systems, and some of these methods are extended here to demonstrate how MEMS-based hybrid systems can be formally verified through extensions of conventional formal verification methods. An adaptive cruise control (ACC) system with a gyroscope-based speed sensor is analyzed and formally verified against various specifications of the system.