1. Electrical Communication Between Different Cell Types in the Colonic Musculature. Liu, Louis W.C., 09 1900
The major cell types in the canine colonic musculature are interstitial cells of Cajal (ICC), circular muscle (CM) cells and longitudinal muscle (LM) cells. In isolated muscle strip studies, spontaneous membrane potential oscillations (slow waves) are generated at the submucosal border of the circular muscle, where a well gap-junction-coupled network of ICC and CM is found. CM devoid of LM and of the submucosal pacemaker region (CM preparations) is spontaneously quiescent. The research undertaken was to understand the mechanism of slow wave propagation into the circular muscle and to investigate the consequences for the electrical activity of CM of coupling with the different electrical activities of different cell types. Our results show that CM cells, although spontaneously quiescent because of a high K+ conductance, are excitable and can actively participate in slow wave generation. The electrical oscillations induced in CM preparations could easily be potentiated by an L-type Ca2+ channel activator, Bay K 8644, and abolished by an L-type Ca2+ channel antagonist, D600, suggesting involvement of this conductance in the induced activity. The induced oscillations are similar to the SLAPs in the longitudinal muscle, which shows that specialized pacemaker cells are not necessary for generating SLAPs. Using a cross-sectioned preparation with all muscle layers intact, we also showed that the heterogeneity in the electrical activity of CM, such as the resting membrane potential gradient, the depolarized plateau potential at the myenteric border, and the "apparent" decay in slow wave amplitude, is due to electrical interactions between the different intrinsic activities of different cell types. Morphological evidence was obtained for the possible communication pathways in the submucosal and myenteric borders of the circular muscle. Different coupling mechanisms in different areas were hypothesized.
In addition, the 3-dimensional aspects of the submucosal ICC network in the canine colon were clarified. / Thesis / Master of Engineering (ME)
2. Dynamic Delay Compensation and Synchronisation Services for Continuous Media Streams. Shivaprasad, Mala A, 10 1900
The 'multimedia' nature of an application refers to the presence of several media streams in parallel. Whether real-time data is being received or stored data retrieved, there exists an end-to-end delay in data transfer from source to destination over the network. This delay can be split into a fixed part and a variable part. Data processing times, such as coding at the source and decoding at the destination, are the fixed delays; the variable delay arises mainly from queueing at intermediate nodes during the flow through the network. The variable, or unequal, delays introduce gaps or discontinuities within a stream. In multi-stream applications, where each stream may flow on a different route based on bandwidth availability and hence experience a different delay, mismatches between streams can also occur. These discontinuities and skews result in poor playout quality, as do clock drift and variations in drift rate between the source and destination clocks. To eliminate these skews and discontinuities, there must be mechanisms, viz. synchronisation services, to convey, reintroduce and maintain the temporal relationship between the media streams throughout the playout at the destination. The reintroduction of this lost temporal relationship, within a stream and between the various media streams presented at the destination, is the object of multimedia synchronisation and is the subject matter of this thesis.
In the presence of synchronised clocks, the main causes of asynchrony between media streams are the difference in the delays experienced and the jitter. In this work, to convey the temporal relationship between the streams of an application to the playout site, each stream is assigned a priority π based on its importance to the user. The media streams are then divided into synchronisation units called 'Groups' based on the characteristics of the stream with the highest priority π. A group may therefore consist of one video frame together with the data units of other streams generated in that interval, or of silence and talk-spurt periods of the voice stream together with the data units of other streams generated in the same interval.
Since the quality of playout of temporally related, delay-sensitive streams depends upon the delay experienced, the concept of QoS can be extended to describe the presentation requirements of such data. Depending on user perception and the delay experienced, an application can have a range of playout times giving the best performance. The presentation of many real-time applications can be considered satisfactory even when the delay bound is exceeded by a small amount for a short period under varying network conditions. This property can be exploited by defining two sets of QoS parameters, namely QoS-optimum and QoS-limit, for each real-time application. As the delay and its variations increase, the optimum playout time range decreases. QoS-optimum specifies the performance parameters required to perceive 'real time'. Under poor network conditions, multimedia data can be played out at its QoS-limit with a deterioration in quality while still maintaining the synchronisation between streams. To control the playout at these two levels of QoS, and to maintain intra-media and inter-media synchronisation, stream controllers and super stream controllers have been used.
The dynamic delay compensation algorithm and synchronisation services were simulated using network delay models, and their performance was studied. It is shown that the proposed algorithm not only synchronised the media streams and smoothed jitter but also optimised buffer space and buffer occupancy time while meeting the desired quality-of-service requirements.
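The two-level QoS playout control described above can be illustrated with a small sketch. The function, thresholds and numbers below are illustrative assumptions, not the thesis's actual algorithm: each data unit is scheduled at its generation time plus a playout delay that starts at the QoS-optimum value and is relaxed, never beyond the QoS limit, whenever a unit would otherwise miss its deadline.

```python
# Minimal sketch of a dynamic playout-delay compensator (illustrative only;
# names, thresholds and numbers are assumptions, not from the thesis).

def compensate(arrivals, gen_times, d_opt, d_limit):
    """Start at the QoS-optimum playout delay and relax toward the QoS limit
    only when an arriving unit would otherwise be late."""
    delay = d_opt
    playout = []
    for a, g in zip(arrivals, gen_times):
        if a > g + delay:                    # unit would miss its deadline
            delay = min(a - g, d_limit)      # degrade QoS, never past the limit
        playout.append(g + delay)            # scheduled presentation instant
    return playout, delay

# Example: units generated every 40 ms; the third suffers 55 ms network delay.
gen = [0, 40, 80, 120]
arr = [30, 75, 135, 150]
times, final_delay = compensate(arr, gen, d_opt=35, d_limit=60)
```

The compensator keeps presentation periodic (same delay for every unit of a group) and only re-bases the delay when jitter exceeds the current margin, which preserves inter-stream synchronisation.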
3. DSP Techniques for Performance Enhancement of Digital Hearing Aid. Udayashankara, V, 12 1900
Hearing impairment is the number one chronic disability affecting people in the world. Many people have great difficulty in understanding speech in background noise. This is especially true for a large number of elderly people and for sensorineurally impaired persons. Several investigations on speech intelligibility have demonstrated that subjects with sensorineural loss may need a 5-15 dB higher signal-to-noise ratio than normal-hearing subjects. While most defects in the transmission chain up to the cochlea can nowadays be successfully rehabilitated by means of surgery, the great majority of the remaining inoperable cases are sensorineural hearing impaired. Recent statistics of hearing impaired patients applying for a hearing aid reveal that 20% of the cases are due to conductive losses, more than 50% are due to sensorineural losses, and the remaining 30% are of mixed origin. Presenting speech to the hearing impaired in an intelligible form remains a major challenge in hearing-aid research today. Even though various methods have been suggested in the literature for the minimization of noise in contaminated speech signals, they fail to give good SNR and intelligibility improvement for moderate-to-severe sensorineural loss subjects. So far, the power and capability of Newton's method, of nonlinear adaptive filtering methods and of feedback-type artificial neural networks have not been exploited for this purpose. Hence we resort to the application of all these methods for improving SNR and intelligibility for sensorineural loss subjects. Digital hearing aids frequently employ the concept of filter banks. One of the major drawbacks of this technique is its computational complexity, requiring a large number of multiplications, which increases the power consumption.
Therefore this thesis presents a new approach to speech enhancement for the hearing impaired, and also the construction of a filter bank for a digital hearing aid with a minimum number of multiplications. The following are covered in this thesis.
One of the most important applications of adaptive systems is noise cancellation using adaptive filters. The ANC setup requires two input signals (viz., primary and reference). The primary input consists of the sum of the desired signal and a noise uncorrelated with it. The reference input consists of another noise which is correlated in some unknown way with the noise in the primary input. The primary signal is obtained by placing an omnidirectional microphone just above one ear on the head of a KEMAR manikin, and the reference signal by placing a hypercardioid microphone at the centre of the vertebral column on the back. Conventional speech enhancement techniques use linear schemes for enhancing speech signals. So far, nonlinear adaptive filtering techniques have not been used in hearing aid applications. The motivation behind a nonlinear model is that it gives better noise suppression than a linear model, because the medium through which signals reach the microphone may be highly nonlinear; the use of linear schemes, though motivated by computational simplicity and mathematical tractability, may therefore be suboptimal. Hence, we propose the use of nonlinear models to enhance speech signals for the hearing impaired. We propose both linear LMS and nonlinear second-order Volterra LMS schemes. Studies conducted for different environmental noises, including babble, cafeteria and low-frequency noise, show that the second-order Volterra LMS performs better than the linear LMS algorithm. We use measures such as signal-to-noise ratio (SNR), time plots, and intelligibility tests for performance comparison.
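A minimal, pure-Python sketch of the linear LMS noise canceller described above (the filter length, step size and toy signals are assumptions for illustration, not the thesis's settings). A second-order Volterra version would extend the tap vector with products of pairs of reference samples.

```python
import math
import random

def lms_anc(primary, reference, n_taps=4, mu=0.02):
    """Linear LMS adaptive noise canceller: the adaptive filter estimates the
    primary-channel noise from the reference; the error is the enhanced signal."""
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    enhanced = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                           # reference delay line
        y = sum(wi * xi for wi, xi in zip(w, buf))     # noise estimate
        e = d - y                                      # enhanced output sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, buf)]
        enhanced.append(e)
    return enhanced

# Toy check: sinusoid standing in for speech, correlated noise in the reference.
random.seed(0)
n = 2000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
speech = [math.sin(0.05 * k) for k in range(n)]
primary = [s + 0.8 * v for s, v in zip(speech, noise)]
enhanced = lms_anc(primary, noise)
# Residual error power after convergence (input noise power was 0.8**2 = 0.64).
residual = sum((e - s) ** 2 for e, s in zip(enhanced[-500:], speech[-500:])) / 500
```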
We also propose an ANC scheme which uses Newton's method to enhance speech signals. The main problem associated with LMS-based ANC is that its convergence is slow, and hence its performance becomes poor for hearing aid applications. The reason for choosing Newton's method is that such high-performance adaptive-filtering methods often converge and track faster than the LMS method. We propose two models to enhance speech signals: one is a conventional linear model and the other is a nonlinear model using a second-order Volterra function. Development of a Newton-type algorithm for the linear model results in the familiar recursive least squares (RLS) algorithm. The performance of both the linear and the nonlinear Newton's algorithm is evaluated for babble, cafeteria and low-frequency noise. SNR, time plots and intelligibility tests are used for performance comparison. The results show that Newton's method using the Volterra nonlinearity performs better than the RLS method.
In addition to the ANC-based schemes, we also develop speech enhancement for the hearing impaired using a feedback-type neural network (FBNN). The main reason is that here we have a parallel algorithm which can be implemented directly in hardware. We translate the speech enhancement problem into a neural network (NN) framework by forming an appropriate energy function. We propose both linear and nonlinear FBNNs for enhancing speech signals. Simulated studies on different environmental noises reveal that the FBNN using the Volterra nonlinearity is superior to the linear FBNN in enhancing speech signals. We use SNR, time plots, and intelligibility tests for performance comparison.
The design of an effective hearing aid for sensorineural hearing impaired people is a challenging problem. For persons with sensorineural losses it is necessary that the frequency response be optimally fitted into their residual auditory area. Digital filters enable hearing aid responses that are either difficult or impossible to realize using analog techniques. The major problem in a digital hearing aid is that of reducing power consumption, and multiplication is one of the most power-consuming operations in digital filtering. Hence a serious effort has been made to design a filter bank with a minimum number of multiplications, thereby minimizing the power consumption. This is achieved by using interpolated and complementary FIR filters. The method gives significant savings in the number of arithmetic operations.
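The complementary-filter idea mentioned above can be sketched in a few lines (an illustrative toy, not the thesis's actual filter bank): for an odd-length linear-phase lowpass H(z), the complementary highpass Hc(z) = z^(-D) - H(z), with D the group delay, is obtained by negating the taps and adding a delayed unit impulse, so the second band costs no extra multiplications.

```python
def complementary(h):
    """Complementary FIR of an odd-length linear-phase lowpass: Hc(z) = z^-D - H(z)."""
    n = len(h)
    assert n % 2 == 1, "odd length assumed so the delay D = (n - 1) / 2 is an integer"
    hc = [-x for x in h]
    hc[n // 2] += 1.0        # add the delayed unit impulse z^-((n-1)/2)
    return hc

h = [0.25, 0.5, 0.25]        # toy lowpass with DC gain 1
hc = complementary(h)        # complementary highpass: DC gain 0, Nyquist gain 1
```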
The thesis is concluded by summarizing the results of the analysis and suggesting the scope for further investigation.
4. Resource Allocation in Femtocells via Game Theory. Sankar, V Udaya, January 2015
Most cellular traffic (voice and data) is generated indoors. Due to attenuation from walls, the quality of service (QoS) of different applications degrades for indoor traffic. Thus, in order to provide QoS for such users, the Macro base station (MBS) has to transmit at high power. This increases recurring costs to the service provider and contributes to greenhouse-gas emissions. Hence, Femtocells (FC) are considered as an option. Femto Access Points (FAP) are low-cost, low-powered, small base stations deployed indoors by customers. A substantial part of the indoor traffic is diverted from the Macrocell (MC) through the FAP. Since the FCs use the same channels as the MC, deployment of FCs causes interference not only to neighbouring FCs but also to the users in the MC. Thus, we need better interference management techniques for this system.
In this thesis, we consider a system with multiple Femtocells operating in a Macrocell. The FCs and the MC use the same set of multiple channels and support multiple users. Each user may have a minimum rate requirement. To limit interference to the MC, there is a peak power constraint on each channel.
In the first part of the thesis, we consider sparsely deployed FCs, where the interference between the FCs is negligible. For this case we formulate the problem of channel allocation and power control in each FC. We develop computationally efficient, suboptimal algorithms to satisfy the QoS of each user in the FC. If the QoS of every user cannot be satisfied, we provide solutions which are fair to all the users.
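One simple suboptimal scheme of this kind can be sketched as follows (a hedged illustration; the thesis's actual algorithms may differ): each channel is greedily assigned, at the peak allowed power, to the user whose unmet rate requirement weighted by that user's channel gain is largest, with per-channel rates given by the Shannon formula.

```python
import math

def greedy_allocate(gains, rate_req, p_max):
    """Greedy channel assignment in one FC: give each channel (used at peak
    power p_max) to the user with the largest gain-weighted rate deficit."""
    rates = [0.0] * len(rate_req)
    assign = []
    for g in gains:                                   # g[user] = gain on channel
        deficits = [max(req - r, 0.0) for req, r in zip(rate_req, rates)]
        user = max(range(len(rate_req)), key=lambda u: deficits[u] * g[u])
        rates[user] += math.log2(1 + p_max * g[user])  # Shannon rate, bits/s/Hz
        assign.append(user)
    return assign, rates

# Toy example (assumed numbers): 3 channels, 2 users, 2 bits/s/Hz required each.
gains = [[1.0, 0.2], [0.3, 1.5], [0.5, 0.5]]
assign, rates = greedy_allocate(gains, rate_req=[2.0, 2.0], p_max=3.0)
```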
In the second part of the thesis, we consider the case of densely deployed FCs, where we formulate the problem of channel allocation and power control in each Femtocell as a noncooperative game. We develop efficient decentralized algorithms to obtain a Nash equilibrium (NE) at which the QoS of each user is satisfied. We also obtain efficient decentralized algorithms to obtain a fair NE when it may not be feasible to satisfy the QoS of all the users in the FC. Finally, we extend our algorithms to the case where there may be both voice and data users in the system.
In the third part of the thesis, we continue to study the problem setup of the second part, and develop algorithms which can simultaneously handle the cases where the QoS of the users can or cannot be satisfied. We provide algorithms to compute coarse correlated equilibria (CCE), Pareto optimal points and Nash bargaining solutions.
In the final part of the thesis, we consider an interference limit at the MBS and model the FCs as selfish nodes. The MBS protects itself by pricing sub-channels per usage. We obtain a Stackelberg equilibrium (SE) by considering the MBS as a leader and the FCs as followers.
5. Modelling and Analysis of Event Message Flows in Distributed Discrete Event Simulators of Queueing Networks. Shorey, Rajeev, 12 1900
Distributed Discrete Event Simulation (DDES) has received much attention in recent years, owing to the fact that uniprocessor-based serial simulations may require excessive amounts of simulation time and computational resources. It is therefore natural to attempt to use multiple processors to exploit the inherent parallelism in discrete event simulations in order to speed up the simulation process.
In this dissertation we study the performance of distributed simulation of queueing networks, by analysing queueing models of message flows in distributed discrete event simulators. Most of the prior work in distributed discrete event simulation can be categorized as either empirical studies or analytic (or formal) models. In the empirical studies, specific experiments are run on both conservative and optimistic simulators to see which strategy results in a faster simulation. There has also been increasing activity in analytic models either to better understand a single strategy or to compare two strategies. Little attention seems to have been paid to the behaviour of the interprocessor message queues in distributed discrete event simulators.
To begin with, we study how to model distributed simulators of queueing networks. We view each logical process in a distributed simulation as comprising a message sequencer with associated message queues, followed by an event processor. A major contribution of this dissertation is the introduction of the maximum lookahead sequencing protocol. In maximum lookahead sequencing, the sequencer knows the time-stamp of the next message to arrive at an empty queue. Maximum lookahead is an unachievable algorithm, but is expected to yield a throughput at least as high as that of any realisable sequencing technique. The analysis of maximum lookahead, therefore, should lead to fundamental limits on the performance of any sequencing algorithm.
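The sequencer's task can be pictured with a toy model (names and data below are illustrative assumptions): each logical process feeds a time-stamped message queue, and the sequencer must emit messages in global time-stamp order. Maximum lookahead amounts to assuming the sequencer always knows the smallest next time stamp across all inputs, even empty ones, which is exactly what an offline merge of the full streams provides.

```python
import heapq

# Time-stamped message streams from two logical processes (assumed toy data).
streams = [
    [(1, "a1"), (4, "a2"), (9, "a3")],   # (time stamp, payload) from LP A
    [(2, "b1"), (3, "b2"), (8, "b3")],   # from LP B
]

# Under maximum lookahead the sequencer never blocks on an empty queue whose
# next time stamp is known to be larger; merging the sorted streams models this.
merged = list(heapq.merge(*streams))
```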
We show that, for feed-forward type simulators, with standard stochastic assumptions for the message arrival and time-stamp processes, the message queues are unstable for conservative sequencing, for conservative sequencing with maximum lookahead (and hence for optimistic resequencing), and for any resequencing algorithm that does not employ interprocessor "flow control". It follows that the resequencing problem is fundamentally unstable, and some form of interprocessor flow control is necessary in order to make the message queues stable (without message loss). We obtain some generalizations of the instability results to time-stamped message arrival processes with certain ergodicity properties.
For feedforward type distributed simulators, we study the throughput of the event sequencer without any interprocessor flow control. We then incorporate flow control and study the throughput of the event sequencer. We analyse various flow control mechanisms. For example, we can bound the buffers of the message queues, or various logical processes can be prevented from getting too far apart in virtual time by means of a mechanism like Moving Time Windows or Bounded Lag. While such mechanisms will serve to stabilize buffers, our approach, of modelling and analysing the message flow processes in the simulator, points towards certain fundamental limits of efficiency of distributed simulation, imposed by the synchronization mechanism.
Next we turn to the distributed simulation of more general queueing networks. We find an upper bound to the throughput of distributed simulators of open and closed queueing networks. The upper bound is derived by using flow balance relations in the queueing network and in the simulator, processing speed constraints, and synchronization constraints in the simulator. The upper bound is in terms of parameters of the queueing network, the simulator processor speeds, and the way the queueing network is partitioned or mapped over the simulator processors. We consider the problem of choosing a mapping that maximizes the upper bound. We then study good solutions of this problem as possible heuristics for the problem of partitioning the queueing network over the simulator processors. We also derive a lower bound to the throughput of the distributed simulator for a simple queueing network with feedback.
We then study various properties of the maximum lookahead algorithm. We show that the maximum lookahead algorithm does not deadlock. Further, since there are no synchronization overheads, maximum lookahead is a simple algorithm to study. We prove that maximum lookahead sequencing (though unrealisable) yields a throughput at least as high as that of any realisable sequencing technique. These properties make maximum lookahead a very useful algorithm in the study of distributed simulators of queueing networks.
To investigate the efficacy of the partitioning heuristic, we perform a study of queueing network simulators. Since it is important to study the benefits of distributed simulation, we characterise the speedup in distributed simulation, and find an upper bound to the speedup for a given mapping of the queues to the simulator processors. We simulate distributed simulation with maximum lookahead sequencing, with various mappings of the queues to the processors. We also present throughput results for the same mappings but using a distributed simulation with the optimistic sequencing algorithm. We present a number of sufficiently complex examples of queueing networks, and compare the throughputs obtained from simulations with the upper bounds obtained analytically.
Finally, we study message flow processes in distributed simulators of open queueing networks with feedback. We develop and study queueing models for distributed simulators with maximum lookahead sequencing. We characterize the "external" arrival process, and the message feedback process in the simulator of a simple queueing network with feedback. We show that a certain "natural" modelling construct for the arrival process is exactly correct, whereas an "obvious" model for the feedback process is wrong; we then show how to develop the correct model. Our analysis throws light on the stability of distributed simulators of queueing networks with feedback. We show how the stability of such simulators depends on the parameters of the queueing network.
6. Speech Encryption Using Wavelet Packets. Bopardikar, Ajit S, 02 1900
The aim of speech scrambling algorithms is to transform clear speech into an unintelligible signal so that it is difficult to decrypt it in the absence of the key.
Most of the existing speech scrambling algorithms tend to retain considerable residual intelligibility in the scrambled speech and are easy to break. Typically, a speech scrambling algorithm involves permutation of speech segments in time, frequency or time-frequency domain or permutation of transform coefficients of each speech block. The time-frequency algorithms have given very low residual intelligibility and have attracted much attention.
We first study the uniform filter bank based time-frequency scrambling algorithm with respect to the block length and number of channels. We use objective distance measures to estimate the departure of the scrambled speech from the clear speech. Simulations indicate that the distance measures increase as we increase the block length and the number of channels. This algorithm derives its security only from the time-frequency segment permutation and it has been estimated that the effective number of permutations which give a low residual intelligibility is much less than the total number of possible permutations.
In order to increase the effective number of permutations, we propose a time-frequency scrambling algorithm based on wavelet packets. By using different wavelet packet filter banks at the analysis and synthesis end, we add an extra level of security since the eavesdropper has to choose the correct analysis filter bank, correctly rearrange the time-frequency segments, and choose the correct synthesis bank to get back the original speech signal. Simulations performed with this algorithm give distance measures comparable to those obtained for the uniform filter bank based algorithm.
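The segment-permutation core common to these algorithms can be sketched as follows (an illustrative model only; in the thesis the segments are wavelet-packet time-frequency segments, and the key additionally selects the analysis and synthesis filter banks):

```python
import random

def scramble(segments, key):
    """Permute time-frequency segments with a key-seeded permutation."""
    rng = random.Random(key)              # the key seeds the permutation
    perm = list(range(len(segments)))
    rng.shuffle(perm)
    return [segments[i] for i in perm], perm

def descramble(scrambled, perm):
    """Invert the permutation to recover the clear segment order."""
    out = [None] * len(scrambled)
    for dst, src in enumerate(perm):
        out[src] = scrambled[dst]
    return out

segments = ["s0", "s1", "s2", "s3", "s4", "s5"]   # placeholder segments
mixed, perm = scramble(segments, key=42)
recovered = descramble(mixed, perm)
```

Without the key (and, in the wavelet-packet scheme, without the correct filter banks), an eavesdropper must search the permutation space to recover the clear order.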
Finally, we introduce the 2-channel perfect reconstruction circular convolution filter bank and give a simple method for its design. The filters designed using this method satisfy the paraunitary properties on a discrete equispaced set of points in the frequency domain.
7. Synthesis of Arbitrary Antenna Arrays. Nagesh, S R, 04 1900
The design of antenna arrays for present-day requirements has to take into account both mechanical and electrical aspects. Mechanical aspects demand that the antennas have low-profile, non-protruding structures, structures compatible with aerodynamic requirements, and so on. Electrical aspects may introduce several constraints, either for technical reasons or due to realisability conditions in practice. Thus, arrays for modern requirements may not fall into the category of linear or planar arrays. Further, due to the nearby environment, the elements will generate complicated individual patterns. These issues necessitate the analysis and synthesis of antenna arrays which are arbitrary as far as the orientation, position or element pattern are concerned. Such arrays, which may be called arbitrary arrays, are investigated in this thesis. These investigations are discussed under the different aspects indicated below:
Radiation Characteristics of Arbitrary Arrays
Radiation fields of an arbitrarily oriented dipole are obtained, and such fields are plotted for typical cases. Further, methods for transforming the electromagnetic fields are discussed. Having obtained the field due to an arbitrary element, the fields due to an arbitrary array are obtained. Factors controlling the radiation fields, such as the curvature of the array and the element pattern, are investigated. Radiation patterns of circular and cylindrical arrays are plotted.
Synthesis of a Side Lobe Topography
Requirements of a narrow-beam pattern generated by an antenna array are identified, and the problem of synthesizing such a pattern using an arbitrary array is formulated. The envelope of the side lobe region, which may be called the side lobe topography (SLT), is included in the computation of the covariance matrix. The problem, formulated as the minimization of a quadratic function subject to a system of linear constraints, is solved by the method of Lagrangian multipliers. An iterative procedure is used to satisfy all the requirements of the pattern synthesis. The procedure has been validated by synthesizing linear arrays, and is used to synthesize circular and parabolic arrays. Patterns with tapered and Taylor-like SLTs have been synthesized, and asymmetric patterns are also synthesized. The role of the SLT is brought out.
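The constrained quadratic minimization at the heart of this synthesis has a closed form via Lagrange multipliers: minimizing x'Rx subject to c'x = f gives x* = f R^(-1)c / (c'R^(-1)c). A toy sketch with R equal to the identity follows (the thesis uses a covariance matrix that incorporates the side lobe topography, and a system of several constraints):

```python
# Lagrange-multiplier solution of: minimize x'Rx subject to c'x = f, with R = I.
# Then x* = f * c / (c'c), and the attained minimum is f**2 / (c'c).

def constrained_min(c, f):
    denom = sum(ci * ci for ci in c)        # c' R^-1 c with R = I
    return [f * ci / denom for ci in c]     # minimizer x*

c = [1.0, 2.0, 2.0]                          # toy constraint vector (assumed)
x = constrained_min(c, f=1.0)
```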
Shaped Beam Synthesis
Synthesis of shaped broad beams is discussed. Amplitude constraints are formulated, and the phase distribution is linked to the phase centre. The quadratic problems thus formulated are solved by the Lagrangian method of undetermined multipliers. An iterative procedure is used to synthesize flat-topped beams as well as cosecant-squared patterns using linear as well as circular arrays. A reasonable excitation dynamic range has been obtained. Optimum phase centres obtained by trial and error are made use of.
Effects of the Frequency and Excitation on the Synthesized Patterns
In general, a synthesized pattern can be sensitive to specific parameters such as the excitation or the frequency, and several methods can be used to observe these effects. In this thesis, they are studied as follows. Using a specific array configuration synthesizing a specified radiation pattern, the frequency is changed by 10% from the design frequency and the pattern is computed. Similarly, the excitation phase distribution is rounded to the nearest phase distribution available from a digital phase shifter (say, 8-bit) and the resulting pattern is computed. Further, the excitation dynamic range is controlled by boosting the amplitudes of the array elements which fall below the permissible level (i.e. the maximum excitation divided by the allowed dynamic range), and the effects of these variations are recorded. It appears that reasonable patterns can be obtained in most cases, in spite of significant variations in these parameters.
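Rounding an excitation phase to the nearest level of a B-bit digital phase shifter is a one-line operation (an illustrative sketch; the thesis applies this to the full excitation vector and then recomputes the pattern):

```python
import math

def quantize_phase(phi, bits=8):
    """Round a phase (radians) to the nearest level of a bits-bit phase shifter."""
    step = 2 * math.pi / 2 ** bits     # 8 bits gives about 1.4 degree resolution
    return round(phi / step) * step

phi = 1.0
q = quantize_phase(phi)                # quantization error is at most step / 2
```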
Reconfigurable Arbitrary Arrays
It would be very useful if a single array configuration could be used for different applications, either for the different phases of a single application or for different applications required at different times. Attempts are made to synthesize a variety of patterns from a single array. Such arrays, which may be called reconfigurable arrays, can be of much use. Obviously, the excitations are different for the different patterns. Both narrow beams and shaped broad beams, with different side lobe topographies, have been synthesized using a single array.
8. Design, Fabrication and Characterization of Low Voltage Capacitive RF MEMS Switches. Shekhar, Sudhanshu, January 2015
This dissertation presents the design, fabrication, and characterization of low-voltage capacitive RF MEMS switches. Although RF MEMS switches have shown superior performance compared to existing solid-state semiconductor switches and are a viable alternative for present and future communication systems, they have not been able to match commercial standards because of their poor reliability. Dielectric charging due to high actuation voltage is one of the major concerns that limit the reliability of these switches. Hence, the focus of this thesis is on the development of low-actuation-voltage RF MEMS switches without compromising much on their RF and dynamic performance, i.e., low insertion loss and high isolation. Four different switch topologies are studied and discussed. Electromechanical and electromagnetic modelling is presented to study the effect of the various components that comprise a MEMS switch on the transient and RF behaviour. Analytical expressions are established in order to estimate the switching and release times.
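For context on the electromechanical modelling, the standard parallel-plate electrostatic model predicts the pull-in voltage from the spring constant k, the initial gap g0 and the electrode area A. This is a textbook sketch with assumed numbers, not the thesis's exact design values or expressions:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def pull_in_voltage(k, g0, area):
    """Textbook parallel-plate pull-in voltage: V_pi = sqrt(8*k*g0^3 / (27*eps0*A))."""
    return math.sqrt(8 * k * g0 ** 3 / (27 * EPS0 * area))

# Illustrative values (assumptions) for a compliant gold membrane switch.
k = 2.0                      # effective spring constant, N/m
g0 = 2.0e-6                  # initial air gap, m
area = 100e-6 * 100e-6       # actuation electrode area, m^2
v_pi = pull_in_voltage(k, g0, area)   # about 7.3 V, inside the 4.5-8.5 V range
```

Lowering k or g0 reduces the pull-in voltage, which is the design lever for low-voltage switches, at the cost of slower switching and weaker restoring force.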
An in-house developed surface micromachining process is adapted for the microfabrication. This process eliminates the need for an extra mask for the anchors and restricts the overall process to four masks only. The switches are fabricated on a 500 µm thick glass substrate, and a 0.5 µm thick gold film is used as the structural material. For the final release of the switch, a chemical wet etching technique is employed.
The fabricated MEMS switches are characterized mechanically and electrically by measuring the mechanical resonant frequency, quality factor, and pull-in and pull-up voltages. Since low-actuation-voltage switches have slow response times, one of the key objectives of this thesis is to realize switches with fast response at low actuation voltage. Measurements are performed to estimate the switching and release times. The measured Q-factors of the switches are found to be between 1.1 and 1.4, which is the recommended range for Q in MEMS switches for suppressed oscillation after release. Furthermore, the effect of hole size on the switching dynamics is addressed. RF measurements of the S-parameters are carried out to quantify the RF performance.
The measured results demonstrate that these switches need a low actuation voltage, in the range of 4.5 V to 8.5 V. A measured insertion loss of less than 0.8 dB and an isolation better than 30 dB up to 40 GHz are reported.
In addition, the robustness of the realized switches is tested using an in-house developed LabVIEW-based automated measurement set-up. The reliability test analysis shows no degradation in the RF performance even after 10 million switching cycles. An overall yield of 70-80% is estimated in the present work. Finally, the experimentally measured results presented in this work demonstrate the successful development of low-actuation-voltage capacitive RF MEMS switches, and show that good reliability can be achieved even with a 0.5 µm thick gold film.
9. Optimal Mechanisms for Selling Two Heterogeneous Items. Thirumulanathan, D, January 2017
We consider the problem of designing revenue-optimal mechanisms for selling two heterogeneous items to a single buyer. Designing a revenue-optimal mechanism for selling a single item is simple: set a threshold price based on the distribution, and sell the item only when the buyer's valuation exceeds the threshold. However, designing a revenue-optimal mechanism to sell two heterogeneous items is a harder problem; even the simplest setting with two items and one buyer remains unsolved as yet. The partial characterizations available in the literature have succeeded in solving the problem largely for distributions that are bordered by the coordinate axes. We consider distributions that do not contain (0, 0) in their support sets. Specifically, we consider the buyer's valuations to be distributed uniformly over arbitrary rectangles in the positive quadrant. We anticipate that the special cases we solve could be a guideline to understand the methods to solve the general problem. We explore two different methods, the duality method and the virtual valuation method, and apply them to solve the problem for distributions that are not bordered by the coordinate axes. The thesis consists of two parts.
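The single-item benchmark quoted above is easy to make concrete. For a valuation uniform on [c, c+b], the posted-price revenue is p(c+b-p)/b for p in the support, maximized at p* = max(c, (c+b)/2). The sketch below works this standard calculation; it also shows the flavour of supports away from the origin, the thesis's setting, where the optimum is to sell at the lowest valuation with probability one.

```python
# Posted-price (threshold) selling of one item to a buyer with valuation
# uniform on [c, c + b]: revenue R(p) = p * Pr[valuation >= p].

def revenue(p, c, b):
    """Expected revenue of posting price p against a U[c, c+b] valuation."""
    return p * max(0.0, min(1.0, (c + b - p) / b))

def optimal_price(c, b):
    """R(p) = p*(c+b-p)/b is concave with unconstrained peak (c+b)/2, so the
    optimum over the support [c, c+b] is max(c, (c+b)/2)."""
    return max(c, (c + b) / 2)

p1 = optimal_price(0, 1)   # classic U[0, 1] case: p* = 1/2, revenue 1/4
p2 = optimal_price(2, 1)   # U[2, 3], support away from the axes: p* = 2, sale w.p. 1
```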
In the first part, we consider the problem when the buyer has no demand constraints. We assume the buyer's valuations to be uniformly distributed over an arbitrary rectangle [c1, c1 + b1] × [c2, c2 + b2] in the positive quadrant. We first study the duality approach that solves the problem for the (c1, c2) = (0, 0) case. We then nontrivially extend this approach to provide an explicit solution for arbitrary nonnegative values of (c1, c2, b1, b2). We prove that the optimal mechanism is to sell the two items according to one of eight simple menus. The menus indicate that the items must be sold individually for certain values of (c1, c2), must be bundled for certain other values, and that the auction is an interplay of individual sale and bundled sale for the remaining values of (c1, c2). We conjecture that our method can be extended to a wider class of distributions, and provide some preliminary results to support the conjecture.
In the second part, we consider the problem when the buyer has a unit-demand constraint. We assume the buyer's valuations (z1, z2) to be uniformly distributed over an arbitrary rectangle [c, c + b1] × [c, c + b2] in the positive quadrant, having its south-west corner on the line z1 = z2. We first show that the structure of the dual measure varies significantly with (c, b1, b2), which makes it hard to discover the correct dual measure, and hence to compute the solution. We then nontrivially extend the virtual valuation method to provide a complete, explicit solution for the problem considered. In particular, we prove that the optimal mechanism is structured into five simple menus. We then conjecture, with promising preliminary results, that the optimal mechanism when the valuations are uniformly distributed over an arbitrary rectangle [c1, c1 + b1] × [c2, c2 + b2] is also structured according to similar menus.
|
10 |
Secret Key Generation in the Multiterminal Source Model : Communication and Other AspectsMukherjee, Manuj January 2017 (has links) (PDF)
This dissertation is primarily concerned with the communication required to achieve secret key (SK) capacity in a multiterminal source model. The multiterminal source model introduced by Csiszár and Narayan consists of a group of remotely located terminals with access to correlated sources and a noiseless public channel. The terminals wish to secure their communication by agreeing upon a group secret key. The key agreement protocol involves communicating over the public channel, and agreeing upon an SK secured from eavesdroppers listening to the public communication. The SK capacity, i.e., the maximum rate of an SK that can be agreed upon by the terminals, has been characterized by Csiszár and Narayan. Their capacity-achieving key generation protocol involved terminals communicating to attain omniscience, i.e., every terminal gets to recover the sources of the other terminals. While this is a very general protocol, it often requires larger rates of public communication than is necessary to achieve SK capacity.
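For the special case of two terminals with no eavesdropper side information, the SK capacity is known to equal the mutual information I(X;Y) between the two sources; a toy sketch computing it for a pair of correlated bits (the joint distribution is illustrative, not from the dissertation):

```python
import math

def mutual_information_bits(p_xy):
    """I(X;Y) in bits, from a joint pmf given as a nested list p_xy[x][y]."""
    px = [sum(row) for row in p_xy]          # marginal of X
    py = [sum(col) for col in zip(*p_xy)]    # marginal of Y
    return sum(
        p * math.log2(p / (px[i] * py[j]))
        for i, row in enumerate(p_xy)
        for j, p in enumerate(row)
        if p > 0
    )

# Toy correlated bits: X uniform, Y disagrees with X with probability 0.1,
# so I(X;Y) = 1 - h(0.1) ~ 0.531 bits per source symbol
p_xy = [[0.45, 0.05], [0.05, 0.45]]
sk_capacity_two_terminal = mutual_information_bits(p_xy)
```

The multiterminal capacity characterized by Csiszár and Narayan generalizes this quantity, and the question studied here is how little public discussion is needed to attain it.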
The primary focus of this dissertation is to characterize the communication complexity, i.e., the minimum rate of public discussion needed to achieve SK capacity. A lower bound to communication complexity is derived for a general multiterminal source, although it turns out to be loose in general. While the minimum rate of communication for omniscience is always an upper bound to the communication complexity, we derive tighter upper bounds to communication complexity for a special class of multiterminal sources, namely, the hypergraphical sources. These upper bounds yield a complete characterization of the hypergraphical sources for which communication for omniscience is a rate-optimal protocol for SK generation, i.e., for which the communication complexity equals the minimum rate of communication for omniscience.
Another aspect of the public communication touched upon by this dissertation is the necessity of omnivocality, i.e., all terminals communicating, to achieve the SK capacity. It is well known that for two-terminal sources, communication from only one terminal suffices to generate a maximum-rate secret key. However, we show that for three or more terminals, omnivocality is indeed required to achieve SK capacity if a certain condition is met. For the specific case of three terminals, we show that this condition is also necessary for omnivocality to be essential in generating an SK of maximal rate. However, this condition is no longer necessary when there are four or more terminals.
A certain notion of common information, namely, the Wyner common information, plays a central role in the communication complexity problem. This dissertation thus includes a study of multiparty versions of the two widely used notions of common information, namely, Wyner common information and Gács-Körner (GK) common information. While evaluating these quantities is difficult in general, we are able to derive explicit expressions for both types of common information in the case of hypergraphical sources.
We also study fault-tolerant SK capacity in this dissertation. The maximum rate of SK that can be generated even if an arbitrary subset of terminals drops out is called the fault-tolerant SK capacity. Now, suppose we have a fixed number of pairwise SKs. How should one distribute them among pairs of terminals to ensure good fault-tolerance behavior in generating a group SK? We show that distributing the pairwise keys according to a Harary graph provides a certain degree of fault tolerance, and bounds are obtained on its fault-tolerant SK capacity.
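The Harary graph H_{k,n} mentioned above achieves k-connectivity with the fewest possible edges, which makes it a natural way to spread a fixed budget of pairwise keys across terminals; a minimal sketch of the circulant construction for even k (illustrative code, not the dissertation's):

```python
from collections import Counter

def harary_graph_edges(k, n):
    """Edge list of the Harary graph H_{k,n} for even k < n.

    Each node i on a cycle is joined to its k/2 nearest neighbours on
    each side, giving a k-connected graph with the minimum nk/2 edges.
    """
    assert k % 2 == 0 and 0 < k < n
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            edges.add(tuple(sorted((i, (i + d) % n))))
    return sorted(edges)

# 6 terminals, each holding 4 pairwise keys: 6*4/2 = 12 keys in total
edges = harary_graph_edges(4, 6)
degrees = Counter(v for e in edges for v in e)
```

Each edge here stands for one pairwise SK; k-connectivity is what lets the surviving terminals still agree on a group key after drop-outs.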
|